journal-title | pmid | pmc | doi | article-title | abstract | related-work | references | reference_info |
---|---|---|---|---|---|---|---|---|
Scientific Reports | 31534156 | PMC6751173 | 10.1038/s41598-019-49967-4 | Multi-view based integrative analysis of gene expression data for identifying biomarkers | The widespread application of microarray technology has produced a vast quantity of publicly available gene expression datasets. However, analysis of gene expression data using biostatistics and machine learning approaches is a challenging task due to (1) high noise; (2) small sample size with high dimensionality; (3) batch effects; and (4) low reproducibility of significant biomarkers. These issues reveal the complexity of gene expression data and significantly obstruct the clinical application of microarray technology. Integrative analysis offers an opportunity to address these issues and provides a more comprehensive understanding of biological systems, but current methods have several limitations. This work leverages state-of-the-art machine learning developments for the integration of multiple gene expression datasets, classification and identification of significant biomarkers. We design a novel integrative framework, MVIAm - Multi-View based Integrative Analysis of microarray data for identifying biomarkers. It applies multiple cross-platform normalization methods to aggregate multiple datasets into a multi-view dataset and utilizes a robust learning mechanism, Multi-View Self-Paced Learning (MVSPL), for gene selection in cancer classification problems. We demonstrate the capabilities of MVIAm using simulated data and studies of breast cancer and lung cancer; it can be applied flexibly and is an effective tool for facing the four challenges of gene expression data analysis. Our proposed model makes microarray integrative analysis more systematic and expands its range of applications. | Related work. Self-paced learning (SPL). The self-paced learning model combines a weighted loss term over all samples with a general self-paced regularizer imposed on the sample weights. Suppose we are given a dataset D = {(X1, y1), (X2, y2), …, (Xn, yn)}, where Xi = (xi1, xi2, …, xip) is the i-th input sample with p features and yi is the class label of the i-th sample (e.g. yi ∈ {0, 1}). Let L(yi, f(xi, β)) denote the loss function, which measures the discrepancy between the true label yi and the estimated value f(xi, β), where β is the parameter of the decision function f(xi, β). The goal of SPL is to jointly learn the model parameter β and the latent weight variable v = [v1, v2, …, vn] by minimizing:
$$\min_{\beta,\,v\in[0,1]^{n}} E(\beta,v;\lambda,\gamma)=\sum_{i=1}^{n}v_{i}\,L(y_{i},f(x_{i},\beta))-\gamma\sum_{i=1}^{n}v_{i}+\lambda\lVert\beta\rVert_{1}\qquad(7)$$
where γ is the age parameter controlling the learning pace and λ is a tuning parameter. The SPL problem can be solved effectively by an alternating optimization strategy. When β is fixed, the optimal weight variable $v^{\ast}=[v_{1}^{\ast},v_{2}^{\ast},\ldots,v_{n}^{\ast}]$ is given by:
$$v_{i}^{\ast}=\begin{cases}1, & L(y_{i},f(x_{i},\beta))<\gamma\\ 0, & \text{otherwise}\end{cases}\qquad(8)$$
By jointly updating the model parameter β and the latent weight variable v, we can conclude the following: (1) when updating v with β fixed, a sample whose loss is smaller than the age parameter γ is treated as an easy sample and receives $v_{i}^{\ast}=1$; otherwise $v_{i}^{\ast}=0$. (2) When updating β with v fixed, only the selected samples ($v_{i}^{\ast}=1$) are used to train the classifier. (3) Before the next iteration, the age parameter γ is increased to adjust the learning pace: when γ is small, only easy samples with small losses are selected; as γ increases, more samples with larger losses are gradually included to train a more "mature" model. By iteratively learning β and v while gradually increasing the age parameter, samples are automatically selected into training from easy to complex in a self-paced way. (A minimal illustrative sketch of this alternating update is given below, after this record.) | [
"23193258",
"12047881",
"22133085",
"23777239",
"15846360",
"20838408",
"18588682",
"22262733",
"15184677",
"12855442",
"24359104",
"19118496",
"25196635",
"25829177",
"27600230",
"16632515",
"18325927",
"21386892",
"12925520",
"15461798",
"23216969",
"19447787",
"26495380",
"23550210",
"26033031",
"24476821",
"26418898",
"28540450",
"17344846",
"19607727"
] | [
{
"pmid": "23193258",
"title": "NCBI GEO: archive for functional genomics data sets--update.",
"abstract": "The Gene Expression Omnibus (GEO, http://www.ncbi.nlm.nih.gov/geo/) is an international public repository for high-throughput microarray and next-generation sequence functional genomic data sets submitted by the research community. The resource supports archiving of raw data, processed data and metadata which are indexed, cross-linked and searchable. All data are freely available for download in a variety of formats. GEO also provides several web-based tools and strategies to assist users to query, analyse and visualize data. This article reports current status and recent database developments, including the release of GEO2R, an R-based web application that helps users analyse GEO data."
},
{
"pmid": "12047881",
"title": "Statistical intelligence: effective analysis of high-density microarray data.",
"abstract": "Microarrays enable researchers to interrogate thousands of genes simultaneously. A crucial step in data analysis is the selection of subsets of interesting genes from the initial set of genes. In many cases, especially when comparing genes expressed in a specific condition to a reference condition, the genes of interest are those which are differentially regulated. This review focuses on the methods currently available for the selection of such genes. Fold change, unusual ratio, univariate testing with correction for multiple experiments, ANOVA and noise sampling methods are reviewed and compared."
},
{
"pmid": "22133085",
"title": "Relative impact of key sources of systematic noise in Affymetrix and Illumina gene-expression microarray experiments.",
"abstract": "BACKGROUND\nSystematic processing noise, which includes batch effects, is very common in microarray experiments but is often ignored despite its potential to confound or compromise experimental results. Compromised results are most likely when re-analysing or integrating datasets from public repositories due to the different conditions under which each dataset is generated. To better understand the relative noise-contributions of various factors in experimental-design, we assessed several Illumina and Affymetrix datasets for technical variation between replicate hybridisations of Universal Human Reference (UHRR) and individual or pooled breast-tumour RNA.\n\n\nRESULTS\nA varying degree of systematic noise was observed in each of the datasets, however in all cases the relative amount of variation between standard control RNA replicates was found to be greatest at earlier points in the sample-preparation workflow. For example, 40.6% of the total variation in reported expressions were attributed to replicate extractions, compared to 13.9% due to amplification/labelling and 10.8% between replicate hybridisations. Deliberate probe-wise batch-correction methods were effective in reducing the magnitude of this variation, although the level of improvement was dependent on the sources of noise included in the model. Systematic noise introduced at the chip, run, and experiment levels of a combined Illumina dataset were found to be highly dependent upon the experimental design. Both UHRR and pools of RNA, which were derived from the samples of interest, modelled technical variation well although the pools were significantly better correlated (4% average improvement) and better emulated the effects of systematic noise, over all probes, than the UHRRs. The effect of this noise was not uniform over all probes, with low GC-content probes found to be more vulnerable to batch variation than probes with a higher GC-content.\n\n\nCONCLUSIONS\nThe magnitude of systematic processing noise in a microarray experiment is variable across probes and experiments, however it is generally the case that procedures earlier in the sample-preparation workflow are liable to introduce the most noise. Careful experimental design is important to protect against noise, detailed meta-data should always be provided, and diagnostic procedures should be routinely performed prior to downstream analyses for the detection of bias in microarray studies."
},
{
"pmid": "23777239",
"title": "Sparse logistic regression with a L1/2 penalty for gene selection in cancer classification.",
"abstract": "BACKGROUND\nMicroarray technology is widely used in cancer diagnosis. Successfully identifying gene biomarkers will significantly help to classify different cancer types and improve the prediction accuracy. The regularization approach is one of the effective methods for gene selection in microarray data, which generally contain a large number of genes and have a small number of samples. In recent years, various approaches have been developed for gene selection of microarray data. Generally, they are divided into three categories: filter, wrapper and embedded methods. Regularization methods are an important embedded technique and perform both continuous shrinkage and automatic gene selection simultaneously. Recently, there is growing interest in applying the regularization techniques in gene selection. The popular regularization technique is Lasso (L1), and many L1 type regularization terms have been proposed in the recent years. Theoretically, the Lq type regularization with the lower value of q would lead to better solutions with more sparsity. Moreover, the L1/2 regularization can be taken as a representative of Lq (0 <q < 1) regularizations and has been demonstrated many attractive properties.\n\n\nRESULTS\nIn this work, we investigate a sparse logistic regression with the L1/2 penalty for gene selection in cancer classification problems, and propose a coordinate descent algorithm with a new univariate half thresholding operator to solve the L1/2 penalized logistic regression. Experimental results on artificial and microarray data demonstrate the effectiveness of our proposed approach compared with other regularization methods. Especially, for 4 publicly available gene expression datasets, the L1/2 regularization method achieved its success using only about 2 to 14 predictors (genes), compared to about 6 to 38 genes for ordinary L1 and elastic net regularization approaches.\n\n\nCONCLUSIONS\nFrom our evaluations, it is clear that the sparse logistic regression with the L1/2 penalty achieves higher classification accuracy than those of ordinary L1 and elastic net regularization approaches, while fewer but informative genes are selected. This is an important consideration for screening and diagnostic applications, where the goal is often to develop an accurate test using as few features as possible in order to control cost. Therefore, the sparse logistic regression with the L1/2 penalty is effective technique for gene selection in real classification problems."
},
{
"pmid": "15846360",
"title": "Independence and reproducibility across microarray platforms.",
"abstract": "Microarrays have been widely used for the analysis of gene expression, but the issue of reproducibility across platforms has yet to be fully resolved. To address this apparent problem, we compared gene expression between two microarray platforms: the short oligonucleotide Affymetrix Mouse Genome 430 2.0 GeneChip and a spotted cDNA array using a mouse model of angiotensin II-induced hypertension. RNA extracted from treated mice was analyzed using Affymetrix and cDNA platforms and then by quantitative RT-PCR (qRT-PCR) for validation of specific genes. For the 11,710 genes present on both arrays, we assessed the relative impact of experimental treatment and platform on measured expression and found that biological treatment had a far greater impact on measured expression than did platform for more than 90% of genes, a result validated by qRT-PCR. In the small number of cases in which platforms yielded discrepant results, qRT-PCR generally did not confirm either set of data, suggesting that sequence-specific effects may make expression predictions difficult to make using any technique."
},
{
"pmid": "20838408",
"title": "Tackling the widespread and critical impact of batch effects in high-throughput data.",
"abstract": "High-throughput technologies are widely used, for example to assay genetic variants, gene and protein expression, and epigenetic modifications. One often overlooked complication with such studies is batch effects, which occur because measurements are affected by laboratory conditions, reagent lots and personnel differences. This becomes a major problem when batch effects are correlated with an outcome of interest and lead to incorrect conclusions. Using both published studies and our own analyses, we argue that batch effects (as well as other technical and biological artefacts) are widespread and critical to address. We review experimental and computational approaches for doing so."
},
{
"pmid": "18588682",
"title": "Pathway analysis reveals functional convergence of gene expression profiles in breast cancer.",
"abstract": "BACKGROUND\nA recent study has shown high concordance of several breast-cancer gene signatures in predicting disease recurrence despite minimal overlap of the gene lists. It raises the question if there are common themes underlying such prediction concordance that are not apparent on the individual gene-level. We therefore studied the similarity of these gene-signatures on the basis of their functional annotations.\n\n\nRESULTS\nWe found the signatures did not identify the same set of genes but converged on the activation of a similar set of oncogenic and clinically-relevant pathways. A clear and consistent pattern across the four breast cancer signatures is the activation of the estrogen-signaling pathway. Other common features include BRCA1-regulated pathway, reck pathways, and insulin signaling associated with the ER-positive disease signatures, all providing possible explanations for the prediction concordance.\n\n\nCONCLUSION\nThis work explains why independent breast cancer signatures that appear to perform equally well at predicting patient prognosis show minimal overlap in gene membership."
},
{
"pmid": "22262733",
"title": "Comprehensive literature review and statistical considerations for microarray meta-analysis.",
"abstract": "With the rapid advances of various high-throughput technologies, generation of '-omics' data is commonplace in almost every biomedical field. Effective data management and analytical approaches are essential to fully decipher the biological knowledge contained in the tremendous amount of experimental data. Meta-analysis, a set of statistical tools for combining multiple studies of a related hypothesis, has become popular in genomic research. Here, we perform a systematic search from PubMed and manual collection to obtain 620 genomic meta-analysis papers, of which 333 microarray meta-analysis papers are summarized as the basis of this paper and the other 249 GWAS meta-analysis papers are discussed in the next companion paper. The review in the present paper focuses on various biological purposes of microarray meta-analysis, databases and software and related statistical procedures. Statistical considerations of such an analysis are further scrutinized and illustrated by a case study. Finally, several open questions are listed and discussed."
},
{
"pmid": "15184677",
"title": "Large-scale meta-analysis of cancer microarray data identifies common transcriptional profiles of neoplastic transformation and progression.",
"abstract": "Many studies have used DNA microarrays to identify the gene expression signatures of human cancer, yet the critical features of these often unmanageably large signatures remain elusive. To address this, we developed a statistical method, comparative metaprofiling, which identifies and assesses the intersection of multiple gene expression signatures from a diverse collection of microarray data sets. We collected and analyzed 40 published cancer microarray data sets, comprising 38 million gene expression measurements from >3,700 cancer samples. From this, we characterized a common transcriptional profile that is universally activated in most cancer types relative to the normal tissues from which they arose, likely reflecting essential transcriptional features of neoplastic transformation. In addition, we characterized a transcriptional profile that is commonly activated in various types of undifferentiated cancer, suggesting common molecular mechanisms by which cancer cells progress and avoid differentiation. Finally, we validated these transcriptional profiles on independent data sets."
},
{
"pmid": "12855442",
"title": "Combining multiple microarray studies and modeling interstudy variation.",
"abstract": "We have established a method for systematic integration of multiple microarray datasets. The method was applied to two different sets of cancer profiling studies. The change of gene expression in cancer was expressed as 'effect size', a standardized index measuring the magnitude of a treatment or covariate effect. The effect sizes were combined to obtain the estimate of the overall mean. The statistical significance was determined by a permutation test extended to multiple datasets. It was shown that the data integration promotes the discovery of small but consistent expression changes with increased sensitivity and reliability. The effect size methods provided the efficient modeling framework for addressing interstudy variation as well. Based on the result of homogeneity tests, a fixed effects model was adopted for one set of datasets that had been created in controlled experimental conditions. By contrast, a random effects model was shown to be appropriate for the other set of datasets that had been published by independent groups. We also developed an alternative modeling procedure based on a Bayesian approach, which would offer flexibility and robustness compared to the classical procedure."
},
{
"pmid": "24359104",
"title": "Meta-analysis methods for combining multiple expression profiles: comparisons, statistical characterization and an application guideline.",
"abstract": "BACKGROUND\nAs high-throughput genomic technologies become accurate and affordable, an increasing number of data sets have been accumulated in the public domain and genomic information integration and meta-analysis have become routine in biomedical research. In this paper, we focus on microarray meta-analysis, where multiple microarray studies with relevant biological hypotheses are combined in order to improve candidate marker detection. Many methods have been developed and applied in the literature, but their performance and properties have only been minimally investigated. There is currently no clear conclusion or guideline as to the proper choice of a meta-analysis method given an application; the decision essentially requires both statistical and biological considerations.\n\n\nRESULTS\nWe performed 12 microarray meta-analysis methods for combining multiple simulated expression profiles, and such methods can be categorized for different hypothesis setting purposes: (1) HS(A): DE genes with non-zero effect sizes in all studies, (2) HS(B): DE genes with non-zero effect sizes in one or more studies and (3) HS(r): DE gene with non-zero effect in \"majority\" of studies. We then performed a comprehensive comparative analysis through six large-scale real applications using four quantitative statistical evaluation criteria: detection capability, biological association, stability and robustness. We elucidated hypothesis settings behind the methods and further apply multi-dimensional scaling (MDS) and an entropy measure to characterize the meta-analysis methods and data structure, respectively.\n\n\nCONCLUSIONS\nThe aggregated results from the simulation study categorized the 12 methods into three hypothesis settings (HS(A), HS(B), and HS(r)). Evaluation in real data and results from MDS and entropy analyses provided an insightful and practical guideline to the choice of the most suitable method in a given application. All source files for simulation and real data are available on the author's publication website."
},
{
"pmid": "19118496",
"title": "Regularized gene selection in cancer microarray meta-analysis.",
"abstract": "BACKGROUND\nIn cancer studies, it is common that multiple microarray experiments are conducted to measure the same clinical outcome and expressions of the same set of genes. An important goal of such experiments is to identify a subset of genes that can potentially serve as predictive markers for cancer development and progression. Analyses of individual experiments may lead to unreliable gene selection results because of the small sample sizes. Meta analysis can be used to pool multiple experiments, increase statistical power, and achieve more reliable gene selection. The meta analysis of cancer microarray data is challenging because of the high dimensionality of gene expressions and the differences in experimental settings amongst different experiments.\n\n\nRESULTS\nWe propose a Meta Threshold Gradient Descent Regularization (MTGDR) approach for gene selection in the meta analysis of cancer microarray data. The MTGDR has many advantages over existing approaches. It allows different experiments to have different experimental settings. It can account for the joint effects of multiple genes on cancer, and it can select the same set of cancer-associated genes across multiple experiments. Simulation studies and analyses of multiple pancreatic and liver cancer experiments demonstrate the superior performance of the MTGDR.\n\n\nCONCLUSION\nThe MTGDR provides an effective way of analyzing multiple cancer microarray studies and selecting reliable cancer-associated genes."
},
{
"pmid": "25196635",
"title": "Meta-analysis based variable selection for gene expression data.",
"abstract": "Recent advance in biotechnology and its wide applications have led to the generation of many high-dimensional gene expression data sets that can be used to address similar biological questions. Meta-analysis plays an important role in summarizing and synthesizing scientific evidence from multiple studies. When the dimensions of datasets are high, it is desirable to incorporate variable selection into meta-analysis to improve model interpretation and prediction. According to our knowledge, all existing methods conduct variable selection with meta-analyzed data in an \"all-in-or-all-out\" fashion, that is, a gene is either selected in all of studies or not selected in any study. However, due to data heterogeneity commonly exist in meta-analyzed data, including choices of biospecimens, study population, and measurement sensitivity, it is possible that a gene is important in some studies while unimportant in others. In this article, we propose a novel method called meta-lasso for variable selection with high-dimensional meta-analyzed data. Through a hierarchical decomposition on regression coefficients, our method not only borrows strength across multiple data sets to boost the power to identify important genes, but also keeps the selection flexibility among data sets to take into account data heterogeneity. We show that our method possesses the gene selection consistency, that is, when sample size of each data set is large, with high probability, our method can identify all important genes and remove all unimportant genes. Simulation studies demonstrate a good performance of our method. We applied our meta-lasso method to a meta-analysis of five cardiovascular studies. The analysis results are clinically meaningful."
},
{
"pmid": "25829177",
"title": "Robust meta-analysis of gene expression using the elastic net.",
"abstract": "Meta-analysis of gene expression has enabled numerous insights into biological systems, but current methods have several limitations. We developed a method to perform a meta-analysis using the elastic net, a powerful and versatile approach for classification and regression. To demonstrate the utility of our method, we conducted a meta-analysis of lung cancer gene expression based on publicly available data. Using 629 samples from five data sets, we trained a multinomial classifier to distinguish between four lung cancer subtypes. Our meta-analysis-derived classifier included 58 genes and achieved 91% accuracy on leave-one-study-out cross-validation and on three independent data sets. Our method makes meta-analysis of gene expression more systematic and expands the range of questions that a meta-analysis can be used to address. As the amount of publicly available gene expression data continues to grow, our method will be an effective tool to help distill these data into knowledge."
},
{
"pmid": "27600230",
"title": "Microarray Meta-Analysis and Cross-Platform Normalization: Integrative Genomics for Robust Biomarker Discovery.",
"abstract": "The diagnostic and prognostic potential of the vast quantity of publicly-available microarray data has driven the development of methods for integrating the data from different microarray platforms. Cross-platform integration, when appropriately implemented, has been shown to improve reproducibility and robustness of gene signature biomarkers. Microarray platform integration can be conceptually divided into approaches that perform early stage integration (cross-platform normalization) versus late stage data integration (meta-analysis). A growing number of statistical methods and associated software for platform integration are available to the user, however an understanding of their comparative performance and potential pitfalls is critical for best implementation. In this review we provide evidence-based, practical guidance to researchers performing cross-platform integration, particularly with an objective to discover biomarkers."
},
{
"pmid": "16632515",
"title": "Adjusting batch effects in microarray expression data using empirical Bayes methods.",
"abstract": "Non-biological experimental variation or \"batch effects\" are commonly observed across multiple batches of microarray experiments, often rendering the task of combining data from these batches difficult. The ability to combine microarray data sets is advantageous to researchers to increase statistical power to detect biological phenomena from studies where logistical considerations restrict sample size or in studies that require the sequential hybridization of arrays. In general, it is inappropriate to combine data sets without adjusting for batch effects. Methods have been proposed to filter batch effects from data, but these are often complicated and require large batch sizes ( > 25) to implement. Because the majority of microarray studies are conducted using much smaller sample sizes, existing methods are not sufficient. We propose parametric and non-parametric empirical Bayes frameworks for adjusting data for batch effects that is robust to outliers in small sample sizes and performs comparable to existing methods for large samples. We illustrate our methods using two example data sets and show that our methods are justifiable, easy to apply, and useful in practice. Software for our method is freely available at: http://biosun1.harvard.edu/complab/batch/."
},
{
"pmid": "18325927",
"title": "Merging two gene-expression studies via cross-platform normalization.",
"abstract": "MOTIVATION\nGene-expression microarrays are currently being applied in a variety of biomedical applications. This article considers the problem of how to merge datasets arising from different gene-expression studies of a common organism and phenotype. Of particular interest is how to merge data from different technological platforms.\n\n\nRESULTS\nThe article makes two contributions to the problem. The first is a simple cross-study normalization method, which is based on linked gene/sample clustering of the given datasets. The second is the introduction and description of several general validation measures that can be used to assess and compare cross-study normalization methods. The proposed normalization method is applied to three existing breast cancer datasets, and is compared to several competing normalization methods using the proposed validation measures.\n\n\nAVAILABILITY\nThe supplementary materials and XPN Matlab code are publicly available at website: https://genome.unc.edu/xpn"
},
{
"pmid": "21386892",
"title": "Removing batch effects in analysis of expression microarray data: an evaluation of six batch adjustment methods.",
"abstract": "The expression microarray is a frequently used approach to study gene expression on a genome-wide scale. However, the data produced by the thousands of microarray studies published annually are confounded by \"batch effects,\" the systematic error introduced when samples are processed in multiple batches. Although batch effects can be reduced by careful experimental design, they cannot be eliminated unless the whole study is done in a single batch. A number of programs are now available to adjust microarray data for batch effects prior to analysis. We systematically evaluated six of these programs using multiple measures of precision, accuracy and overall performance. ComBat, an Empirical Bayes method, outperformed the other five programs by most metrics. We also showed that it is essential to standardize expression data at the probe level when testing for correlation of expression profiles, due to a sizeable probe effect in microarray data that can inflate the correlation among replicates and unrelated samples."
},
{
"pmid": "12925520",
"title": "Exploration, normalization, and summaries of high density oligonucleotide array probe level data.",
"abstract": "In this paper we report exploratory analyses of high-density oligonucleotide array data from the Affymetrix GeneChip system with the objective of improving upon currently used measures of gene expression. Our analyses make use of three data sets: a small experimental study consisting of five MGU74A mouse GeneChip arrays, part of the data from an extensive spike-in study conducted by Gene Logic and Wyeth's Genetics Institute involving 95 HG-U95A human GeneChip arrays; and part of a dilution study conducted by Gene Logic involving 75 HG-U95A GeneChip arrays. We display some familiar features of the perfect match and mismatch probe (PM and MM) values of these data, and examine the variance-mean relationship with probe-level data from probes believed to be defective, and so delivering noise only. We explain why we need to normalize the arrays to one another using probe level intensities. We then examine the behavior of the PM and MM using spike-in data and assess three commonly used summary measures: Affymetrix's (i) average difference (AvDiff) and (ii) MAS 5.0 signal, and (iii) the Li and Wong multiplicative model-based expression index (MBEI). The exploratory data analyses of the probe level data motivate a new summary measure that is a robust multi-array average (RMA) of background-adjusted, normalized, and log-transformed PM values. We evaluate the four expression summary measures using the dilution study data, assessing their behavior in terms of bias, variance and (for MBEI and RMA) model fit. Finally, we evaluate the algorithms in terms of their ability to detect known levels of differential expression using the spike-in data. We conclude that there is no obvious downside to using RMA and attaching a standard error (SE) to this quantity using a linear model which removes probe-specific affinities."
},
{
"pmid": "15461798",
"title": "Bioconductor: open software development for computational biology and bioinformatics.",
"abstract": "The Bioconductor project is an initiative for the collaborative creation of extensible software for computational biology and bioinformatics. The goals of the project include: fostering collaborative development and widespread use of innovative software, reducing barriers to entry into interdisciplinary scientific research, and promoting the achievement of remote reproducibility of research results. We describe details of our aims and methods, identify current challenges, compare Bioconductor to other open bioinformatics projects, and provide working examples."
},
{
"pmid": "23216969",
"title": "A computational pipeline for the development of multi-marker bio-signature panels and ensemble classifiers.",
"abstract": "BACKGROUND\nBiomarker panels derived separately from genomic and proteomic data and with a variety of computational methods have demonstrated promising classification performance in various diseases. An open question is how to create effective proteo-genomic panels. The framework of ensemble classifiers has been applied successfully in various analytical domains to combine classifiers so that the performance of the ensemble exceeds the performance of individual classifiers. Using blood-based diagnosis of acute renal allograft rejection as a case study, we address the following question in this paper: Can acute rejection classification performance be improved by combining individual genomic and proteomic classifiers in an ensemble?\n\n\nRESULTS\nThe first part of the paper presents a computational biomarker development pipeline for genomic and proteomic data. The pipeline begins with data acquisition (e.g., from bio-samples to microarray data), quality control, statistical analysis and mining of the data, and finally various forms of validation. The pipeline ensures that the various classifiers to be combined later in an ensemble are diverse and adequate for clinical use. Five mRNA genomic and five proteomic classifiers were developed independently using single time-point blood samples from 11 acute-rejection and 22 non-rejection renal transplant patients. The second part of the paper examines five ensembles ranging in size from two to 10 individual classifiers. Performance of ensembles is characterized by area under the curve (AUC), sensitivity, and specificity, as derived from the probability of acute rejection for individual classifiers in the ensemble in combination with one of two aggregation methods: (1) Average Probability or (2) Vote Threshold. One ensemble demonstrated superior performance and was able to improve sensitivity and AUC beyond the best values observed for any of the individual classifiers in the ensemble, while staying within the range of observed specificity. The Vote Threshold aggregation method achieved improved sensitivity for all 5 ensembles, but typically at the cost of decreased specificity.\n\n\nCONCLUSION\nProteo-genomic biomarker ensemble classifiers show promise in the diagnosis of acute renal allograft rejection and can improve classification performance beyond that of individual genomic or proteomic classifiers alone. Validation of our results in an international multicenter study is currently underway."
},
{
"pmid": "19447787",
"title": "Gradient lasso for Cox proportional hazards model.",
"abstract": "MOTIVATION\nThere has been an increasing interest in expressing a survival phenotype (e.g. time to cancer recurrence or death) or its distribution in terms of a subset of the expression data of a subset of genes. Due to high dimensionality of gene expression data, however, there is a serious problem of collinearity in fitting a prediction model, e.g. Cox's proportional hazards model. To avoid the collinearity problem, several methods based on penalized Cox proportional hazards models have been proposed. However, those methods suffer from severe computational problems, such as slow or even failed convergence, because of high-dimensional matrix inversions required for model fitting. We propose to implement the penalized Cox regression with a lasso penalty via the gradient lasso algorithm that yields faster convergence to the global optimum than do other algorithms. Moreover the gradient lasso algorithm is guaranteed to converge to the optimum under mild regularity conditions. Hence, our gradient lasso algorithm can be a useful tool in developing a prediction model based on high-dimensional covariates including gene expression data.\n\n\nRESULTS\nResults from simulation studies showed that the prediction model by gradient lasso recovers the prognostic genes. Also results from diffuse large B-cell lymphoma datasets and Norway/Stanford breast cancer dataset indicate that our method is very competitive compared with popular existing methods by Park and Hastie and Goeman in its computational time, prediction and selectivity.\n\n\nAVAILABILITY\nR package glcoxph is available at http://datamining.dongguk.ac.kr/R/glcoxph."
},
{
"pmid": "23550210",
"title": "Integrative analysis of complex cancer genomics and clinical profiles using the cBioPortal.",
"abstract": "The cBioPortal for Cancer Genomics (http://cbioportal.org) provides a Web resource for exploring, visualizing, and analyzing multidimensional cancer genomics data. The portal reduces molecular profiling data from cancer tissues and cell lines into readily understandable genetic, epigenetic, gene expression, and proteomic events. The query interface combined with customized data storage enables researchers to interactively explore genetic alterations across samples, genes, and pathways and, when available in the underlying data, to link these to clinical outcomes. The portal provides graphical summaries of gene-level data from multiple platforms, network visualization and analysis, survival analysis, patient-centric queries, and software programmatic access. The intuitive Web interface of the portal makes complex cancer genomics profiles accessible to researchers and clinicians without requiring bioinformatics expertise, thus facilitating biological discoveries. Here, we provide a practical guide to the analysis and visualization features of the cBioPortal for Cancer Genomics."
},
{
"pmid": "26033031",
"title": "Upregulated PFTK1 promotes tumor cell proliferation, migration, and invasion in breast cancer.",
"abstract": "PFTK1 was a cell division cycle 2-related serine/threonine protein kinase, which was up-regulated in breast cancer tissues and breast cancer lines. And up-regulated PFTK1 was highly associated with grade, axillary lymph node status, and Ki-67. Moreover, Kaplan-Meier curve showed that up-regulated PFTK1 was related to the poor breast carcinoma patients' overall survival. Here, we first discovered and confirmed that cyclin B was a new interacting protein of PFTK1, and the complex might increase the amount of DVL2, which triggers Wnt/β-catenin signaling pathway. Furthermore, knockdown of PFTK1 attenuated cell proliferation, anchorage-independent cell growth, and cell migration and invasion by inhibiting the transcriptional activation of β-catenin for cyclin D1, MMP9, and HEF1, whereas exogenous expression of PFTK1 might promote MDA-MB-231 cells proliferation, migration, and invasion via promoting PFTK1-DVL2-β-catenin axis. Our findings supported the notion that up-regulated PFTK1 might promote breast cancer progression and metastasis by activating Wnt signaling pathway through the PFTK1-DVL2-β-catenin axis."
},
{
"pmid": "24476821",
"title": "Comprehensive molecular characterization of urothelial bladder carcinoma.",
"abstract": "Urothelial carcinoma of the bladder is a common malignancy that causes approximately 150,000 deaths per year worldwide. So far, no molecularly targeted agents have been approved for treatment of the disease. As part of The Cancer Genome Atlas project, we report here an integrated analysis of 131 urothelial carcinomas to provide a comprehensive landscape of molecular alterations. There were statistically significant recurrent mutations in 32 genes, including multiple genes involved in cell-cycle regulation, chromatin regulation, and kinase signalling pathways, as well as 9 genes not previously reported as significantly mutated in any cancer. RNA sequencing revealed four expression subtypes, two of which (papillary-like and basal/squamous-like) were also evident in microRNA sequencing and protein data. Whole-genome and RNA sequencing identified recurrent in-frame activating FGFR3-TACC3 fusions and expression or integration of several viruses (including HPV16) that are associated with gene inactivation. Our analyses identified potential therapeutic targets in 69% of the tumours, including 42% with targets in the phosphatidylinositol-3-OH kinase/AKT/mTOR pathway and 45% with targets (including ERBB2) in the RTK/MAPK pathway. Chromatin regulatory genes were more frequently mutated in urothelial carcinoma than in any other common cancer studied so far, indicating the future possibility of targeted therapy for chromatin abnormalities."
},
{
"pmid": "26418898",
"title": "Induction of methionine adenosyltransferase 2A in tamoxifen-resistant breast cancer cells.",
"abstract": "We previously showed that S-adenosylmethionine-mediated hypermethylation of the PTEN promoter was important for the growth of tamoxifen-resistant MCF-7 (TAMR-MCF-7) cancer cells. Here, we found that the basal expression level of methionine adenosyltransferase 2A (MAT2A), a critical enzyme for the biosynthesis of S-adenosylmethionine, was up-regulated in TAMR-MCF-7 cells compared with control MCF-7 cells. Moreover, the basal expression level of MAT2A in T47D cells, a TAM-resistant estrogen receptor-positive cell line was higher compared to MCF-7 cells. Immunohistochemistry confirmed that MAT2A expression in TAM-resistant human breast cancer tissues was higher than that in TAM-responsive cases. The promoter region of human MAT2A contains binding sites for nuclear factor-κB, activator protein-1 (AP-1), and NF-E2-related factor 2 (Nrf2), and the activities of these three transcription factors were enhanced in TAMR-MCF-7 cells. Both the protein expression and transcriptional activity of MAT2A in TAMR-MCF-7 cells were potently suppressed by NF-κB inhibition but not by c-Jun/AP-1 or Nrf2 knock-down. Interestingly, the expression levels of microRNA (miR)-146a and -146b were diminished in TAMR-MCF-7 cells, and miR-146b transduction decreased NF-κB-mediated MAT2A expression. miR-146b restored PTEN expression via the suppression of PTEN promoter methylation in TAMR-MCF-7 cells. Additionally, miR-146b overexpression inhibited cell proliferation and reversed chemoresistance to 4-hydroxytamoxifen in TAMR-MCF-7 cells."
},
{
"pmid": "28540450",
"title": "High neuronatin (NNAT) expression is associated with poor outcome in breast cancer.",
"abstract": "Neuronatin (NNAT) is a proteolipid involved in cation homeostasis especially in the developing brain. Its expression has been associated with the progression of lung cancer, glioblastoma, and neuroblastoma as well as glucose induced apoptosis in pancreatic cells. We performed a retrospective study of 148 breast cancer specimens for NNAT expression by immunohistochemistry to evaluate this protein as a prognostic marker for breast cancer. We found a high NNAT immunoreactivity score (by multivariate cox regression) to be an independent prognostic marker for relapse-free (hazard ratio HR = 3.55, p = 0.002) and overall survival (HR = 6.29, p < 0.001). However, NNAT expression was not associated with classical parameters such as hormone receptor expression (p = 0.86) or lymph node metastasis (p = 0.83). Additional independent risk factors in this study population were tumor size (≤2 cm; overall survival: HR = 0.36, p = 0.023; relapse-free survival: HR = 0.26, p < 0.01) and blood vessel infiltration (overall survival: HR = 0.34 p < 0.01). NNAT expression determined by immunohistochemistry might therefore become a helpful additional biomarker to identify high-risk breast cancer patients."
},
{
"pmid": "17344846",
"title": "Patterns of somatic mutation in human cancer genomes.",
"abstract": "Cancers arise owing to mutations in a subset of genes that confer growth advantage. The availability of the human genome sequence led us to propose that systematic resequencing of cancer genomes for mutations would lead to the discovery of many additional cancer genes. Here we report more than 1,000 somatic mutations found in 274 megabases (Mb) of DNA corresponding to the coding exons of 518 protein kinase genes in 210 diverse human cancers. There was substantial variation in the number and pattern of mutations in individual cancers reflecting different exposures, DNA repair defects and cellular origins. Most somatic mutations are likely to be 'passengers' that do not contribute to oncogenesis. However, there was evidence for 'driver' mutations contributing to the development of the cancers studied in approximately 120 genes. Systematic sequencing of cancer genomes therefore reveals the evolutionary diversity of cancers and implicates a larger repertoire of cancer genes than previously anticipated."
},
{
"pmid": "19607727",
"title": "Identification of novel candidate target genes, including EPHB3, MASP1 and SST at 3q26.2-q29 in squamous cell carcinoma of the lung.",
"abstract": "BACKGROUND\nThe underlying genetic alterations for squamous cell carcinoma (SCC) and adenocarcinoma (AC) carcinogenesis are largely unknown.\n\n\nMETHODS\nHigh-resolution array- CGH was performed to identify the differences in the patterns of genomic imbalances between SCC and AC of non-small cell lung cancer (NSCLC).\n\n\nRESULTS\nOn a genome-wide profile, SCCs showed higher frequency of gains than ACs (p = 0.067). More specifically, statistically significant differences were observed across the histologic subtypes for gains at 2q14.2, 3q26.2-q29, 12p13.2-p13.33, and 19p13.3, as well as losses at 3p26.2-p26.3, 16p13.11, and 17p11.2 in SCC, and gains at 7q22.1 and losses at 15q22.2-q25.2 occurred in AC (P < 0.05). The most striking difference between SCC and AC was gains at the 3q26.2-q29, occurring in 86% (19/22) of SCCs, but in only 21% (3/14) of ACs. Many significant genes at the 3q26.2-q29 regions previously linked to a specific histology, such as EVI1,MDS1, PIK3CA and TP73L, were observed in SCC (P < 0.05). In addition, we identified the following possible target genes (> 30% of patients) at 3q26.2-q29: LOC389174 (3q26.2),KCNMB3 (3q26.32),EPHB3 (3q27.1), MASP1 and SST (3q27.3), LPP and FGF12 (3q28), and OPA1,KIAA022,LOC220729, LOC440996,LOC440997, and LOC440998 (3q29), all of which were significantly targeted in SCC (P < 0.05). Among these same genes, high-level amplifications were detected for the gene, EPHB3, at 3q27.1, and MASP1 and SST, at 3q27.3 (18, 18, and 14%, respectively). Quantitative real time PCR demonstrated array CGH detected potential candidate genes that were over expressed in SCCs.\n\n\nCONCLUSION\nUsing whole-genome array CGH, we have successfully identified significant differences and unique information of chromosomal signatures prevalent between the SCC and AC subtypes of NSCLC. The newly identified candidate target genes may prove to be highly attractive candidate molecular markers for the classification of NSCLC histologic subtypes, and could potentially contribute to the pathogenesis of the squamous cell carcinoma of the lung."
}
] |
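The SPL description above (Eqs. 7-8) reduces to a simple alternating loop: train the base learner on the currently selected samples, recompute per-sample losses, re-select the samples whose loss falls below the age parameter γ, and then grow γ. The sketch below illustrates only that loop under stated assumptions; it is not the authors' MVSPL implementation. The function name `spl_fit`, the growth factor `gamma_growth`, and the choice of scikit-learn's L1-penalized logistic regression as the base learner f(x, β) (with λ mapped to C = 1/λ) are assumptions made for the example.

```python
# Minimal sketch of the SPL alternating update (Eqs. 7-8); illustrative only,
# not the authors' MVSPL code. Assumes an L1-penalized logistic regression as
# the base learner f(x, beta), with the tuning parameter lambda mapped to C = 1/lambda.
import numpy as np
from sklearn.linear_model import LogisticRegression

def spl_fit(X, y, gamma=0.5, gamma_growth=1.2, lam=1.0, n_iter=10):
    n = X.shape[0]
    v = np.ones(n, dtype=bool)              # start by selecting all samples
    model = LogisticRegression(penalty="l1", solver="liblinear", C=1.0 / lam)
    for _ in range(n_iter):
        # Update beta with v fixed: train only on the currently selected samples.
        model.fit(X[v], y[v])
        # Update v with beta fixed: per-sample logistic loss compared to the age gamma.
        p = np.clip(model.predict_proba(X)[:, 1], 1e-12, 1 - 1e-12)
        losses = -(y * np.log(p) + (1 - y) * np.log(1 - p))
        v = losses < gamma                   # Eq. (8): easy samples get v_i = 1
        if not v.any():                      # guard: never select an empty set
            v = losses <= np.median(losses)
        # Increase the age parameter so that harder samples enter the next round.
        gamma *= gamma_growth
    return model, v
```

A typical call would be `model, selected = spl_fit(X_train, y_train)`, after which the nonzero coefficients of `model` indicate the retained features (genes) and `selected` marks the samples used to train the final classifier.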
Journal of Cheminformatics | 30460426 | PMC6755597 | 10.1186/s13321-018-0308-5 | Improved understanding of aqueous solubility modeling through topological data analysis | Topological data analysis (TDA) is a family of recent mathematical techniques seeking to understand the 'shape' of data, and has been used here to understand the structure of the descriptor space produced by standard chemical informatics software from the point of view of solubility. We have used the mapper algorithm, a TDA method that creates low-dimensional representations of data, to create a network visualization of the solubility space. While descriptors with clear chemical implications are prominent features in this space, reflecting their importance to the chemical properties, an unexpected and interesting correlation between chlorine content and rings, and its implications for solubility prediction, is revealed. A parallel representation of the chemical space was generated using persistent homology applied to molecular graphs. Links between this chemical space and the descriptor space were shown to be in agreement with chemical heuristics. The use of persistent homology on molecular graphs, extended by the use of norms on the associated persistence landscapes, allows the conversion of discrete shape descriptors to continuous ones, and a perspective on the application of these descriptors to quantitative structure-property relations is presented. Electronic supplementary material: The online version of this article (10.1186/s13321-018-0308-5) contains supplementary material, which is available to authorized users. | Related work. Persistence-based methods have recently been used as a tool to discover new nanoporous materials [13], where they provided an effective way to identify materials with similar pore geometries. Moreover, in a case study of materials for methane storage, it was shown that it is possible to find materials that perform as well as known top-performing materials by searching the database for materials with similar pore shapes. Conversely, the pore shapes of the top-performing materials can be sorted into topologically distinct classes, and materials from each class require a different optimisation strategy [13]. Furthermore, persistence has found use in a wide variety of materials applications, such as categorising amorphous solids [14] and analysing phase transitions [15, 16]. Persistent homology has also been used in the analysis of protein folding [17–20]; in particular, persistent homology at different coarse-grained scales has been shown to enable the calculation of topological invariants in protein classes. Persistent homology has been used to relate molecular shape to binding affinity and other molecular properties [21, 22]. Alternatively, persistent homology has been used as a descriptor in the construction of models of shape-dependent properties, such as in the case of fullerene stability [23]. In parallel, mapper-based methods have found use in chemical fields, ranging from the analysis of hyperspectral imaging data [24] to exploring protein folding pathways [25]. In these works, the mapper algorithm provides a visualisation technique for cluster analysis to detect minor compounds in a multiphase chemical system, and to detect low-density transient states in folding pathways, such as hairpins. Interestingly, standard computational chemistry analysis techniques have been introduced to understand structure in high-dimensional Euclidean data sets, such as in [26]. Here, the nudged elastic band algorithm, standard in determining minimum energy pathways, is used alongside Morse theory as an alternative to both mapper and persistent homology. | [
"11922952",
"9611785",
"15154768",
"23795551",
"18624401",
"19877720",
"24919008",
"11277722",
"19226181",
"28208329",
"26150288",
"29036440",
"25523342",
"19368437",
"23393618",
"24464287",
"25687211",
"25719940",
"27095195"
] | [
{
"pmid": "11922952",
"title": "Prediction of drug solubility from structure.",
"abstract": "The aqueous solubility of a drug is an important factor affecting its bioavailability. Numerous computational methods have been developed for the prediction of aqueous solubility from a compound's structure. A review is provided of the methodology and quality of results for the most useful procedures including the model implemented in the QikProp program. Viable methods now exist for predictions with less than 1 log unit uncertainty, which is adequate for prescreening synthetic candidates or design of combinatorial libraries. Further progress with predictive methods would require an experimental database of highly accurate solubilities for a large, diverse collection of drug-like molecules."
},
{
"pmid": "9611785",
"title": "Aqueous solubility prediction of drugs based on molecular topology and neural network modeling.",
"abstract": "A method for predicting the aqueous solubility of drug compounds was developed based on topological indices and artificial neural network (ANN) modeling. The aqueous solubility values for 211 drugs and related compounds representing acidic, neutral, and basic drugs of different structural classes were collected from the literature. The data set was divided into a training set (n = 160) and a randomly chosen test set (n = 51). Structural parameters used as inputs in a 23-5-1 artificial neural network included 14 atom-type electrotopological indices and nine other topological indices. For the test set, a predictive r2 = 0.86 and s = 0.53 (log units) were achieved."
},
{
"pmid": "15154768",
"title": "ESOL: estimating aqueous solubility directly from molecular structure.",
"abstract": "This paper describes a simple method for estimating the aqueous solubility (ESOL--Estimated SOLubility) of a compound directly from its structure. The model was derived from a set of 2874 measured solubilities using linear regression against nine molecular properties. The most significant parameter was calculated logP(octanol), followed by molecular weight, proportion of heavy atoms in aromatic systems, and number of rotatable bonds. The model performed consistently well across three validation sets, predicting solubilities within a factor of 5-8 of their measured values, and was competitive with the well-established \"General Solubility Equation\" for medicinal/agrochemical sized molecules."
},
{
"pmid": "23795551",
"title": "Deep architectures and deep learning in chemoinformatics: the prediction of aqueous solubility for drug-like molecules.",
"abstract": "Shallow machine learning methods have been applied to chemoinformatics problems with some success. As more data becomes available and more complex problems are tackled, deep machine learning methods may also become useful. Here, we present a brief overview of deep learning methods and show in particular how recursive neural network approaches can be applied to the problem of predicting molecular properties. However, molecules are typically described by undirected cyclic graphs, while recursive approaches typically use directed acyclic graphs. Thus, we develop methods to address this discrepancy, essentially by considering an ensemble of recursive neural networks associated with all possible vertex-centered acyclic orientations of the molecular graph. One advantage of this approach is that it relies only minimally on the identification of suitable molecular descriptors because suitable representations are learned automatically from the data. Several variants of this approach are applied to the problem of predicting aqueous solubility and tested on four benchmark data sets. Experimental results show that the performance of the deep learning methods matches or exceeds the performance of other state-of-the-art methods according to several evaluation metrics and expose the fundamental limitations arising from training sets that are too small or too noisy. A Web-based predictor, AquaSol, is available online through the ChemDB portal ( cdb.ics.uci.edu ) together with additional material."
},
{
"pmid": "18624401",
"title": "Solubility challenge: can you predict solubilities of 32 molecules using a database of 100 reliable measurements?",
"abstract": "Solubility is a key physicochemical property of molecules. Serious deficiencies exist in the consistency and reliability of solubility data in the literature. The accurate prediction of solubility would be very useful. However, systematic errors and lack of metadata associated with measurements greatly reduce the confidence in current models. To address this, we are accurately measuring intrinsic solubility values, and here we report results for a diverse set of 100 druglike molecules at 25 degrees C and an ionic strength of 0.15 M using the CheqSol approach. This is a highly reproducible potentiometric technique that ensures the thermodynamic equilibrium is reached rapidly. Results with a coefficient of variation higher than 4% were rejected. In addition, the Potentiometric Cycling for Polymorph Creation method, [PC] (2), was used to obtain multiple polymorph forms from aqueous solution. We now challenge researchers to predict the intrinsic solubility of 32 other druglike molecules that have been measured but are yet to be published."
},
{
"pmid": "19877720",
"title": "In silico prediction of aqueous solubility: the solubility challenge.",
"abstract": "The dissolution of a chemical into water is a process fundamental to both chemistry and biology. The persistence of a chemical within the environment and the effects of a chemical within the body are dependent primarily upon aqueous solubility. With the well-documented limitations hindering the accurate experimental determination of aqueous solubility, the utilization of predictive methods have been widely investigated and employed. The setting of a solubility challenge by this journal proved an excellent opportunity to explore several different modeling methods, utilizing a supplied dataset of high-quality aqueous solubility measurements. Four contrasting approaches (simple linear regression, artificial neural networks, category formation, and available in silico models) were utilized within our laboratory and the quality of these predictions was assessed. These were chosen to span the multitude of modeling methods now in use, while also allowing for the evaluation of existing commercial solubility models. The conclusions of this study were surprising, in that a simple linear regression approach proved to be superior over more complex modeling methods. Possible explanations for this observation are discussed and also recommendations are made for future solubility prediction."
},
{
"pmid": "24919008",
"title": "Is experimental data quality the limiting factor in predicting the aqueous solubility of druglike molecules?",
"abstract": "We report the results of testing quantitative structure-property relationships (QSPR) that were trained upon the same druglike molecules but two different sets of solubility data: (i) data extracted from several different sources from the published literature, for which the experimental uncertainty is estimated to be 0.6-0.7 log S units (referred to mol/L); (ii) data measured by a single accurate experimental method (CheqSol), for which experimental uncertainty is typically <0.05 log S units. Contrary to what might be expected, the models derived from the CheqSol experimental data are not more accurate than those derived from the \"noisy\" literature data. The results suggest that, at the present time, it is the deficiency of QSPR methods (algorithms and/or descriptor sets), and not, as is commonly quoted, the uncertainty in the experimental measurements, which is the limiting factor in accurately predicting aqueous solubility for pharmaceutical molecules."
},
{
"pmid": "11277722",
"title": "Prediction of drug solubility by the general solubility equation (GSE).",
"abstract": "The revised general solubility equation (GSE) proposed by Jain and Yalkowsky is used to estimate the aqueous solubility of a set of organic nonelectrolytes studied by Jorgensen and Duffy. The only inputs used in the GSE are the Celsius melting point (MP) and the octanol water partition coefficient (K(ow)). These are generally known, easily measured, or easily calculated. The GSE does not utilize any fitted parameters. The average absolute error for the 150 compounds is 0.43 compared to 0.56 with Jorgensen and Duffy's computational method, which utilitizes five fitted parameters. Thus, the revised GSE is simpler and provides a more accurate estimation of aqueous solubility of the same set of organic compounds. It is also more accurate than the original version of the GSE."
},
{
"pmid": "19226181",
"title": "Aqueous solubility prediction based on weighted atom type counts and solvent accessible surface areas.",
"abstract": "In this work, four reliable aqueous solubility models, ASM-ATC (aqueous solubility model based on atom type counts), ASM-ATC-LOGP (aqueous solubility model based on atom type counts and ClogP as an additional descriptor), ASM-SAS (aqueous solubility model based on solvent accessible surface areas), and ASM-SAS-LOGP (aqueous solubility model based on solvent accessible surface areas and ClogP as an additional descriptor), have been developed for a diverse data set of 3664 compounds. All four models were extensively validated by various cross-validation tests, and encouraging predictability was achieved. ASM-ATC-LOGP, the best model, achieves leave-one-out correlation coefficient square (q2) and root-mean-square error (RMSE) of 0.832 and 0.840 logarithm unit, respectively. In a 10,000 times 85/15 cross-validation test, this model achieves the mean of q2 and RMSE being 0.832 and 0.841 logarithm unit, respectively. We believe that those robust models can serve as an important rule in druglikeness analysis and an efficient filter in prioritizing compound libraries prior to high throughput screenings (HTS)."
},
{
"pmid": "28208329",
"title": "Persistent homology analysis of craze formation.",
"abstract": "We apply a persistent homology analysis to investigate the behavior of nanovoids during the crazing process of glassy polymers. We carry out a coarse-grained molecular dynamics simulation of the uniaxial deformation of an amorphous polymer and analyze the results with persistent homology. Persistent homology reveals the void coalescence during craze formation, and the results suggest that the yielding process is regarded as the percolation of nanovoids created by deformation."
},
{
"pmid": "26150288",
"title": "Persistent homology and many-body atomic structure for medium-range order in the glass.",
"abstract": "The characterization of the medium-range (MRO) order in amorphous materials and its relation to the short-range order is discussed. A new topological approach to extract a hierarchical structure of amorphous materials is presented, which is robust against small perturbations and allows us to distinguish it from periodic or random configurations. This method is called the persistence diagram (PD) and introduces scales to many-body atomic structures to facilitate size and shape characterization. We first illustrate the representation of perfect crystalline and random structures in PDs. Then, the MRO in amorphous silica is characterized using the appropriate PD. The PD approach compresses the size of the data set significantly, to much smaller geometrical summaries, and has considerable potential for application to a wide range of materials, including complex molecular liquids, granular materials, and metallic glasses."
},
{
"pmid": "29036440",
"title": "Analysis and prediction of protein folding energy changes upon mutation by element specific persistent homology.",
"abstract": "MOTIVATION\nSite directed mutagenesis is widely used to understand the structure and function of biomolecules. Computational prediction of mutation impacts on protein stability offers a fast, economical and potentially accurate alternative to laboratory mutagenesis. Most existing methods rely on geometric descriptions, this work introduces a topology based approach to provide an entirely new representation of mutation induced protein stability changes that could not be obtained from conventional techniques.\n\n\nRESULTS\nTopology based mutation predictor (T-MP) is introduced to dramatically reduce the geometric complexity and number of degrees of freedom of proteins, while element specific persistent homology is proposed to retain essential biological information. The present approach is found to outperform other existing methods in the predictions of globular protein stability changes upon mutation. A Pearson correlation coefficient of 0.82 with an RMSE of 0.92 kcal/mol is obtained on a test set of 350 mutation samples. For the prediction of membrane protein stability changes upon mutation, the proposed topological approach has a 84% higher Pearson correlation coefficient than the current state-of-the-art empirical methods, achieving a Pearson correlation of 0.57 and an RMSE of 1.09 kcal/mol in a 5-fold cross validation on a set of 223 membrane protein mutation samples.\n\n\nAVAILABILITY AND IMPLEMENTATION\nhttp://weilab.math.msu.edu/TML/TML-MP/.\n\n\nCONTACT\[email protected].\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "25523342",
"title": "Persistent homology for the quantitative prediction of fullerene stability.",
"abstract": "Persistent homology is a relatively new tool often used for qualitative analysis of intrinsic topological features in images and data originated from scientific and engineering applications. In this article, we report novel quantitative predictions of the energy and stability of fullerene molecules, the very first attempt in using persistent homology in this context. The ground-state structures of a series of small fullerene molecules are first investigated with the standard Vietoris-Rips complex. We decipher all the barcodes, including both short-lived local bars and long-lived global bars arising from topological invariants, and associate them with fullerene structural details. Using accumulated bar lengths, we build quantitative models to correlate local and global Betti-2 bars, respectively with the heat of formation and total curvature energies of fullerenes. It is found that the heat of formation energy is related to the local hexagonal cavities of small fullerenes, while the total curvature energies of fullerene isomers are associated with their sphericities, which are measured by the lengths of their long-lived Betti-2 bars. Excellent correlation coefficients (>0.94) between persistent homology predictions and those of quantum or curvature analysis have been observed. A correlation matrix based filtration is introduced to further verify our findings."
},
{
"pmid": "19368437",
"title": "Topological methods for exploring low-density states in biomolecular folding pathways.",
"abstract": "Characterization of transient intermediate or transition states is crucial for the description of biomolecular folding pathways, which is, however, difficult in both experiments and computer simulations. Such transient states are typically of low population in simulation samples. Even for simple systems such as RNA hairpins, recently there are mounting debates over the existence of multiple intermediate states. In this paper, we develop a computational approach to explore the relatively low populated transition or intermediate states in biomolecular folding pathways, based on a topological data analysis tool, MAPPER, with simulation data from large-scale distributed computing. The method is inspired by the classical Morse theory in mathematics which characterizes the topology of high-dimensional shapes via some functional level sets. In this paper we exploit a conditional density filter which enables us to focus on the structures on pathways, followed by clustering analysis on its level sets, which helps separate low populated intermediates from high populated folded/unfolded structures. A successful application of this method is given on a motivating example, a RNA hairpin with GCAA tetraloop, where we are able to provide structural evidence from computer simulations on the multiple intermediate states and exhibit different pictures about unfolding and refolding pathways. The method is effective in dealing with high degree of heterogeneity in distribution, capturing structural features in multiple pathways, and being less sensitive to the distance metric than nonlinear dimensionality reduction or geometric embedding methods. The methodology described in this paper admits various implementations or extensions to incorporate more information and adapt to different settings, which thus provides a systematic tool to explore the low-density intermediate states in complex biomolecular folding systems."
},
{
"pmid": "23393618",
"title": "Extracting insights from the shape of complex data using topology.",
"abstract": "This paper applies topological methods to study complex high dimensional data sets by extracting shapes (patterns) and obtaining insights about them. Our method combines the best features of existing standard methodologies such as principal component and cluster analyses to provide a geometric representation of complex data sets. Through this hybrid method, we often find subgroups in data sets that traditional methodologies fail to find. Our method also permits the analysis of individual data sets as well as the analysis of relationships between related data sets. We illustrate the use of our method by applying it to three very different kinds of data, namely gene expression from breast tumors, voting data from the United States House of Representatives and player performance data from the NBA, in each case finding stratifications of the data which are more refined than those produced by standard methods."
},
{
"pmid": "24464287",
"title": "Similarity network fusion for aggregating data types on a genomic scale.",
"abstract": "Recent technologies have made it cost-effective to collect diverse types of genome-wide data. Computational methods are needed to combine these data to create a comprehensive view of a given disease or a biological process. Similarity network fusion (SNF) solves this problem by constructing networks of samples (e.g., patients) for each available data type and then efficiently fusing these into one network that represents the full spectrum of underlying data. For example, to create a comprehensive view of a disease given a cohort of patients, SNF computes and fuses patient similarity networks obtained from each of their data types separately, taking advantage of the complementarity in the data. We used SNF to combine mRNA expression, DNA methylation and microRNA (miRNA) expression data for five cancer data sets. SNF substantially outperforms single data type analysis and established integrative approaches when identifying cancer subtypes and is effective for predicting survival."
},
{
"pmid": "25687211",
"title": "The chemical space project.",
"abstract": "One of the simplest questions that can be asked about molecular diversity is how many organic molecules are possible in total? To answer this question, my research group has computationally enumerated all possible organic molecules up to a certain size to gain an unbiased insight into the entire chemical space. Our latest database, GDB-17, contains 166.4 billion molecules of up to 17 atoms of C, N, O, S, and halogens, by far the largest small molecule database reported to date. Molecules allowed by valency rules but unstable or nonsynthesizable due to strained topologies or reactive functional groups were not considered, which reduced the enumeration by at least 10 orders of magnitude and was essential to arrive at a manageable database size. Despite these restrictions, GDB-17 is highly relevant with respect to known molecules. Beyond enumeration, understanding and exploiting GDBs (generated databases) led us to develop methods for virtual screening and visualization of very large databases in the form of a \"periodic system of molecules\" comprising six different fingerprint spaces, with web-browsers for nearest neighbor searches, and the MQN- and SMIfp-Mapplet application for exploring color-coded principal component maps of GDB and other large databases. Proof-of-concept applications of GDB for drug discovery were realized by combining virtual screening with chemical synthesis and activity testing for neurotransmitter receptor and transporter ligands. One surprising lesson from using GDB for drug analog searches is the incredible depth of chemical space, that is, the fact that millions of very close analogs of any molecule can be readily identified by nearest-neighbor searches in the MQN-space of the various GDBs. The chemical space project has opened an unprecedented door on chemical diversity. Ongoing and yet unmet challenges concern enumerating molecules beyond 17 atoms and synthesizing GDB molecules with innovative scaffolds and pharmacophores."
},
{
"pmid": "25719940",
"title": "Chemography of natural product space.",
"abstract": "We present the application of the generative topographic map algorithm to visualize the chemical space populated by natural products and synthetic drugs. Generative topographic maps may be used for nonlinear dimensionality reduction and probabilistic modeling. For compound mapping, we represented the molecules by two-dimensional pharmacophore features (chemically advanced template search descriptor). The results obtained suggest a close resemblance of synthetic drugs with natural products in terms of their pharmacophore features, despite pronounced differences in chemical structure. Generative topographic map-based cluster analysis revealed both known and new potential activities of natural products and drug-like compounds. We conclude that the generative topographic map method is suitable for inferring functional similarities between these two classes of compounds and predicting macromolecular targets of natural products."
},
{
"pmid": "27095195",
"title": "Pharmit: interactive exploration of chemical space.",
"abstract": "Pharmit (http://pharmit.csb.pitt.edu) provides an online, interactive environment for the virtual screening of large compound databases using pharmacophores, molecular shape and energy minimization. Users can import, create and edit virtual screening queries in an interactive browser-based interface. Queries are specified in terms of a pharmacophore, a spatial arrangement of the essential features of an interaction, and molecular shape. Search results can be further ranked and filtered using energy minimization. In addition to a number of pre-built databases of popular compound libraries, users may submit their own compound libraries for screening. Pharmit uses state-of-the-art sub-linear algorithms to provide interactive screening of millions of compounds. Queries typically take a few seconds to a few minutes depending on their complexity. This allows users to iteratively refine their search during a single session. The easy access to large chemical datasets provided by Pharmit simplifies and accelerates structure-based drug design. Pharmit is available under a dual BSD/GPL open-source license."
}
] |
Journal of Cheminformatics | 30560325 | PMC6755615 | 10.1186/s13321-018-0314-7 | Statistical principle-based approach for gene and protein related object recognition | The large number of chemical and pharmaceutical patents has attracted researchers doing biomedical text mining to extract valuable information such as chemicals, genes and gene products. To facilitate gene and gene product annotations in patents, BioCreative V.5 organized a gene- and protein-related object (GPRO) recognition task, in which participants were assigned to identify GPRO mentions and determine whether they could be linked to their unique biological database records. In this paper, we describe the system constructed for this task. Our system is based on two different NER approaches: the statistical-principle-based approach (SPBA) and conditional random fields (CRF). Therefore, we call our system SPBA-CRF. SPBA is an interpretable machine-learning framework for gene mention recognition. The predictions of SPBA are used as features for our CRF-based GPRO recognizer. The recognizer was developed for identifying chemical mentions in patents, and we adapted it for GPRO recognition. In the BioCreative V.5 GPRO recognition task, SPBA-CRF obtained an F-score of 73.73% on the evaluation metric of GPRO type 1 and an F-score of 78.66% on the evaluation metric of combining GPRO types 1 and 2. Our results show that SPBA trained on an external NER dataset can perform reasonably well on the partial match evaluation metric. Furthermore, SPBA can significantly improve performance of the CRF-based recognizer trained on the GPRO dataset. | Related workIn this section, we briefly review state-of-the-art GPRO recognition systems and SPBA-related work.Gene and protein related objectThe GPRO recognition task was first included in BioCreative V [4], where the top-performing system was developed by [5]. They combined the results of five recognizers by majority voting method. All recognizers were CRF-based but used different combinations of GPRO mention types and features, which were adapted from GNormPlus features [6]. In addition, [5] employed some heuristic post-processing steps like enforcing tag consistency and full-abbreviation. Also, a maximum-entropy (ME)-based filter was developed to remove false positive predictions. They achieved an F-score of 81.37% in the BioCreative V GPRO task.In the BioCreative V.5 GPRO task, [7] used a BiLSTM (Bidirectional Long Short-Term Memory) model to identify gene and protein related objects. The BiLSTM architecture was the same as that used by [8]. The word embedding consisted of character-level and token-level representations, and bidirectional LSTM was used to generate character-level embedding from the characters of a word. The input embedding of characters was randomly initialized. Character-level representation could capture the morphology of words like prefixes and suffixes. Then a word embedding layer was used as the input for the next bidirectional LSTM layer. Using bidirectional LSTM layers could capture the context information of the current token. Following the bidirectional LSTM layer was a CRF layer which was able to learn the label transition states of GPRO labels. Their system achieved F-scores of 76.34% and 75.91% on the GPRO Type 1 and GPRO Type 1 + 2 evaluation metrics, respectively. Luo et al.’s [9] approach was basically the same as Liu et al. [7]; however, [9] achieved a higher F-score of 79.19% on the GPRO Type 1 evaluation metric compared to Liu et al. [7] 76.34%. 
Luo et al.’s [9] system also achieved an F-score of 72.28% on the GPRO Type 1 + 2 evaluation metric. The lower performance on the GPRO Type 1 + 2 metric mainly resulted from the failure of their system to identify many Type 2 GPRO mentions (false negatives). Statistical principle-based approach: SPBA is a straightforward, easy-to-interpret framework for resolving natural language processing (NLP) problems such as question answering or topic classification. SPBA consists of three main parts: semantic map/ontology, principle generation, and principle matching. SPBA was first used to solve tasks in general domains such as sentiment classification of Chinese news [10] and answering restaurant-related questions [11]. SPBA has been adapted for biomedical tasks, including miRNA recognition [12], miRNA-target interaction extraction [13], and gene-metastasis relation extraction [14].
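A minimal, illustrative sketch of the token-level core of such a BiLSTM tagger is given below (Python/PyTorch). It is not the code of any of the systems cited above: the character-level embeddings and the CRF layer they describe are omitted here, and the vocabulary size, tag set, layer sizes and dummy input are invented purely for demonstration.

```python
# Minimal, illustrative BiLSTM tagger skeleton (token-level only).
# A real GPRO recognizer as described above would additionally use
# character-level embeddings and stack a CRF layer on the emissions.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_tags):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.to_tags = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):
        x = self.embed(token_ids)        # (batch, seq, embed_dim)
        h, _ = self.encoder(x)           # (batch, seq, 2*hidden_dim)
        return self.to_tags(h)           # per-token tag scores (emissions)

# Hypothetical sizes and a dummy batch, purely for demonstration.
model = BiLSTMTagger(vocab_size=5000, embed_dim=100, hidden_dim=128,
                     num_tags=5)         # e.g. BIO tags for two GPRO types
tokens = torch.randint(0, 5000, (2, 12)) # 2 sentences of 12 token ids
emissions = model(tokens)
print(emissions.shape)                   # torch.Size([2, 12, 5])
```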
"25810766",
"28025348",
"26590260",
"28025339",
"23703206",
"27242036",
"25326323",
"24203711"
] | [
{
"pmid": "25810766",
"title": "CHEMDNER: The drugs and chemical names extraction challenge.",
"abstract": "Natural language processing (NLP) and text mining technologies for the chemical domain (ChemNLP or chemical text mining) are key to improve the access and integration of information from unstructured data such as patents or the scientific literature. Therefore, the BioCreative organizers posed the CHEMDNER (chemical compound and drug name recognition) community challenge, which promoted the development of novel, competitive and accessible chemical text mining systems. This task allowed a comparative assessment of the performance of various methodologies using a carefully prepared collection of manually labeled text prepared by specially trained chemists as Gold Standard data. We evaluated two important aspects: one covered the indexing of documents with chemicals (chemical document indexing - CDI task), and the other was concerned with finding the exact mentions of chemicals in text (chemical entity mention recognition - CEM task). 27 teams (23 academic and 4 commercial, a total of 87 researchers) returned results for the CHEMDNER tasks: 26 teams for CEM and 23 for the CDI task. Top scoring teams obtained an F-score of 87.39% for the CEM task and 88.20% for the CDI task, a very promising result when compared to the agreement between human annotators (91%). The strategies used to detect chemicals included machine learning methods (e.g. conditional random fields) using a variety of features, chemistry and drug lexica, and domain-specific rules. We expect that the tools and resources resulting from this effort will have an impact in future developments of chemical text mining applications and will form the basis to find related chemical information for the detected entities, such as toxicological or pharmacogenomic properties."
},
{
"pmid": "28025348",
"title": "Pressing needs of biomedical text mining in biocuration and beyond: opportunities and challenges.",
"abstract": "Text mining in the biomedical sciences is rapidly transitioning from small-scale evaluation to large-scale application. In this article, we argue that text-mining technologies have become essential tools in real-world biomedical research. We describe four large scale applications of text mining, as showcased during a recent panel discussion at the BioCreative V Challenge Workshop. We draw on these applications as case studies to characterize common requirements for successfully applying text-mining techniques to practical biocuration needs. We note that system 'accuracy' remains a challenge and identify several additional common difficulties and potential research directions including (i) the 'scalability' issue due to the increasing need of mining information from millions of full-text articles, (ii) the 'interoperability' issue of integrating various text-mining systems into existing curation workflows and (iii) the 'reusability' issue on the difficulty of applying trained systems to text genres that are not seen previously during development. We then describe related efforts within the text-mining community, with a special focus on the BioCreative series of challenge workshops. We believe that focusing on the near-term challenges identified in this work will amplify the opportunities afforded by the continued adoption of text-mining tools. Finally, in order to sustain the curation ecosystem and have text-mining systems adopted for practical benefits, we call for increased collaboration between text-mining researchers and various stakeholders, including researchers, publishers and biocurators."
},
{
"pmid": "26590260",
"title": "miRTarBase 2016: updates to the experimentally validated miRNA-target interactions database.",
"abstract": "MicroRNAs (miRNAs) are small non-coding RNAs of approximately 22 nucleotides, which negatively regulate the gene expression at the post-transcriptional level. This study describes an update of the miRTarBase (http://miRTarBase.mbc.nctu.edu.tw/) that provides information about experimentally validated miRNA-target interactions (MTIs). The latest update of the miRTarBase expanded it to identify systematically Argonaute-miRNA-RNA interactions from 138 crosslinking and immunoprecipitation sequencing (CLIP-seq) data sets that were generated by 21 independent studies. The database contains 4966 articles, 7439 strongly validated MTIs (using reporter assays or western blots) and 348 007 MTIs from CLIP-seq. The number of MTIs in the miRTarBase has increased around 7-fold since the 2014 miRTarBase update. The miRNA and gene expression profiles from The Cancer Genome Atlas (TCGA) are integrated to provide an effective overview of this exponential growth in the miRNA experimental data. These improvements make the miRTarBase one of the more comprehensively annotated, experimentally validated miRNA-target interactions databases and motivate additional miRNA research efforts."
},
{
"pmid": "28025339",
"title": "iMITEdb: the genome-wide landscape of miniature inverted-repeat transposable elements in insects.",
"abstract": "Miniature inverted-repeat transposable elements (MITEs) have attracted much attention due to their widespread occurrence and high copy numbers in eukaryotic genomes. However, the systematic knowledge about MITEs in insects and other animals is still lacking. In this study, we identified 6012 MITE families from 98 insect species genomes. Comparison of these MITEs with known MITEs in the NCBI non-redundant database and Repbase showed that 5701(∼95%) of 6012 MITE families are novel. The abundance of MITEs varies drastically among different insect species, and significantly correlates with genome size. In general, larger genomes contain more MITEs than small genomes. Furthermore, all identified MITEs were included in a newly constructed database (iMITEdb) (http://gene.cqu.edu.cn/iMITEdb/), which has functions such as browse, search, BLAST and download. Overall, our results not only provide insight on insect MITEs but will also improve assembly and annotation of insect genomes. More importantly, the results presented in this study will promote studies of MITEs function, evolution and application in insects. DATABASE URL: http://gene.cqu.edu.cn/iMITEdb/."
},
{
"pmid": "23703206",
"title": "PubTator: a web-based text mining tool for assisting biocuration.",
"abstract": "Manually curating knowledge from biomedical literature into structured databases is highly expensive and time-consuming, making it difficult to keep pace with the rapid growth of the literature. There is therefore a pressing need to assist biocuration with automated text mining tools. Here, we describe PubTator, a web-based system for assisting biocuration. PubTator is different from the few existing tools by featuring a PubMed-like interface, which many biocurators find familiar, and being equipped with multiple challenge-winning text mining algorithms to ensure the quality of its automatic results. Through a formal evaluation with two external user groups, PubTator was shown to be capable of improving both the efficiency and accuracy of manual curation. PubTator is publicly available at http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/PubTator/."
},
{
"pmid": "27242036",
"title": "CoopTFD: a repository for predicted yeast cooperative transcription factor pairs.",
"abstract": "In eukaryotic cells, transcriptional regulation of gene expression is usually accomplished by cooperative Transcription Factors (TFs). Therefore, knowing cooperative TFs is helpful for uncovering the mechanisms of transcriptional regulation. In yeast, many cooperative TF pairs have been predicted by various algorithms in the literature. However, until now, there is still no database which collects the predicted yeast cooperative TFs from existing algorithms. This prompts us to construct Cooperative Transcription Factors Database (CoopTFD), which has a comprehensive collection of 2622 predicted cooperative TF pairs (PCTFPs) in yeast from 17 existing algorithms. For each PCTFP, our database also provides five types of validation information: (i) the algorithms which predict this PCTFP, (ii) the publications which experimentally show that this PCTFP has physical or genetic interactions, (iii) the publications which experimentally study the biological roles of both TFs of this PCTFP, (iv) the common Gene Ontology (GO) terms of this PCTFP and (v) the common target genes of this PCTFP. Based on the provided validation information, users can judge the biological plausibility of a PCTFP of interest. We believe that CoopTFD will be a valuable resource for yeast biologists to study the combinatorial regulation of gene expression controlled by cooperative TFs.Database URL: http://cosbi.ee.ncku.edu.tw/CoopTFD/ or http://cosbi2.ee.ncku.edu.tw/CoopTFD/."
},
{
"pmid": "25326323",
"title": "The Comparative Toxicogenomics Database's 10th year anniversary: update 2015.",
"abstract": "Ten years ago, the Comparative Toxicogenomics Database (CTD; http://ctdbase.org/) was developed out of a need to formalize, harmonize and centralize the information on numerous genes and proteins responding to environmental toxic agents across diverse species. CTD's initial approach was to facilitate comparisons of nucleotide and protein sequences of toxicologically significant genes by curating these sequences and electronically annotating them with chemical terms from their associated references. Since then, however, CTD has vastly expanded its scope to robustly represent a triad of chemical-gene, chemical-disease and gene-disease interactions that are manually curated from the scientific literature by professional biocurators using controlled vocabularies, ontologies and structured notation. Today, CTD includes 24 million toxicogenomic connections relating chemicals/drugs, genes/proteins, diseases, taxa, phenotypes, Gene Ontology annotations, pathways and interaction modules. In this 10th year anniversary update, we outline the evolution of CTD, including our increased data content, new 'Pathway View' visualization tool, enhanced curation practices, pilot chemical-phenotype results and impending exposure data set. The prototype database originally described in our first report has transformed into a sophisticated resource used actively today to help scientists develop and test hypotheses about the etiologies of environmentally influenced diseases."
},
{
"pmid": "24203711",
"title": "DrugBank 4.0: shedding new light on drug metabolism.",
"abstract": "DrugBank (http://www.drugbank.ca) is a comprehensive online database containing extensive biochemical and pharmacological information about drugs, their mechanisms and their targets. Since it was first described in 2006, DrugBank has rapidly evolved, both in response to user requests and in response to changing trends in drug research and development. Previous versions of DrugBank have been widely used to facilitate drug and in silico drug target discovery. The latest update, DrugBank 4.0, has been further expanded to contain data on drug metabolism, absorption, distribution, metabolism, excretion and toxicity (ADMET) and other kinds of quantitative structure activity relationships (QSAR) information. These enhancements are intended to facilitate research in xenobiotic metabolism (both prediction and characterization), pharmacokinetics, pharmacodynamics and drug design/discovery. For this release, >1200 drug metabolites (including their structures, names, activity, abundance and other detailed data) have been added along with >1300 drug metabolism reactions (including metabolizing enzymes and reaction types) and dozens of drug metabolism pathways. Another 30 predicted or measured ADMET parameters have been added to each DrugCard, bringing the average number of quantitative ADMET values for Food and Drug Administration-approved drugs close to 40. Referential nuclear magnetic resonance and MS spectra have been added for almost 400 drugs as well as spectral and mass matching tools to facilitate compound identification. This expanded collection of drug information is complemented by a number of new or improved search tools, including one that provides a simple analyses of drug-target, -enzyme and -transporter associations to provide insight on drug-drug interactions."
}
] |
Frontiers in Physiology | 31607939 | PMC6757193 | 10.3389/fphys.2019.01156 | A Dynamic Jaw Model With a Finite-Element Temporomandibular Joint | The masticatory region is an important human motion system that is essential for basic human tasks like mastication, speech or swallowing. An association between temporomandibular disorders (TMDs) and high temporomandibular joint (TMJ) stress has been suggested, but in vivo joint force measurements are not feasible to directly test this assumption. Consequently, biomechanical computer simulation remains as one of a few means to investigate this complex system. To thoroughly examine orofacial biomechanics, we developed a novel, dynamic computer model of the masticatory system. The model combines a muscle driven rigid body model of the jaw region with a detailed finite element model (FEM) disk and elastic foundation (EF) articular cartilage. The model is validated using high-resolution MRI data for protrusion and opening that were collected from the same volunteer. Joint stresses for a clenching task as well as protrusive and opening movements are computed. Simulations resulted in mandibular positions as well as disk positions and shapes that agree well with the MRI data. The model computes reasonable disk stress patterns for dynamic tasks. Moreover, to the best of our knowledge this model presents the first ever contact model using a combination of EF layers and a FEM body, which results in a clear decrease in computation time. In conclusion, the presented model is a valuable tool for the investigation of the human TMJ and can potentially help in the future to increase the understanding of the masticatory system and the relationship between TMD and joint stress and to highlight potential therapeutic approaches for the restoration of orofacial function. | Related WorkRigid Body Modeling of the TMJEarly computational investigations of the masticatory region were mostly performed using two-dimensional rigid body models. These investigations focused on static investigations of joint reaction forces utilizing muscle force estimations derived from maximum bite force estimations (Greaves, 1978; Throckmorton and Throckmorton, 1985). While these investigations are a valuable tool for the examination of bite performance and the mechanical efficiency of masticatory muscles, they cannot be used for dynamic investigations and are an oversimplification of the three-dimensional masticatory system. An early example of a three-dimensional dynamic model was published by Koolstra and van Eijden (1997). The TMJ was modeled as purely elastic, frictionless contact between a sphere (simplified condyle) and a spline surface representing the articular fossa. Their model used contact points on the lower teeth and a flat surface mimicking the occlusal surface of the upper dentition. Moreover, the presented model includes a muscle model that connects muscle force to muscle activation, muscle length and a force-velocity curve. Tooth contact was modeled as contact between points representing the tip of the lower teeth and a plane that represented the occlusal plane. In a more recent iteration of the model (Tuijt et al., 2010) the TMJ surfaces were modeled as 3D shell type meshes. An ellipsoid was used for the condyle and the fossa was modeled using a third-degree polynomial in the sagittal plane, combined with a second-degree polynomial for the mediolateral curve. In this model a tangent plane approximation of the fossa mesh around a contact point was used. 
Penetrating vertices were defined, and the point-to-plane distance was calculated to derive the joint reaction force for each penetrating vertex. The upper teeth were modeled as a single bite plane and the lower dentition was modeled using points for an incisor and the two second molars. Peck et al. (2000) also used an ellipsoidal shape to approximate the condyle and a combination of multiple linear plates to model the condylar path along the fossa. Again, contact was monitored by interpenetration of the two geometrical shapes. In the case of constant contact, an instantaneous constraint was added to simulate sliding along the surfaces. Dentition was simulated as a flat occlusal plane and muscles were modeled as Hill-type actuators (Hill, 1953). The Stavness et al. (2006) model can be seen as the most recent version of this "model family." The model uses a bilateral or unilateral point constraint, sitting in the anatomical center of the condyle, and a combination of three rigid, frictionless surfaces. These surfaces define the movement of the mandible in the anterior–posterior and medial-lateral directions. de Zee et al. (2007) used a comparable approach, modeling the TMJ with a single unilateral, planar constraint that was angled downwards and canted medially. In a more recent version, the group used an elastic contact foundation model to solve contact between the condyle and the fossa articularis, using a Force Dependent Kinematics approach to track movement data and compute muscle, ligament and contact forces (Andersen et al., 2017). Finite Element Modeling of the TMJ: Nagahara et al. (1999) investigated the stress distribution and displacement during static clenching. This early investigation used a CT scan of a cadaver for the bony structures, and the TMJ disk was digitized after extraction. Muscle forces were modeled as external forces in the direction of the main closing muscles. Pérez del Palomar and Doblaré created FEM simulations of mouth opening as well as lateral movements (Pérez del Palomar and Doblaré, 2006a,b). The models were built from medical scans of a patient and used a porohyperelastic material model for the TMJ disk. No muscle representations were included, and the movements were simulated by prescribed translation of the mandible. While the model used for the opening simulations contains only one half of the masticatory system, the model for lateral movements contains both joints. Mori et al. (2010) built a model from 1.5T MRI images of a volunteer. The cartilaginous and ligamentous structures were modeled using a Kelvin material model, and retrodiscal tissue and the TMJ capsule were included. Moreover, the articular cartilage was included as a uniform layer. Mandible movement was constrained to only allow movement in the sagittal plane, and clenching was simulated using an external load. Hattori-Hara et al. (2014) created a model of the TMJ from a CT scan and a 1.5T MRI of a patient for the investigation of unilateral disk displacement during static clenching. The model includes the bony structures and the TMJ disk. Articular cartilage and capsule were modeled as uniform layers. Muscle forces were modeled as external forces distributed over the insertions of the muscles. Commisso et al. (2014) created a model of the mandible and TMJ from a human cadaver. The model includes cube-like teeth that are connected to the mandible by a layer of elements mimicking the periodontal ligament. A quasi-linear viscoelastic material was used for the TMJ disk.
Forces were applied as external loads at the insertion area of each respective muscle. They used the model to simulate sustained clenching as well as rhythmic masticatory muscle activity. In a more recent study, they used the same model to investigate the lateral pterygoid muscle during a unilateral mastication cycle (Commisso et al., 2015). For this purpose, they used a two-step setup: they estimated muscle forces using a Hill-type muscle model, neglecting force-length as well as force-velocity dependencies, and then applied these calculated muscle forces to the muscle insertion areas as external forces. Martinez Choy et al. (2017) proposed a full FEM modeling approach containing detailed meshes of all involved structures as well as Hill-type muscles. The model was built using data from various literature sources and was used for the investigation of a chewing cycle. Recently, co-simulation techniques have been proposed in which musculoskeletal models are used to define boundary conditions for static FEM simulations. Examples of modeled joint systems include tibial loading during load carrying (Xu et al., 2019) and patellofemoral cartilage stresses during a stair-climb task (Pal et al., 2019). Currently, no such approach has been reported for jaw models, even though the recent, more sophisticated rigid body models are theoretically capable of driving such a modeling strategy (Andersen et al., 2017). Nevertheless, this approach does not fully solve the presented problems. While such a technique potentially decreases the simulation time needed, the use of two different modeling toolkits increases the complexity of the model setup and therefore decreases the likelihood of clinical use. Moreover, forces computed with a simple joint set-up might not necessarily produce the correct motion and reaction forces when applied to a more complex FEM joint. To the best of our knowledge, the only previous dynamic rigid body model that incorporated a FEM TMJ was published by Koolstra and van Eijden (2005). The mandible was modeled as a dynamic rigid body with 12 Hill-type actuators attached, and the TMJ disks were included as FE models with tetrahedral elements and an edge length of approximately 0.5 mm. The articular cartilage was represented as a uniform layer using an FE approach. The model was built from cadaver data of the right TMJ and mirrored for the left side. A maximum jaw opening of 3 cm was achieved. Table 1 presents an overview of the literature review.
TABLE 1. Overview of features of computer models of the jaw region.
References | Point constraint | Geometric contact | FEM disk | Deformable articular cartilage | FEM capsule/ligaments | Teeth can be in and leave contact | Dynamic | Muscle driven | Individually modeled joints | Reported simulation time
Koolstra and van Eijden, 1997 | × | ✓ | × | × | × | ✓ | ✓ | ✓ | × | ×
Tuijt et al., 2010 | × | ✓ | × | × | × | ✓ | ✓ | ✓ | × | ×
Peck et al., 2000 | × | ✓ | × | × | × | ✓ | ✓ | ✓ | × | ×
Hannam et al., 2008 | ✓ | × | × | × | × | ✓ | ✓ | ✓ | × | ×
de Zee et al., 2007 | ✓ | × | × | × | × | ✓ | ✓ | ✓ | × | ×
Andersen et al., 2017 | × | ✓ | × | × | × | ✓ | ✓ | ✓ | ✓ | ×
Nagahara et al., 1999 | × | × | ✓ | × | × | × | × | × | × | ×
Pérez del Palomar and Doblaré, 2006a | × | × | ✓ | × | ✓ | × | × | × | × | ×
Pérez del Palomar and Doblaré, 2006b | × | × | ✓ | × | ✓ | × | × | × | ✓ | ×
Mori et al., 2010 | × | × | ✓ | ✓ | ✓ | × | × | × | ✓ | ×
Hattori-Hara et al., 2014 | × | × | ✓ | ✓ | ✓ | × | × | × | ✓ | ×
Commisso et al., 2014 + 15 | × | × | ✓ | × | ✓ | ✓ | × | RB model for muscle forces | ✓ | ×
Martinez Choy et al., 2017 | × | × | ✓ | ✓ | ✓ | ✓ | ✓ | ✓ | × | ×
Koolstra and van Eijden, 2005 | × | × | ✓ | ✓ | × | ✓ | ✓ | ✓ | × | ×
Our model | × | × | ✓ | ✓ | × | ✓ | ✓ | ✓ | ✓ | ✓
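Two force models recur throughout the overview above: penetration-based (penalty/elastic-foundation type) joint contact and Hill-type muscle actuators. The following is a minimal, illustrative NumPy sketch of both ideas. It is not the formulation used by any of the cited models or by the present study; the contact plane, stiffness, force-length/force-velocity curves and all numerical values are invented for demonstration.

```python
# Illustrative sketches of two force models mentioned above (not the
# actual formulations of the cited studies; all parameters are made up).
import numpy as np

def penalty_contact_force(vertices, plane_point, plane_normal, stiffness):
    """Sum a penetration-proportional force over all vertices that lie
    below a rigid contact plane (the point-to-plane penalty idea)."""
    n = plane_normal / np.linalg.norm(plane_normal)
    depth = (plane_point - vertices) @ n      # > 0 means penetration
    depth = np.clip(depth, 0.0, None)         # only penetrating vertices count
    return stiffness * depth.sum() * n        # resultant force along the normal

def hill_muscle_force(activation, f_max, norm_length, norm_velocity):
    """Toy Hill-type law: active force = a * F_max * f_l(l) * f_v(v),
    with a crude Gaussian force-length and linear force-velocity curve."""
    f_l = np.exp(-((norm_length - 1.0) ** 2) / 0.25)
    f_v = np.clip(1.0 - norm_velocity, 0.0, 1.5)
    return activation * f_max * f_l * f_v

# Hypothetical numbers, purely for demonstration.
condyle_vertices = np.array([[0.0, 0.0, -0.2], [1.0, 0.5, 0.3]])  # mm
force = penalty_contact_force(condyle_vertices,
                              plane_point=np.array([0.0, 0.0, 0.0]),
                              plane_normal=np.array([0.0, 0.0, 1.0]),
                              stiffness=500.0)                     # N/mm
print("contact force [N]:", force)
print("muscle force [N]:",
      hill_muscle_force(activation=0.4, f_max=190.0,
                        norm_length=1.05, norm_velocity=0.1))
```

In the full models discussed above, the rigid plane would be replaced by the fossa/eminence geometry, an elastic foundation layer or an FEM disk, and the toy curves by calibrated Hill-type muscle parameters.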
"29803823",
"25565306",
"28639682",
"11379888",
"15564115",
"1761580",
"24651655",
"25460400",
"21062283",
"19356762",
"16930608",
"12684970",
"17393335",
"23982908",
"20819138",
"18191864",
"266827",
"25458347",
"13047276",
"19252985",
"19627392",
"17141788",
"16214491",
"7560417",
"9302617",
"10414871",
"28464688",
"27376178",
"28258640",
"20728866",
"8028866",
"10456606",
"19627517",
"30596523",
"11000383",
"16524337",
"16125714",
"29993500",
"30786005",
"24482784",
"18298185",
"282342",
"23871384",
"20635264",
"11988934",
"17985243",
"18946004",
"3839795",
"7657681",
"28536534",
"24182620",
"20096414",
"31150325"
] |
{
"pmid": "29803823",
"title": "Design and clinical outcome of a novel 3D-printed prosthetic joint replacement for the human temporomandibular joint.",
"abstract": "BACKGROUND\nStock prosthetic temporomandibular joint replacements come in limited sizes, and do not always encompass the joint anatomy that presents clinically. The aims of this study were twofold. Firstly, to design a personalized prosthetic total joint replacement for the treatment of a patient's end-stage temporomandibular joint osteoarthritis, to implant the prosthesis into the patient, and assess clinical outcome 12-months post-operatively; and secondly, to evaluate the influence of changes in prosthetic condyle geometry on implant load response during mastication.\n\n\nMETHODS\nA 48-year-old female patient with Grade-5 osteoarthritis to the left temporomandibular joint was recruited, and a prosthesis developed to match the native temporomandibular joint anatomy. The prosthesis was 3D printed, sterilized and implanted into the patient, and pain and function measured 12-months post-operatively. The prosthesis load response during a chewing-bite and maximum-force bite was evaluated using a personalized multi-body musculoskeletal model. Simulations were performed after perturbing condyle thickness, neck length and head sphericity.\n\n\nFINDINGS\nIncreases in prosthetic condyle neck length malaligned the mandible and perturbed temporomandibular joint force. Changes in condylar component thickness greatly influenced fixation screw stress response, while a more eccentric condylar head increased prosthetic joint-contact loading. Post-operatively, the prosthetic temporomandibular joint surgery reduced patient pain from 7/10 to 1/10 on a visual analog scale, and increased intercisal opening distance from 22 mm to 38 mm.\n\n\nINTERPRETATION\nThis study demonstrates effectiveness of a personalized prosthesis that may ultimately be adapted to treat a wide-range of end-stage temporomandibular joint conditions, and highlights sensitivity of prosthesis load response to changes in condylar geometry."
},
{
"pmid": "25565306",
"title": "Prosthesis loading after temporomandibular joint replacement surgery: a musculoskeletal modeling study.",
"abstract": "One of the most widely reported complications associated with temporomandibular joint (TMJ) prosthetic total joint replacement (TJR) surgery is condylar component screw loosening and instability. The objective of this study was to develop a musculoskeletal model of the human jaw to assess the influence of prosthetic condylar component orientation and screw placement on condylar component loading during mastication. A three-dimensional model of the jaw comprising the maxilla, mandible, masticatory muscles, articular cartilage, and articular disks was developed. Simulations of mastication and a maximum force bite were performed for the natural TMJ and the TMJ after prosthetic TJR surgery, including cases for mastication where the condylar component was rotated anteriorly by 0 deg, 5 deg, 10 deg, and 15 deg. Three clinically significant screw configurations were investigated: a complete, posterior, and minimal-posterior screw (MPS) configuration. Increases in condylar anterior rotation led to an increase in prosthetic condylar component contact stresses and substantial increases in condylar component screw stresses. The use of more screws in condylar fixation reduced screw stress magnitudes and maximum condylar component stresses. Screws placed superiorly experienced higher stresses than those of all other condylar fixation screws. The results of the present study have important implication for the way in which prosthetic components are placed during TMJ prosthetic TJR surgery."
},
{
"pmid": "28639682",
"title": "Introduction to Force-Dependent Kinematics: Theory and Application to Mandible Modeling.",
"abstract": "Knowledge of the muscle, ligament, and joint forces is important when planning orthopedic surgeries. Since these quantities cannot be measured in vivo under normal circumstances, the best alternative is to estimate them using musculoskeletal models. These models typically assume idealized joints, which are sufficient for general investigations but insufficient if the joint in focus is far from an idealized joint. The purpose of this study was to provide the mathematical details of a novel musculoskeletal modeling approach, called force-dependent kinematics (FDK), capable of simultaneously computing muscle, ligament, and joint forces as well as internal joint displacements governed by contact surfaces and ligament structures. The method was implemented into the anybody modeling system and used to develop a subject-specific mandible model, which was compared to a point-on-plane (POP) model and validated against joint kinematics measured with a custom-built brace during unloaded emulated chewing, open and close, and protrusion movements. Generally, both joint models estimated the joint kinematics well with the POP model performing slightly better (root-mean-square-deviation (RMSD) of less than 0.75 mm for the POP model and 1.7 mm for the FDK model). However, substantial differences were observed when comparing the estimated joint forces (RMSD up to 24.7 N), demonstrating the dependency on the joint model. Although the presented mandible model still contains room for improvements, this study shows the capabilities of the FDK methodology for creating joint models that take the geometry and joint elasticity into account."
},
{
"pmid": "11379888",
"title": "Dynamic properties of the human temporomandibular joint disc.",
"abstract": "The cartilaginous intra-articular disc of the human temporomandibular joint shows clear anteroposterior variations in its morphology. However, anteroposterior variations in its tissue behavior have not been investigated thoroughly. To test the hypothesis that the mechanical properties of fresh human temporomandibular joint discs vary in anteroposterior direction, we performed dynamic indentation tests at three anteroposteriorly different locations. The disc showed strong viscoelastic behavior dependent on the amplitude and frequency of the indentation, the location, and time. The resistance against deformations and the shock absorbing capabilities were larger in the intermediate zone than in regions located more anteriorly and posteriorly. Because several studies have predicted that the intermediate zone is the predominantly loaded region of the disc, it can be concluded that the topological variations in its tissue behavior enable the disc to combine the functions of load distribution and shock absorption effectively."
},
{
"pmid": "15564115",
"title": "Multibody dynamic simulation of knee contact mechanics.",
"abstract": "Multibody dynamic musculoskeletal models capable of predicting muscle forces and joint contact pressures simultaneously would be valuable for studying clinical issues related to knee joint degeneration and restoration. Current three-dimensional multibody knee models are either quasi-static with deformable contact or dynamic with rigid contact. This study proposes a computationally efficient methodology for combining multibody dynamic simulation methods with a deformable contact knee model. The methodology requires preparation of the articular surface geometry, development of efficient methods to calculate distances between contact surfaces, implementation of an efficient contact solver that accounts for the unique characteristics of human joints, and specification of an application programming interface for integration with any multibody dynamic simulation environment. The current implementation accommodates natural or artificial tibiofemoral joint models, small or large strain contact models, and linear or nonlinear material models. Applications are presented for static analysis (via dynamic simulation) of a natural knee model created from MRI and CT data and dynamic simulation of an artificial knee model produced from manufacturer's CAD data. Small and large strain natural knee static analyses required 1 min of CPU time and predicted similar contact conditions except for peak pressure, which was higher for the large strain model. Linear and nonlinear artificial knee dynamic simulations required 10 min of CPU time and predicted similar contact force and torque but different contact pressures, which were lower for the nonlinear model due to increased contact area. This methodology provides an important step toward the realization of dynamic musculoskeletal models that can predict in vivo knee joint motion and loading simultaneously."
},
{
"pmid": "1761580",
"title": "Articular contact in a three-dimensional model of the knee.",
"abstract": "This study is aimed at the analysis of articular contact in a three-dimensional mathematical model of the human knee-joint. In particular the effect of articular contact on the passive motion characteristics is assessed in relation to experimentally obtained joint kinematics. Two basically different mathematical contact descriptions were compared for this purpose. One description was for rigid contact and one for deformable contact. The description of deformable contact is based on a simplified theory for contact of a thin elastic layer on a rigid foundation. The articular cartilage was described either as a linear elastic material or as a non-linear elastic material. The contact descriptions were introduced in a mathematical model of the knee. The locations of the ligament insertions and the geometry of the articular surfaces were obtained from a joint specimen of which experimentally determined kinematic data were available, and were used as input for the model. The ligaments were described by non-linear elastic line elements. The mechanical properties of the ligaments and the articular cartilage were derived from literature data. Parametric model evaluations showed that, relative to rigid articular contact, the incorporation of deformable contact did not alter the motion characteristics in a qualitative sense, and that the quantitative changes were small. Variation of the elasticity of the elastic layer revealed that decreasing the surface stiffness caused the ligaments to relax and, as a consequence, increased the joint laxity, particularly for axial rotation. The difference between the linear and the non-linear deformable contact in the knee model was very small for moderate loading conditions. The motion characteristics simulated with the knee model compared very well with the experiments. It is concluded that for simulation of the passive motion characteristics of the knee, the simplified description for contact of a thin linear elastic layer on a rigid foundation is a valid approach when aiming at the study of the motion characteristics for moderate loading conditions. With deformable contact in the knee model, geometric conformity between the surfaces can be modelled as opposed to rigid contact which assumed only point contact."
},
{
"pmid": "24651655",
"title": "A study of the temporomandibular joint during bruxism.",
"abstract": "A finite element model of the temporomandibular joint (TMJ) and the human mandible was fabricated to study the effect of abnormal loading, such as awake and asleep bruxism, on the articular disc. A quasilinear viscoelastic model was used to simulate the behaviour of the disc. The viscoelastic nature of this tissue is shown to be an important factor when sustained (awake bruxism) or cyclic loading (sleep bruxism) is simulated. From the comparison of the two types of bruxism, it was seen that sustained clenching is the most detrimental activity for the TMJ disc, producing an overload that could lead to severe damage of this tissue."
},
{
"pmid": "25460400",
"title": "Finite element analysis of the human mastication cycle.",
"abstract": "The aim of this paper is to propose a biomechanical model that could serve as a tool to overcome some difficulties encountered in experimental studies of the mandible. One of these difficulties is the inaccessibility of the temporomandibular joint (TMJ) and the lateral pterygoid muscle. The focus of this model is to study the stresses in the joint and the influence of the lateral pterygoid muscle on the mandible movement. A finite element model of the mandible, including the TMJ, was built to simulate the process of unilateral mastication. Different activation patterns of the left and right pterygoid muscles were tried. The maximum stresses in the articular disc and in the whole mandible during a complete mastication cycle were reached during the instant of centric occlusion. The simulations show a great influence of the coordination of the right and left lateral pterygoid muscles on the movement of the jaw during mastication. An asynchronous activation of the lateral pterygoid muscles is needed to achieve a normal movement of the jaw during mastication."
},
{
"pmid": "21062283",
"title": "Craniofacial biomechanics: an overview of recent multibody modelling studies.",
"abstract": "Multibody modelling is underutilised in craniofacial analyses, particularly when compared to other computational methods such as finite element analysis. However, there are many potential applications within this area, where bony movements, muscle forces, joint kinematics and bite forces can all be studied. This paper provides an overview of recent, three-dimensional, multibody modelling studies related to the analysis of skulls. The goal of this paper is not to offer a critical review of past studies, but instead intends to inform the reader of what has been achieved with multibody modelling."
},
{
"pmid": "19356762",
"title": "Prediction of the articular eminence shape in a patient with unilateral hypoplasia of the right mandibular ramus before and after distraction osteogenesis-A simulation study.",
"abstract": "The aim of this work was to predict the shape of the articular eminence in a patient with unilateral hypoplasia of the right mandibular ramus before and after distraction osteogenesis (DO). Using a patient-specific musculoskeletal model of the mandible the hypothesis that the observed differences in this patient in the left and right articular eminence inclinations were consistent with minimisation of joint loads was tested. Moreover, a prediction was made of the final shape of the articular eminence after DO when the expected remodelling has reached a steady state. The individual muscle forces and the average TMJ loading were computed for each combination of articular eminence angles both before and after DO. This exhaustive parameter study provides a full overview of average TMJ loading depending on the angles of the articular eminences. Before DO the parameter study resulted in different articular eminence inclinations between left and right sides consistent with patient data obtained from CT scans, indicating that in this patient the articular eminence shapes result from minimisation of joint loads. The simulation model predicts development of almost equal articular eminence shapes after DO. The same tendency was observed in cone beam CT scans (NewTom) of the patient taken 6.5 years after surgery."
},
{
"pmid": "16930608",
"title": "Validation of a musculo-skeletal model of the mandible and its application to mandibular distraction osteogenesis.",
"abstract": "Mandibular distraction osteogenesis will lead to a change in muscle coordination and load transfer to the temporomandibular joints (TMJ). The objective of this work is to present and validate a rigid-body musculo-skeletal model of the mandible based on inverse dynamics for calculation of the muscle activations, muscle forces and TMJ reaction forces for different types of clenching tasks and dynamic tasks. This approach is validated on a symmetric mandible model and an application will be presented where the TMJ reaction forces during unilateral clenching are estimated for a virtual distraction patient with a shortened left ramus. The mandible model consists of 2 rigid segments and has 4 degrees-of-freedom. The model was equipped with 24 hill-type musculotendon actuators. During the validation experiment one subject was asked to do several tasks while measuring EMG activity, bite force and kinematics. The bite force and kinematics were used as input for the simulations of the same tasks after which the estimated muscle activities were compared with the measured muscle activities. This resulted in an average correlation coefficient of 0.580 and an average of the Mean Absolute Error of 0.109. The virtual distraction model showed a large difference in the TMJ reaction forces between left and right compared with the symmetric model for the same loading case. The present work is a step in the direction of building patient-specific mandible models, which can assess the mechanical effects on the TMJ before mandibular distraction osteogenesis surgery."
},
{
"pmid": "12684970",
"title": "Structure and function of the temporomandibular joint disc: implications for tissue engineering.",
"abstract": "The temporomandibular joint (TMJ) disc is a little understood structure that, unfortunately, exhibits a plethora of pathologic disorders. Tissue engineering approaches may be warranted to address TMJ disc pathophysiology, but first a clear understanding of structure-function relationships needs to be developed, especially as they relate to the regenerative potential of the tissue. In this review, we correlate the biochemical content of the TMJ disc to its mechanical behavior and discuss what this correlation infers for tissue engineering studies of the TMJ disc. The disc of the TMJ exhibits a somewhat biconcave shape, being thicker in the anterior and posterior bands and thinner in the intermediate zone. The disc, which is certainly an anisotropic and nonhomogeneous tissue, consists almost entirely of type I collagen with trace amounts of type II and other types. In general, collagen fibers in the intermediate zone appear to run primarily in an anteroposterior direction and in a ringlike fashion around the periphery. Collagen orientation is reflected in higher tensile stiffness and strength in the center anteroposteriorly than mediolaterally and in the anterior and posterior bands than the intermediate zone mediolaterally. Tensile tests have shown the disc is stiffer and stronger in the direction of the collagen fibers. Elastin fibers in general appear along the collagen fibers and most likely function in restoring and retaining disc form after loading. The 2 primary glycosaminoglycans of the disc by far are chondroitin sulfate and dermatan sulfate, although their distribution is not clear. Compression studies are conflicting, but evidence suggests the disc is compressively stiffest in the center. Only a few tissue engineering studies of the TMJ disc have been performed to date. Tissue engineering studies must take advantage of existing information for experimental design and construct validation, and more research is necessary to characterize the disc to create a clearer picture of our goals in tissue engineering the TMJ disc."
},
{
"pmid": "17393335",
"title": "A call to action for bioengineers and dental professionals: directives for the future of TMJ bioengineering.",
"abstract": "The world's first TMJ Bioengineering Conference was held May 25-27, 2006, in Broomfield, Colorado. Presentations were given by 34 invited speakers representing industry, academics, government agencies such as NIH, and private practice, which included surgeons, engineers, biomedical scientists, and patient advocacy leaders. Other attendees included documentary film makers and FDA officials. The impetus for the conference was that the field of TMJ research has been lacking continuity, with no open forum available for surgeons, scientists, and bioengineers to exchange scientific and clinical ideas and identify common goals, strengths, and capabilities. The goal was thus to plant the seeds for establishing a forum for multidisciplinary and interdisciplinary interactions. The collective wisdom and interactions brought about by a melting pot of these diverse individuals has been pooled and is disseminated in this article, which offers specific directives to bioengineers, basic scientists, and medical and dental professionals including oral and maxillofacial surgeons, pain specialists, orthodontists, prosthodontists, endocrinologists, rheumatologists, immunologists, radiologists, neurologists, and orthopaedic surgeons. A primary goal of this article was to attract researchers across a breadth of research areas to lend their expertise to a significant clinical problem with a dire need for new talent. For example, researchers with expertise in finite element modeling will find an extensive list of clinically significant problems. Specific suggestions for TMJ research were presented by the leading organizations for TMJ surgeons and TMJ patients, and further research needs were identified in a series of group discussions. The specific needs identified at the conference and presented here will be essential for those who endeavor to engage in TMJ research, especially in the areas of tissue engineering and biomechanics. Collectively, it is our hope that many of the questions and directives presented here find their way into the proposals of multidisciplinary teams across the world with new and promising approaches to diagnose, prevent and treat TMJ disorders."
},
{
"pmid": "23982908",
"title": "Bone remodelling in the natural acetabulum is influenced by muscle force-induced bone stress.",
"abstract": "A modelling framework using the international Physiome Project is presented for evaluating the role of muscles on acetabular stress patterns in the natural hip. The novel developments include the following: (i) an efficient method for model generation with validation; (ii) the inclusion of electromyography-estimated muscle forces from gait; and (iii) the role that muscles play in the hip stress pattern. The 3D finite element hip model includes anatomically based muscle area attachments, material properties derived from Hounsfield units and validation against an Instron compression test. The primary outcome from this study is that hip loading applied as anatomically accurate muscle forces redistributes the stress pattern and reduces peak stress throughout the pelvis and within the acetabulum compared with applying the same net hip force without muscles through the femur. Muscle forces also increased stress where large muscles have small insertion sites. This has implications for the hip where bone stress and strain are key excitation variables used to initiate bone remodelling based on the strain-based bone remodelling theory. Inclusion of muscle forces reduces the predicted sites and degree of remodelling. The secondary outcome is that the key muscles that influenced remodelling in the acetabulum were the rectus femoris, adductor magnus and iliacus."
},
{
"pmid": "20819138",
"title": "Current computational modelling trends in craniomandibular biomechanics and their clinical implications.",
"abstract": "Computational models of interactions in the craniomandibular apparatus are used with increasing frequency to study biomechanics in normal and abnormal masticatory systems. Methods and assumptions in these models can be difficult to assess by those unfamiliar with current practices in this field; health professionals are often faced with evaluating the appropriateness, validity and significance of models which are perhaps more familiar to the engineering community. This selective review offers a foundation for assessing the strength and implications of a craniomandibular modelling study. It explores different models used in general science and engineering and focuses on current best practices in biomechanics. The problem of validation is considered at some length, because this is not always fully realisable in living subjects. Rigid-body, finite element and combined approaches are discussed, with examples of their application to basic and clinically relevant problems. Some advanced software platforms currently available for modelling craniomandibular systems are mentioned. Recent studies of the face, masticatory muscles, tongue, craniomandibular skeleton, temporomandibular joint, dentition and dental implants are reviewed, and the significance of non-linear and non-isotropic material properties is emphasised. The unique challenges in clinical application are discussed, and the review concludes by posing some questions which one might reasonably expect to find answered in plausible modelling studies of the masticatory apparatus."
},
{
"pmid": "18191864",
"title": "A dynamic model of jaw and hyoid biomechanics during chewing.",
"abstract": "Our understanding of human jaw biomechanics has been enhanced by computational modelling, but comparatively few studies have addressed the dynamics of chewing. Consequently, ambiguities remain regarding predicted jaw-gapes and forces on the mandibular condyles. Here, we used a new platform to simulate unilateral chewing. The model, based on a previous study, included curvilinear articular guidance, a mobile hyoid apparatus, and a compressible food bolus. Muscles were represented by Hill-type actuators with drive profiles tuned to produce target jaw and hyoid movements. The cycle duration was 732 ms. At maximum gape, the lower incisor-point was 20.1mm down, 5.8mm posterior, and 2.3mm lateral to its initial, tooth-contact position. Its maximum laterodeviation to the working-side during closing was 6.1mm, at which time the bolus was struck. The hyoid's movement, completed by the end of jaw-opening, was 3.4mm upward and 1.6mm forward. The mandibular condyles moved asymmetrically. Their compressive loads were low during opening, slightly higher on the working-side at bolus-collapse, and highest bilaterally when the teeth contacted. The model's movements and the directions of its condylar forces were consistent with experimental observations, resolving seeming discordances in previous simulations. Its inclusion of hyoid dynamics is a step towards modelling mastication."
},
{
"pmid": "266827",
"title": "Thickness of the soft tissue layers and the articular disk in the temporomandibular joint.",
"abstract": "Out of 115 right temporomandibular joints from Swedish subjects aged 1 day to 93 years, 48 joints without any gross sign of arthrosis or deviation in form were examined histologically. The joint components were cut sagittaly, each into four parts. Histological sections were made of the condyle, the temporal component and of the articular disk. The total thickness of the soft tissue layers was measured in decalcified sections, cut from the medio-central and lateral parts of the condyle and the temporal component and from the medial, medio-central, latero-central and lateral regions of the disk. In the medio-central sections from the condyle and temporal component the thickness of the fibrous connective tissue layer i.e. the surface layer was also registered. The soft tissue layers were thickest in the condyle superiorly, about 0.4-0.5 mm, in the temporal component on the postero-inferior slope of the articular tubercle, about 0.5 mm, and in the disk posteriorly about 2.9 mm. In the roof of the fossa it was only 0.1 mm. The soft tissue layers on the condyle as well as the disk were thinner laterally while the corresponding tissue in the temporal component was thicker laterally. The thickness of the soft tissue layers seem to reflect the growth and functional load to which the joint is exposed."
},
{
"pmid": "25458347",
"title": "The influence of unilateral disc displacement on stress in the contralateral joint with a normally positioned disc in a human temporomandibular joint: an analytic approach using the finite element method.",
"abstract": "OBJECTIVES\nTo investigate the influence of unilateral disc displacement (DD) in the temporomandibular joint (TMJ) on the stress in the contralateral joint, with a normally-positioned disc, during clenching.\n\n\nSTUDY DESIGN\nA finite element model of the TMJ was constructed based on MRI and 3D-CT of a single patient with a unilateral DD. A second model with bilateral normally-positioned discs served as a reference. The differences in stress distribution in various TMJ components during clenching were predicted with these models.\n\n\nRESULTS\nIn the unaffected joint of the unilateral DD model, the largest von Mises stress at the start of clenching was predicted in the inferior surface of the disc and increased by 30% during clenching. In the connective tissue the largest stress (1.16 MPa) did not reduce during clenching, in contrast to the (unaffected) joints of the reference model. In the affected joint, the largest stress was predicted in the temporal cartilage throughout clenching. In the surrounding connective tissue, the largest stress (1.42 MPa) hardly changed during clenching indicating no, or negligible, stress relaxation.\n\n\nCONCLUSIONS\nThis suggested that a unilateral DD could affect the stresses in the unaffected (contralateral) joint during clenching, where it may lead to weakening of the tissues that keep the disc on the top of the condyle. The results may be helpful in counseling worried patients, since they give insight into possible future developments of the disorder."
},
{
"pmid": "19252985",
"title": "Temporomandibular joint: disorders, treatments, and biomechanics.",
"abstract": "Temporomandibular joint (TMJ) is a complex, sensitive, and highly mobile joint. Millions of people suffer from temporomandibular disorders (TMD) in USA alone. The TMD treatment options need to be looked at more fully to assess possible improvement of the available options and introduction of novel techniques. As reconstruction with either partial or total joint prosthesis is the potential treatment option in certain TMD conditions, it is essential to study outcomes of the FDA approved TMJ implants in a controlled comparative manner. Evaluating the kinetics and kinematics of the TMJ enables the understanding of structure and function of normal and diseased TMJ to predict changes due to alterations, and to propose more efficient methods of treatment. Although many researchers have conducted biomechanical analysis of the TMJ, many of the methods have certain limitations. Therefore, a more comprehensive analysis is necessary for better understanding of different movements and resulting forces and stresses in the joint components. This article provides the results of a state-of-the-art investigation of the TMJ anatomy, TMD, treatment options, a review of the FDA approved TMJ prosthetic devices, and the TMJ biomechanics."
},
{
"pmid": "19627392",
"title": "Tensile stress patterns predicted in the articular disc of the human temporomandibular joint.",
"abstract": "The direction of the first principal stress in the articular disc of the temporomandibular joint was predicted with a biomechanical model of the human masticatory system. The results were compared with the orientation of its collagen fibers. Furthermore, the effect of an active pull of the superior lateral pterygoid muscle, which is directly attached to the articular disc, was studied. It was hypothesized that the markedly antero-posterior direction of the collagen fibers would be reflected in the direction of the tensile stresses in the disc and that active pull of the superior lateral pterygoid muscle would augment these tensions. It was found that the tensile patterns were extremely dependent on the stage of movement and on the mandibular position. They differed between the superior and inferior layers of the disc. The hypothesis could only be confirmed for the anterior and middle portions of the disc. The predicted tensile principal stresses in the posterior part of the disc alternated between antero-posterior and medio-lateral directions."
},
{
"pmid": "17141788",
"title": "Viscoelastic material model for the temporomandibular joint disc derived from dynamic shear tests or strain-relaxation tests.",
"abstract": "Viscoelastic material models for the temporomandibular joint disc, based upon strain relaxation, were considered to underestimate energy absorption for loads with time constants beyond the relaxation time. Therefore, the applicability of a material model that takes the viscous behavior at a wide range of frequencies into account was assessed. To that purpose a non-linear multi-mode Maxwell model was tested in cyclic large-strain compression tests. Its material constants were approximated from dynamic small-strain shear deformation tests. The storage and loss moduli as obtained from a disc sample could be approximated with a four-mode Maxwell model. In simulated large-strain compression tests it behaved similarly as observed from the experimental tests. The underestimation of energy dissipation, as obtained from a single-mode Maxwell model was considerably reduced, especially for deformations with a higher strain rate. Furthermore, in contrast to the latter it was able to predict the increase of the stress amplitude with the compression frequency much better. In conclusion, the applied four-mode Maxwell model, based upon dynamic shear tests, was considered more suitable to predict higher frequency viscoelastic response, for instance during shock absorption, than a model based upon strain-relaxation."
},
{
"pmid": "16214491",
"title": "Combined finite-element and rigid-body analysis of human jaw joint dynamics.",
"abstract": "The jaw joint plays a crucial role in human mastication. It acts as a guidance for jaw movements and as a fulcrum for force generation. The joint is subjected to loading which causes tensions and deformations in its cartilaginous structures. These are assumed to be a major determinant for development, maintenance and also degeneration of the joint. To analyze the distribution of tensions and deformations in the cartilaginous structures of the jaw joint during jaw movement, a dynamical model of the human masticatory system has been constructed. Its movements are controlled by muscle activation. The articular cartilage layers and articular disc were included as finite-element (FE) models. As this combination of rigid-body and FE modeling had not been applied to musculoskeletal systems yet, its benefits and limitations were assessed by simulating both unloaded and loaded jaw movements. It was demonstrated that joint loads increase with muscle activation, irrespective of the external loads. With increasing joint load, the size of the stressed area of the articular surfaces was enlarged, whereas the peak stresses were much less affected. The results suggest that the articular disc enables distribution of local contact stresses over a much wider area of the very incongruent articular surfaces by transforming compressive principal stress into shear stress."
},
{
"pmid": "7560417",
"title": "Biomechanical analysis of jaw-closing movements.",
"abstract": "This study concerns the complex interaction between active muscle forces and passive guiding structures during jaw-closing movements. It is generally accepted that the ligaments of the joint play a major role in condylar guidance during these movements. While these ligaments permit a wide range of motions, it was assumed that they are not primarily involved in force transmission in the joints. Therefore, it was hypothesized that muscle forces and movement constraints caused by the articular surfaces imply a necessary and sufficient condition to generate ordinary jaw-closing movements. This hypothesis was tested by biomechanical analysis. A dynamic six-degrees-of-freedom mathematical model of the human masticatory system has been developed for qualitative analysis of the contributions of the different masticatory muscles to jaw-closing movements, it was found that the normally observed movement, which includes a swing-slide condylar movement along the articular eminence, can be generated by various separate pairs of masticatory muscles, among which the different parts of the masseter as well as the medial pterygoid muscle appeared to be the most suitable to complete this action. The results seem to be in contrast to the general opinion that a muscle with a forward-directed force component may not be suitable for generating jaw movements in which the condyle moves backward. The results can be explained, however, by biomechanical analysis which includes not only muscle and joint forces as used in standard textbooks of anatomy, but also the torques generated by these forces."
},
{
"pmid": "9302617",
"title": "The jaw open-close movements predicted by biomechanical modelling.",
"abstract": "The aim of this study was to analyse unloaded jaw-opening and jaw-closing movements in humans. For this purpose a dynamical 6-degree-of-freedom mathematical model of the human masticatory system was developed. It incorporated morphology, muscle architecture and dynamical muscle properties. Various symmetrical jaw-opening and jaw-closing movements were simulated based upon different muscle activation schemes. It was found that the balance between swing and slide of the mandibular condyle at the onset of a jaw-opening movement was predominantly dependent on the level of activation of the digastric and inferior lateral pterygoid muscles. The level of activation of the temporalis muscle parts was of critical importance for the jaw-closing movements. The amount of jaw opening was limited by the passive forces of the jaw-closing muscles. In contrast, the influence of the passive forces of the jaw-opening muscles on the jaw-closing movement was neglectable. Throughout the movements the temporomandibular joints remained loaded. The average torques generated by the jaw-opening or jaw-closing muscles with respect to the centre of gravity of the lower jaw had similar orientations and can be considered to be responsible for joint stabilization. The average direction of their lines of action, however, was about opposite, and this can be considered as the major discriminant between a movement in opening or closing direction."
},
{
"pmid": "10414871",
"title": "The role of passive muscle tensions in a three-dimensional dynamic model of the human jaw.",
"abstract": "The role of passive muscle tensions in human jaw function are largely unknown. It seems reasonable to assume that passive muscle-tension properties are optimized for the multiple physiological tasks the jaw performs in vivo. However, the inaccessibility of the jaw muscles is a major obstacle to measuring their passive tensions, and understanding their effects. Computer modelling offers an alternative method for doing this. Here, a three-dimensional, dynamic model was used to predict active and passive jaw-muscle tensions during simulated postural rest, jaw opening and chewing. The model included a rigid mandible, two temporomandibular joints, multiple dental bite points, and an artificial food bolus located between the right first molars. It was driven by 18 Hill-type actuators representing nine pairs of jaw muscles. All anatomical forms, positions and properties used in the model were based on previously published, average values. Two states were stimulated, one in which all optimal lengths for the length-tension curves in the closing muscles were defined as their fibre-component lengths when the incisor teeth were 2 mm apart (S2), and another in which the optimal lengths were set for a 12.0 mm interincisal separation (S12). At rest, the jaw attained 3.6 mm interincisal separation in S2, and 14.8 mm in S12. Activation of the inferior lateral pterygoid (ILP) and digastric (DG) muscles in various combinations always induced passive jaw-closer tensions, and compressive condylar loads. Maximum midline gape (from maximum bilateral co-activation of DG and ILP) was 16.2 mm in S2, and 32.0 mm in S12. When both model states were driven with muscle patterns typical for human mastication, recognizable unilateral and vertical \"chopping\" chewing cycles were produced. Both states revealed condylar loading in the opening and closing phases of mastication. During unilateral chewing, compressive force on the working-side condyle exceeded that on the balancing side. In contrast, during the \"chopping\" cycle, loading on the balancing side was greater than that on the working side. In S2, chewing was limited in both vertical and lateral directions. These results suggest that the assumptions used in S12 more closely approximated human behaviour than those in S2. Despite its limitations, modelling appears to provide a useful conceptual framework for developing hypotheses regarding the role of muscle tensions during human jaw function."
},
{
"pmid": "28464688",
"title": "Variability in muscle activation of simple speech motions: A biomechanical modeling approach.",
"abstract": "Biomechanical models of the oropharynx facilitate the study of speech function by providing information that cannot be directly derived from imaging data, such as internal muscle forces and muscle activation patterns. Such models, when constructed and simulated based on anatomy and motion captured from individual speakers, enable the exploration of inter-subject variability of speech biomechanics. These models also allow one to answer questions, such as whether speakers produce similar sounds using essentially the same motor patterns with subtle differences, or vastly different motor equivalent patterns. Following this direction, this study uses speaker-specific modeling tools to investigate the muscle activation variability in two simple speech tasks that move the tongue forward (/ə-ɡis/) vs backward (/ə-suk/). Three dimensional tagged magnetic resonance imaging data were used to inversely drive the biomechanical models in four English speakers. Results show that the genioglossus is the workhorse muscle of the tongue, with activity levels of 10% in different subdivisions at different times. Jaw and hyoid positioners (inferior pterygoid and digastric) also show high activation during specific phonemes. Other muscles may be more involved in fine tuning the shapes. For example, slightly more activation of the anterior portion of the transverse is found during apical than laminal /s/, which would protrude the tongue tip to a greater extent for the apical /s/."
},
{
"pmid": "27376178",
"title": "The influence of the human TMJ eminence inclination on predicted masticatory muscle forces.",
"abstract": "Aim of this paper was to investigate the change in masticatory muscle forces and temporomandibular joint (TMJ) reaction forces simulated by inverse dynamics when thesteepness of the anterior fossa slope was varied. We used the model by de Zee et al. (2007) created in AnyBody™. The model was equipped with 24musculotendon actuators. Mandibular movement was governed by thetrajectory of theincisal point. The TMJ was modelled as a planar constraint canted 5°medially and thecaudal inclination relative to the occlusal plane was varied from 10° to 70°. Our models showed that for the two simulated movements (empty chewing and unilateral clenching) the joint reaction forces were smallest for the eminence inclination of 30° and 40° and highest for 70°. The muscle forces were relatively insensitive to change of the eminence inclination for the angles between 20° and 50°. This did not hold for the pterygoid muscle, for which the muscle forces increased continually with increasing fossa inclination. For empty chewing the muscle force reached smaller values than for clenching. During clenching, the muscle forces changed by up to 200N."
},
{
"pmid": "28258640",
"title": "Realistic kinetic loading of the jaw system during single chewing cycles: a finite element study.",
"abstract": "Although knowledge of short-range kinetic interactions between antagonistic teeth during mastication is of essential importance for ensuring interference-free fixed dental reconstructions, little information is available. In this study, the forces on and displacements of the teeth during kinetic molar biting simulating the power stroke of a chewing cycle were investigated by use of a finite-element model that included all the essential components of the human masticatory system, including an elastic food bolus. We hypothesised that the model can approximate the loading characteristics of the dentition found in previous experimental studies. The simulation was a transient analysis, that is, it considered the dynamic behaviour of the jaw. In particular, the reaction forces on the teeth and joints arose from contact, rather than nodal forces or constraints. To compute displacements of the teeth, the periodontal ligament (PDL) was modelled by use of an Ogden material model calibrated on the basis of results obtained in previous experiments. During the initial holding phase of the power stroke, bite forces were aligned with the roots of the molars until substantial deformation of the bolus occurred. The forces tilted the molars in the bucco-lingual and mesio-distal directions, but as the intrusive force increased the teeth returned to their initial configuration. The Ogden material model used for the PDL enabled accurate prediction of the displacements observed in experimental tests. In conclusion, the comprehensive kinetic finite element model reproduced the kinematic and loading characteristics of previous experimental investigations."
},
{
"pmid": "20728866",
"title": "Three-dimensional finite element analysis of cartilaginous tissues in human temporomandibular joint during prolonged clenching.",
"abstract": "OBJECTIVE\nBruxism, the parafunctional habit of nocturnal grinding of the teeth and clenching, is associated with the onset of joint degeneration. Especially prolonged clenching is suggested to cause functional overloading in the temporomandibular joint (TMJ). In this study, the distributions of stresses in the cartilaginous TMJ disc and articular cartilage, were analysed during prolonged clenching. The purpose of this study was to examine if joint degradation due to prolonged clenching can be attributed to changes in stress concentration in the cartilaginous tissues.\n\n\nDESIGN\nFinite element model was developed on the basis of magnetic resonance images from a healthy volunteer. Condylar movements recorded during prolonged clenching were used as the loading condition for stress analysis.\n\n\nRESULTS\nAt the onset of clenching (time=0s), the highest von Mises stresses were located in the middle and posterior areas (6.18MPa) of the inferior disc surface facing the condylar cartilage. The largest magnitude of the minimum principal stress (-6.72MPa) was found in the condylar cartilage. The stress concentrations were relieved towards the superior disc surface facing the temporal cartilage. On the surfaces of the temporal cartilage, relatively lower stresses were found. After 5-min clenching, both stress values induced in the TMJ components were reduced to 50-80% of the stress values at the onset of clenching, although the concomitant strains increased slightly during this period.\n\n\nCONCLUSIONS\nIt is suggested that both the condylar and temporal cartilage layers along with the TMJ disc, play an important role in stress distribution and transmission during prolonged clenching due to tissue expansion. Furthermore, our study suggests that a development of stress concentrations in the TMJ during prolonged clenching and risk factors for the initiation of TMJ degeneration could not be confirmed."
},
{
"pmid": "8028866",
"title": "Positional change of the hyoid bone at maximal mouth opening.",
"abstract": "The positional change of the hyoid bone in both closed and maximal mouth-opening positions of the mandible was investigated by cephalometric measurements. The following results were obtained: (1) With the increase in mouth opening the hyoid bone moved downward and backward. At maximal mouth opening the head posture changed posteriorly compared with that of occluded mouth position. (2) By superimposing films of the S-N plane, it became apparent that the hyoid bone was displaced downward by sagittal opening movement of the mandible and backward by the posterior change of the head posture. (3) Significant correlations were found between the degrees of sagittal rotation of the mandible and the position of the hyoid bone. (4) These results suggest that the posterior change of the head posture and inferior shift of the hyoid bone with mouth opening are important factors in obtaining maximal mouth opening."
},
{
"pmid": "10456606",
"title": "Displacement and stress distribution in the temporomandibular joint during clenching.",
"abstract": "The aim of this study was to analyze biomechanical reactions in the mandible and TMJ during clenching under various restraint conditions. A three-dimensional finite element model of the mandible, including the TMJ, was created for test purposes. The results were as follows: (1) Under any restraint conditions, displacement was greatest on the surface of the condyle and less on the articular disc and the surface of the glenoid fossa, in that order. Resultant stresses followed the same trend. (2) Displacement and stress were greatest when the lower central incisor was restrained and attenuated as the posterior teeth were restrained. Because the biomechanical reaction of the TMJ during clenching was greatest when the lower central incisor was restrained, premature contact of these teeth may be one of the factors involved in the initiation of temporomandibular arthrosis."
},
{
"pmid": "19627517",
"title": "Static and dynamic mechanics of the temporomandibular joint: plowing forces, joint load and tissue stress.",
"abstract": "OBJECTIVES - To determine the combined effects 1) of stress-field aspect ratio and velocity and compressive strain and 2) joint load, on temporomandibular joint (TMJ) disc mechanics. SETTING AND SAMPLE POPULATION - Fifty-two subjects (30 female; 22 male) participated in the TMJ load experiments. MATERIAL AND METHODS - In the absence of human tissue, pig TMJ discs were used to determine the effects of variables 1) on surface plowing forces, and to build a biphasic finite element model (bFEM) to test the effect of human joint loads and 2) on tissue stresses. In the laboratory, discs received a 7.6 N static load via an acrylic indenter before cyclic movement. Data were recorded and analysed using anova. To determine human joint loads, Research Diagnostic Criteria calibrated investigators classified subjects based on signs of disc displacement (DD) and pain (+DD/+pain, n = 18; +DD/-pain, n = 17; -DD/-pain, n = 17). Three-dimensional geometries were produced for each subject and used in a computer model to calculate joint loads. RESULTS - The combined effects of compressive strain, and aspect ratio and velocity of stress-field translation correlated with plowing forces (R(2) = 0.85). +DD/-pain subjects produced 60% higher joint loads (ANOVA, p < 0.05), which increased bFEM-calculated compressive strain and peak total normal stress. CONCLUSIONS - Static and dynamic variables of the stress-field and subject-dependent joint load significantly affect disc mechanics."
},
{
"pmid": "30596523",
"title": "Patellofemoral cartilage stresses are most sensitive to variations in vastus medialis muscle forces.",
"abstract": "The purpose of this study was to evaluate the effects of variations in quadriceps muscle forces on patellofemoral stress. We created subject-specific finite element models for 21 individuals with chronic patellofemoral pain and 16 pain-free control subjects. We extracted three-dimensional geometries from high resolution magnetic resonance images and registered the geometries to magnetic resonance images from an upright weight bearing squat with the knees flexed at 60°. We estimated quadriceps muscle forces corresponding to 60° knee flexion during a stair climb task from motion analysis and electromyography-driven musculoskeletal modelling. We applied the quadriceps muscle forces to our finite element models and evaluated patellofemoral cartilage stress. We quantified cartilage stress using an energy-based effective stress, a scalar quantity representing the local stress intensity in the tissue. We used probabilistic methods to evaluate the effects of variations in quadriceps muscle forces from five trials of the stair climb task for each subject. Patellofemoral effective stress was most sensitive to variations in forces in the two branches of the vastus medialis muscle. Femur cartilage effective stress was most sensitive to variations in vastus medialis forces in 29/37 (78%) subjects, and patella cartilage effective stress was most sensitive to variations in vastus medialis forces in 21/37 (57%) subjects. Femur cartilage effective stress was more sensitive to variations in vastus medialis longus forces in subjects classified as maltrackers compared to normal tracking subjects (p = 0.006). This study provides new evidence of the importance of the vastus medialis muscle in the treatment of patellofemoral pain."
},
{
"pmid": "11000383",
"title": "Dynamic simulation of muscle and articular properties during human wide jaw opening.",
"abstract": "Human mandibular function is determined in part by masticatory muscle tensions and morphological restraints within the craniomandibular system. As only limited information about their interactions can be obtained in vivo, mathematical modeling is a useful alternative. It allows simulation of causal relations between structure and function and the demonstration of hypothetical events in functional or dysfunctional systems. Here, the external force required to reach maximum jaw gape was determined in five relaxed participants, and this information used, with other musculoskeletal data, to construct a dynamic, muscle-driven, three-dimensional mathematical model of the craniomandibular system. The model was programmed to express relations between muscle tensions and articular morphology during wide jaw opening. It was found that a downward force of 5 N could produce wide gape in vivo. When the model's passive jaw-closing muscle tensions were adjusted to permit this, the jaw's resting posture was lower than that normally observed in alert individuals, and low-level active tone was needed in the closer muscles to maintain a typical rest position. Plausible jaw opening to wide gape was possible when activity in the opener muscles increased incrementally over time. When the model was altered structurally by decreasing its angles of condylar guidance, jaw opening required less activity in these muscles. Plausible asymmetrical jaw opening occurred with deactivation of the ipsilateral lateral pterygoid actuator. The model's lateral deviation was limited by passive tensions in the ipsilateral medial pterygoid, which forced the jaw to return towards the midline as opening continued. For all motions, the temporomandibular joint (TMJ) components were maintained in continual apposition and displayed stable pathways despite the absence of constraining ligaments. Compressive TMJ forces were presented in all the cases and increased to maximum at wide gape. Dynamic mathematical modeling appears a useful way to study such events, which as yet are unrecordable in the human craniomandibular system."
},
{
"pmid": "16524337",
"title": "3D finite element simulation of the opening movement of the mandible in healthy and pathologic situations.",
"abstract": "One of the essential causes of disk disorders is the pathologic change in the ligamentous attachments of the disk-condyle complex. In this paper, the response of the soft components of a human temporomandibular joint during mouth opening in healthy and two pathologic situations was studied. A three-dimensional finite element model of this joint comprising the bone components, the articular disk, and the temporomandibular ligaments was developed from a set of medical images. A fiber reinforced porohyperelastic model was used to simulate the behavior of the articular disk, taking into account the orientation of the fibers in each zone of this cartilage component. The condylar movements during jaw opening were introduced as the loading condition in the analysis. In the healthy joint, it was obtained that the highest stresses were located at the lateral part of the intermediate zone of the disk. In this case, the collateral ligaments were subject to high loads, since they are responsible of the attachment of the disk to the condyle during the movement of the mandible. Additionally, two pathologic situations were simulated: damage of the retrodiscal tissue and disruption of the lateral discal ligament. In both cases, the highest stresses moved to the posterior part of the disk since it was displaced in the anterior-medial direction. In conclusion, in the healthy joint, the highest stresses were located in the lateral zone of the disk where perforations are found most often in the clinical experience. On the other hand, the results obtained in the damaged joints suggested that the disruption of the disk attachments may cause an anterior displacement of the disk and instability of the joint."
},
{
"pmid": "16125714",
"title": "Finite element analysis of the temporomandibular joint during lateral excursions of the mandible.",
"abstract": "One of the most significant characteristics of the temporomandibular joint (TMJ) is that it is in fact composed of two joints. Several finite element simulations of the TMJ have been developed but none of them analysed the different responses of its two sides during nonsymmetrical movement. In this paper, a lateral excursion of the mandible was introduced and the biomechanical behaviour of both sides was studied. A three-dimensional finite element model of the joint comprising the bone components, both articular discs, and the temporomandibular ligaments was used. A fibre-reinforced porohyperelastic model was introduced to simulate the behaviour of the articular discs, taking into account the orientation of the fibres in each zone of these cartilage components. The mandible movement during its lateral excursion was introduced as the loading condition in the analysis. As a consequence of the movement asymmetry, the discs were subjected to different load distributions. It was observed that the maximal shear stresses were located in the lateral zone of both discs and that the lateral attachment of the ipsilateral condyle-disc complex suffered a large distortion, due to the compression of this disc against the inferior surface of the temporal bone. These results may be related with possible consequences of a common disorder called bruxism. Although it would be necessary to perform an exhaustive analysis of this disorder, including the contact forces between the teeth during grinding, it could be suggested that a continuous lateral movement of the jaw may lead to perforations of both discs in their lateral part and may damage the lateral attachments of the disc to the condyle."
},
{
"pmid": "29993500",
"title": "Fast Forward-Dynamics Tracking Simulation: Application to Upper Limb and Shoulder Modeling.",
"abstract": "OBJECTIVE\nMusculoskeletal simulation can be used to estimate muscle forces in clinical movement studies. However, such simulations typically only target movement measurements and are not applicable to force exertion tasks which are commonly used in rehabilitation therapy. Simulations can also produce nonphysiological joint forces or be too slow for real-time clinical applications, such as rehabilitation with real-time feedback. The objective of this study is to propose and evaluate a new formulation of forward-dynamics assisted tracking simulation that incorporates measured reaction forces as targets or constraints without any additional computational cost.\n\n\nMETHODS\nWe illustrate our method with idealized proof-of-concept models and evaluate it with two upper limb cases: Tracking of hand reaction forces during an isometric force-generation task and constraining glenohumeral joint reaction forces for stability during arm elevation.\n\n\nRESULTS\nWe show that the addition of reaction force optimization terms within our simulations generates plausible muscle force predictions for these tasks, which are strongly related to reaction forces in addition to movement. Execution times for all models tested were not different when run with or without the reaction force optimization term, ensuring that the simulations are fast enough for real-time clinical applications.\n\n\nCONCLUSION\nOur novel reaction force optimization term leads to more realistic shoulder reaction forces, without any additional computational costs.\n\n\nSIGNIFICANCE\nOur formulation is not only valuable for shoulder simulations, but could be used in various clinical situations (e.g., for different joints and rehabilitation therapy tasks) where the direction and/or magnitude of reaction forces are of interest."
},
{
"pmid": "30786005",
"title": "In vivo prediction of temporomandibular joint disc thickness and position changes for different jaw positions.",
"abstract": "Temporomandibular joint disorders (TMD) are common dysfunctions of the masticatory region and are often linked to dislocation or changes of the temporomandibular joint (TMJ) disc. Magnetic resonance imaging (MRI) is the gold standard for TMJ imaging but standard clinical sequences do not deliver a sufficient resolution and contrast for the creation of detailed meshes of the TMJ disc. Additionally, bony structures cannot be captured appropriately using standard MRI sequences due to their low signal intensity. The objective of this study was to enable researchers to create high resolution representations of all structures of the TMJ and consequently investigate morphological as well as positional changes of the masticatory system. To create meshes of the bony structures, a single computed tomography (CT) scan was acquired. In addition, a high-resolution MRI sequence was produced, which is used to collect the thickness and position change of the disc for various static postures using bite blocks. Changes in thickness of the TMJ disc as well as disc translation were measured. The newly developed workflow successfully allows researchers to create high resolution models of all structures of the TMJ for various static positions, enabling the investigation of TMJ disc translation and deformation. Discs were thinnest in the lateral part and moved mainly anteriorly and slightly medially. The procedure offers the most comprehensive picture of disc positioning and thickness changes reported to date. The presented data can be used for the development of a biomechanical computer model of TMJ anatomy and to investigate dynamic and static loads on the components of the system, which could be useful for the prediction of TMD onset."
},
{
"pmid": "24482784",
"title": "Diagnostic Criteria for Temporomandibular Disorders (DC/TMD) for Clinical and Research Applications: recommendations of the International RDC/TMD Consortium Network* and Orofacial Pain Special Interest Group†.",
"abstract": "AIMS\nThe original Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Axis I diagnostic algorithms have been demonstrated to be reliable. However, the Validation Project determined that the RDC/TMD Axis I validity was below the target sensitivity of ≥ 0.70 and specificity of ≥ 0.95. Consequently, these empirical results supported the development of revised RDC/TMD Axis I diagnostic algorithms that were subsequently demonstrated to be valid for the most common pain-related TMD and for one temporomandibular joint (TMJ) intra-articular disorder. The original RDC/TMD Axis II instruments were shown to be both reliable and valid. Working from these findings and revisions, two international consensus workshops were convened, from which recommendations were obtained for the finalization of new Axis I diagnostic algorithms and new Axis II instruments.\n\n\nMETHODS\nThrough a series of workshops and symposia, a panel of clinical and basic science pain experts modified the revised RDC/TMD Axis I algorithms by using comprehensive searches of published TMD diagnostic literature followed by review and consensus via a formal structured process. The panel's recommendations for further revision of the Axis I diagnostic algorithms were assessed for validity by using the Validation Project's data set, and for reliability by using newly collected data from the ongoing TMJ Impact Project-the follow-up study to the Validation Project. New Axis II instruments were identified through a comprehensive search of the literature providing valid instruments that, relative to the RDC/TMD, are shorter in length, are available in the public domain, and currently are being used in medical settings.\n\n\nRESULTS\nThe newly recommended Diagnostic Criteria for TMD (DC/TMD) Axis I protocol includes both a valid screener for detecting any pain-related TMD as well as valid diagnostic criteria for differentiating the most common pain-related TMD (sensitivity ≥ 0.86, specificity ≥ 0.98) and for one intra-articular disorder (sensitivity of 0.80 and specificity of 0.97). Diagnostic criteria for other common intra-articular disorders lack adequate validity for clinical diagnoses but can be used for screening purposes. Inter-examiner reliability for the clinical assessment associated with the validated DC/TMD criteria for pain-related TMD is excellent (kappa ≥ 0.85). Finally, a comprehensive classification system that includes both the common and less common TMD is also presented. The Axis II protocol retains selected original RDC/TMD screening instruments augmented with new instruments to assess jaw function as well as behavioral and additional psychosocial factors. The Axis II protocol is divided into screening and comprehensive self report instrument sets. The screening instruments' 41 questions assess pain intensity, pain-related disability, psychological distress, jaw functional limitations, and parafunctional behaviors, and a pain drawing is used to assess locations of pain. The comprehensive instruments, composed of 81 questions, assess in further detail jaw functional limitations and psychological distress as well as additional constructs of anxiety and presence of comorbid pain conditions.\n\n\nCONCLUSION\nThe recommended evidence-based new DC/TMD protocol is appropriate for use in both clinical and research settings. More comprehensive instruments augment short and simple screening instruments for Axis I and Axis II. 
These validated instruments allow for identification of patients with a range of simple to complex TMD presentations."
},
{
"pmid": "18298185",
"title": "Tensile properties of the mandibular condylar cartilage.",
"abstract": "Mandibular condylar cartilage plays a crucial role in temporomandibular joint (TMJ) function, which includes facilitating articulation with the temporomandibular joint disc and reducing loads on the underlying bone. The cartilage experiences considerable tensile forces due to direct compression and shear. However, only scarce information is available about its tensile properties. The present study aims to quantify the biomechanical characteristics of the mandibular condylar cartilage to aid future three-dimensional finite element modeling and tissue engineering studies. Porcine condylar cartilage was tested under uniaxial tension in two directions, anteroposterior and mediolateral, with three regions per direction. Stress relaxation behavior was modeled using the Kelvin model and a second-order generalized Kelvin model, and collagen fiber orientation was determined by polarized light microscopy. The stress relaxation behavior of the tissue was biexponential in nature. The tissue exhibited greater stiffness in the anteroposterior direction than in the mediolateral direction as reflected by higher Young's (2.4 times), instantaneous (1.9 times), and relaxed (1.9 times) moduli. No significant differences were observed among the regional properties in either direction. The predominantly anteroposterior macroscopic fiber orientation in the fibrous zone of condylar cartilage correlated well with the biomechanical findings. The condylar cartilage appears to be less stiff and less anisotropic under tension than the anatomically and functionally related TMJ disc. The anisotropy of the condylar cartilage, as evidenced by tensile behavior and collagen fiber orientation, suggests that the shear environment of the TMJ exposes the condylar cartilage to predominantly but not exclusively anteroposterior loading."
},
{
"pmid": "282342",
"title": "Prevalence of mandibular dysfunction in young adults.",
"abstract": "A sample of students (739) were questioned and examined for symptoms and signs associated with mandibular dysfunction. The most frequently mentioned symptoms were headache, TMJ sounds, and pain in the face or neck. No significant differences were found between men and women with symptoms other than headache. The most common dysfunctional signs were dull occlusal sounds on repeated, firm closure of the teeth, tenderness of muscles in the jaw or head, and sounds on condylar movement. Women had a higher prevalence of these signs. Subjects who were aware of bruxism (7.9%) were more likely to have tenderness of the masseter muscle and limited mouth opening. Limited mouth opening was associated with dull occlusal sounds, pain on opening the mouth, and sounds in TMJs. Headaches were associated with tenderness in muscles and joints. Subclinical signs associated with dysfunction occurred more frequently than did awareness of symptoms."
},
{
"pmid": "23871384",
"title": "Morphological and biomechanical features of the temporomandibular joint disc: an overview of recent findings.",
"abstract": "The temporomandibular joint is a type of synovial joint with unique structure and function. Between the mandibular condyle and the mandibular fossa there is a dense fibrocartilaginous oval articular disc, temporomandibular joint disc. This disc serves as a nonossified bone, thus permitting the complex movements of the joint, and plays a major role in jaw function by providing stress distribution and lubrication in the temporomandibular joint. Pathological mechanical loads are one of the principal causes of temporomandibular joint disc displacement. There is a high frequency of temporomandibular joint disc disorders and treatment options are very limited. For this reason, it is necessary to examine possible alternatives to current treatment options like physiotherapy, drugs, splints or surgical techniques. Recent discoveries in the field of structure and functions of temporomandibular joint disc have created the need for their particular systematization, all in order to create an implant that would be used to replace the damaged disc and be more similar to the natural one. There is a need to more fully meet the morphology and biomechanical properties of the temporomandibular joint disc, and using tissue engineering, make a substitute for it, as faithful as possible, in a case where the natural TMJ disc is damaged so much that the normal function of the joint can be preserved only through implanted disc. Therefore, the aim of this paper was to describe morphology and structure, as well as biomechanical properties of the TMJ disc, in light of the possible applications of this knowledge for the purposes of tissue engineering."
},
{
"pmid": "20635264",
"title": "Predicting muscle patterns for hemimandibulectomy models.",
"abstract": "Deficits in movement and bite force are common in patients following segmental resection of the mandible consequent to oral cancer or injury. We have previously developed a dynamic model to analyse the biomechanics of an ungrafted segmental jaw resection with unilateral muscle and joint loss and post-surgical scarring. Here, we describe an inverse-modelling algorithm for automatically predicting muscle activations in the model for prescribed jaw movement and bite-force production. We present the results of simulations that postulate combined muscle activation patterns that could theoretically be used by patients to overcome post-surgical deficits. Such predictions could be the basis for future muscle retraining in clinical cases."
},
{
"pmid": "11988934",
"title": "Biomechanical response of retrodiscal tissue in the temporomandibular joint under compression.",
"abstract": "PURPOSE\nThe present study was conducted to investigate the biomechanical response of bovine retrodiscal tissue of the temporomandibular joint (TMJ) in compression.\n\n\nPATIENTS AND METHODS\nUsing 10 retrodiscal tissues obtained from 10 cattle, the viscoelastic response of the retrodiscal tissue was evaluated by means of stress-strain analyses. These compressive strains were produced at a high strain rate and were kept constant during 5 minutes for stress-relaxation.\n\n\nRESULTS\nAlthough the stress-strain relationship in the retrodiscal tissue was essentially nonlinear represented by a quadratic or power function of strain, a linear model could reasonably represent its elastic property. In this case, the instantaneous and relaxed moduli were 1.54 and 0.21 MPa, respectively. The stress-relaxation curve showed a marked drop in load during the initial 10 seconds, and the stress reached a steady nonzero level. Furthermore, when using Kelvin's model, a satisfactory agreement can be obtained between the experimental and theoretical stress-relaxation curves.\n\n\nCONCLUSION\nIt is concluded that bovine retrodiscal tissue has a great capacity for energy dissipation during stress-relaxation, although it has little or no function to pull the articular disc back."
},
{
"pmid": "17985243",
"title": "Lubrication of the temporomandibular joint.",
"abstract": "Although tissue engineering of the temporomandibular joint (TMJ) structures is in its infancy, tissue engineering provides the revolutionary possibility for treatment of temporomandibular disorders (TMDs). Recently, several reviews have provided a summary of knowledge of TMJ structure and function at the biochemical, cellular, or mechanical level for tissue engineering of mandibular cartilage, bone and the TMJ disc. As the TMJ enables large relative movements, joint lubrication can be considered of great importance for an understanding of the dynamics of the TMJ. The tribological characteristics of the TMJ are essential for reconstruction and tissue engineering of the joint. The purpose of this review is to provide a summary of advances relevant to the tribological characteristics of the TMJ and to serve as a reference for future research in this field. This review consists of four parts. Part 1 is a brief review of the anatomy and function of the TMJ articular components. In Part 2, the biomechanical and biochemical factors associated with joint lubrication are described: the articular surface topology with microscopic surface roughness and the biomechanical loading during jaw movements. Part 3 includes lubrication theories and possible mechanisms for breakdown of joint lubrication. Finally, in Part 4, the requirement and possibility of tissue engineering for treatment of TMDs with degenerative changes as a future treatment regimen will be discussed in a tribological context."
},
{
"pmid": "3839795",
"title": "Quantitative calculations of temporomandibular joint reaction forces--I. The importance of the magnitude of the jaw muscle forces.",
"abstract": "The effect of measurement errors on quantitative calculation of temporomandibular joint reaction force was investigated in a two-dimensional, two-muscle model. A computer program using the model incremented the magnitude of the bite force and muscle forces and the lengths of their moment arms, and calculated the joint reaction force at each increment. Computation of the joint reaction force is most sensitive to the relative lengths of the bite force and muscle forces moment arms. Absolute values for each muscle force are not required and errors in the magnitudes of the muscle forces have only a minor effect on calculation of the total joint reaction force."
},
{
"pmid": "7657681",
"title": "Modelling of forces in the human masticatory system with optimization of the angulations of the joint loads.",
"abstract": "Numerical models of the human masticatory system were constructed using algorithms which minimized non-linear functions of the muscle forces or the joint loads. However, the predicted solutions for isometric biting were critically dependent upon the modelled angular freedom of the joint loads. The most complete mathematical minimization of any objective function occurs when the joint load angles are predicted. However, the predictions have to be sensible in relation to the actual morphology of the joints. Therefore, the models were tested in terms of the angles of joint load predicted for a dry skull, using muscle vectors reconstructed from the geometry of the skull. The minimizations of muscle force were intrinsically incapable of predicting the angles of joint load. Such models must rely on constrained angles and this produces a restricted minimization and also an indeterminacy. In contrast, the minimizations of joint load predicted angles of joint load which varied appropriately with condylar position. The condylar movement was achieved with a positioning model which adjusted the angulation of the muscle vectors as the jaw was positioned. This model also generated the optimal sagittal shape of the articular eminence. Muscle predictions from the various models were not examined in detail, but the general nature of the predicted muscle force patterns was shown to be reasonable in some of the models and unreasonable in others. The results supported the hypothesis that the temporomandibular joint develops functionally to allow an approximate minimization of the joint loads during isomeric biting. This does not necessarily imply that the neurophysiological control is actually based on a minimization of joint load."
},
{
"pmid": "28536534",
"title": "A Bio-Realistic Finite Element Model to Evaluate the Effect of Masticatory Loadings on Mouse Mandible-Related Tissues.",
"abstract": "Mice are arguably the dominant model organisms for studies investigating the effect of genetic traits on the pathways to mammalian skull and teeth development, thus being integral in exploring craniofacial and dental evolution. The aim of this study is to analyse the functional significance of masticatory loads on the mouse mandible and identify critical stress accumulations that could trigger phenotypic and/or growth alterations in mandible-related structures. To achieve this, a 3D model of mouse skulls was reconstructed based on Micro Computed Tomography measurements. Upon segmenting the main hard tissue components of the mandible such as incisors, molars and alveolar bone, boundary conditions were assigned on the basis of the masticatory muscle architecture. The model was subjected to four loading scenarios simulating different feeding ecologies according to the hard or soft type of food and chewing or gnawing biting movement. Chewing and gnawing resulted in varying loading patterns, with biting type exerting a dominant effect on the stress variations experienced by the mandible and loading intensity correlating linearly to the stress increase. The simulation provided refined insight on the mechanobiology of the mouse mandible, indicating that food consistency could influence micro evolutionary divergence patterns in mandible shape of rodents."
},
{
"pmid": "24182620",
"title": "The effect of kyphoplasty parameters on the dynamic load transfer within the lumbar spine considering the response of a bio-realistic spine segment.",
"abstract": "BACKGROUND\nWith an increasing prevalence of osteoporosis, physicians have to optimize treatment of relevant vertebral compression fractures, which have significant impact on the quality of life in the elder population. Retrospective clinical studies suggest that kyphoplasty, despite being a procedure with promising potential, may be related to an increased fracture risk of the adjacent untreated vertebrae.\n\n\nMETHODS\nA bio-realistic model of a lumbar spine is introduced to determine the morbidity of cemented augmentation. The model was verified and validated for the purpose of the study and subjected to a dynamic finite element analysis. Anisotropic bone properties and solid ligamentous tissue were considered along with α time varying loading scenario.\n\n\nFINDINGS\nThe yielded results merit high clinical interest. Bi-pedicular filling stimulated a symmetrically developing stress field, thus comparing favourably to uni-pedicular augmentation which resulted in a non-uniform loading of the spine segment. An enslavement of the load transfer was also found to both patient bone mineral density and reinforcement-nucleous pulpous superimposition.\n\n\nINTERPRETATION\nThe investigation presented refined insight into the dynamic biomechanical response of a reinforced spine segment. The increase in the calculated occurring stresses was considered as non-critical in most cases, suggesting that prevalent fractures are a symptomatic condition of osteoporosis rather than a sequel of efficiently preformed kyphoplasty."
},
{
"pmid": "20096414",
"title": "Differences in loading of the temporomandibular joint during opening and closing of the jaw.",
"abstract": "Kinematics of the human masticatory system during opening and closing of the jaw have been reported widely. Evidence has been provided that the opening and closing movement of the jaw differ from one another. However, different approaches of movement registration yield divergent expectations with regard to a difference in loading of the temporomandibular joint between these movements. Because of these diverging expectations, it was hypothesized that joint loading is equal during opening and closing. This hypothesis was tested by predicting loading of the temporomandibular joint during an unloaded opening and closing movement of the jaw by means of a three-dimensional biomechanical model of the human masticatory system. Model predictions showed that the joint reaction forces were markedly higher during opening than during closing. The predicted opening trace of the centre of the mandibular condyle was located cranially of the closing trace, with a maximum difference between the traces of 0.45 mm. The hypothesis, postulating similarity of joint loading during unloaded opening and closing of the jaw, therefore, was rejected. Sensitivity analysis showed that the reported differences were not affected in a qualitative sense by muscular activation levels, the thickness of the cartilaginous layers within the temporomandibular joint or the gross morphology of the model. Our predictions indicate that the TMJ is loaded more heavily during unloaded jaw opening than during unloaded jaw closing."
},
{
"pmid": "31150325",
"title": "Individual Differences in Women During Walking Affect Tibial Response to Load Carriage: The Importance of Individualized Musculoskeletal Finite-Element Models.",
"abstract": "Subject-specific features can contribute to the susceptibility of an individual to stress fracture. Here, we incorporated tibial morphology and material properties into a standard musculoskeletal finite-element (M/FE) model and investigated how load carriage influences joint kinetics and tibial mechanics in women. We obtained the morphology and material properties of the tibia from computed tomography images for women of three distinctly different heights, 1.51 m (short), 1.63 m (medium), and 1.75 m (tall), and developed individualized M/FE models for each. Then, we calculated joint and muscle forces, and subsequently, tibial stress/strain for each woman walking at 1.3 m/s under various load conditions (0, 11.3, or 22.7 kg). Among the subjects investigated, using individualized and standard M/FE models, the joint reaction forces (JRFs) differed by up to 4 (hip), 22 (knee), and 26% (ankle), and the 90th percentile von Mises stress by up to 30% (tall woman). Load carriage evoked distinct biomechanical responses, with a 22.7-kg load decreasing the peak hip JRF during late stance by ∼18% in the short woman, while increasing it by ∼39% in the other two women. It also increased peak knee and ankle JRFs by up to ∼48 (tall woman) and ∼36% (short woman). The same load increased the 90th percentile von Mises stress (and corresponding cumulative stress) by 31 (28), 22 (30), and 27% (32%) in the short, medium, and tall woman, respectively. Our findings highlight the critical role of individualized M/FE models to assess mechanical loading in different individuals performing the same physical activity."
}
] |
PLoS Computational Biology | 31525184 | PMC6762205 | 10.1371/journal.pcbi.1007111 | Optimizing spatial allocation of seasonal influenza vaccine under temporal constraints | Prophylactic interventions such as vaccine allocation are some of the most effective public health policy planning tools. The supply of vaccines, however, is limited, and an important challenge is to optimally allocate the vaccines to minimize epidemic impact. This resource allocation question (which we refer to as VaccIntDesign) has multiple dimensions: when, where, to whom, etc. Most of the existing literature on this topic deals with the latter (to whom), proposing policies that prioritize individuals by age and disease risk. However, since seasonal influenza spread has a typical spatial trend, and due to the temporal constraints enforced by the availability schedule, the when and where problems become equally, if not more, relevant. In this paper, we study the VaccIntDesign problem in the context of seasonal influenza spread in the United States. We develop a national-scale metapopulation model for influenza that integrates both short- and long-distance human mobility, along with realistic data on vaccine uptake. We also design GreedyAlloc, a greedy algorithm for allocating the vaccine supply at the state level under temporal constraints, and show that such a strategy improves over the current baseline of pro-rata allocation, and the improvement is more pronounced for higher vaccine efficacy and moderate flu season intensity. Further, the resulting strategy resembles a ring vaccination applied spatially across the US. | Related work
There is an abundance of literature on the modeling, analysis, and control of epidemics. We briefly mention three areas that are closely related to our paper, namely, mobility modeling, disease modeling, and designing interventions to control the spread of epidemics. We refer to [11] [12] for surveys on these topics.
Modeling social contact networks and human mobility
There is very limited data on social contact networks and human mobility, and so there has been a lot of work on developing realistic models using different kinds of datasets. Eubank et al. [4] developed a first-principles-based approach for constructing a realistic synthetic population by integrating over a dozen public and commercial datasets. Coarser models for some countries have been constructed using census and Landscan data, e.g., [13] [15]. However, none of these approaches considers long-distance travel outside an urban region. One of the earliest approaches for considering such travel was by Colizza et al. [9], who use information from airline data to construct a network-based representation of cities across the globe. However, airline flow does not account for all of spatial mobility, especially within the US. In [10], the authors construct a radiation model to predict commuter flows in the United States using data on road networks. Especially in the context of national-scale disease spread, it is essential to have a model that combines both short-range and long-range human mobility in the United States.
Disease modeling
There are a number of variants of SEIR-type models for disease spread, and their applicability depends on the specific assumptions that hold. The most commonly used models are compartmental in nature, assuming well-mixed populations within each compartment. 
A number of variants have been proposed, including stochastic models, multiple compartments to represent various subpopulations, branching processes, chain-binomial models, etc. Colizza et al. [9] use a patch model of the form we study in this paper. They study the role of long-distance travel in the spread of epidemics, and use it to explain the SARS outbreak and to forecast other outbreaks. A different approach that is more suitable for heterogeneous populations is based on a network abstraction [4]. A lot of data is needed for developing such network-based models, and such models are usually computationally more intensive.
Designing interventions
Most compartmental models that have been used for studying optimal interventions are relatively simple, and can be solved using simple black-box optimization methods. An example is the work of Medlock et al. [2], who consider the problem of designing an optimal vaccine allocation for the 2009 H1N1 outbreak. They use an age-based coupled ODE model, and observe that the optimal solution is different from the CDC recommendation at that time. Similar studies have been done for other outbreaks, e.g., [16], who observe that prioritization of high-risk individuals leads to more effective strategies. However, most studies for designing vaccination policies take into account neither the spatiotemporal spread dynamics of seasonal influenza nor the temporal constraints in the vaccine production schedule. Thus there is a need for coupling a mechanistic model of national-scale influenza spread with realistic vaccine uptake information for deriving an effective vaccine allocation algorithm. (A minimal patch-model sketch illustrating this coupling is given after this record's reference entries.) | [
"19696313",
"15141212",
"16574822",
"16461461",
"25373437",
"16079251",
"22939310",
"16079797",
"20463898",
"21415939",
"26489419",
"28187123",
"24245639",
"26450633",
"28518041"
] | [
{
"pmid": "19696313",
"title": "Optimizing influenza vaccine distribution.",
"abstract": "The criteria to assess public health policies are fundamental to policy optimization. Using a model parametrized with survey-based contact data and mortality data from influenza pandemics, we determined optimal vaccine allocation for five outcome measures: deaths, infections, years of life lost, contingent valuation, and economic costs. We find that optimal vaccination is achieved by prioritization of schoolchildren and adults aged 30 to 39 years. Schoolchildren are most responsible for transmission, and their parents serve as bridges to the rest of the population. Our results indicate that consideration of age-specific transmission dynamics is paramount to the optimal allocation of influenza vaccines. We also found that previous and new recommendations from the U.S. Centers for Disease Control and Prevention both for the novel swine-origin influenza and, particularly, for seasonal influenza, are suboptimal for all outcome measures."
},
{
"pmid": "15141212",
"title": "Modelling disease outbreaks in realistic urban social networks.",
"abstract": "Most mathematical models for the spread of disease use differential equations based on uniform mixing assumptions or ad hoc models for the contact process. Here we explore the use of dynamic bipartite graphs to model the physical contact patterns that result from movements of individuals between specific locations. The graphs are generated by large-scale individual-based urban traffic simulations built on actual census, land-use and population-mobility data. We find that the contact network among people is a strongly connected small-world-like graph with a well-defined scale for the degree distribution. However, the locations graph is scale-free, which allows highly efficient outbreak detection by placing sensors in the hubs of the locations network. Within this large-scale simulation framework, we then analyse the relative merits of several proposed mitigation strategies for smallpox spread. Our results suggest that outbreaks can be contained by a strategy of targeted vaccination combined with early detection without resorting to mass vaccination of a population."
},
{
"pmid": "16574822",
"title": "Synchrony, waves, and spatial hierarchies in the spread of influenza.",
"abstract": "Quantifying long-range dissemination of infectious diseases is a key issue in their dynamics and control. Here, we use influenza-related mortality data to analyze the between-state progression of interpandemic influenza in the United States over the past 30 years. Outbreaks show hierarchical spatial spread evidenced by higher pairwise synchrony between more populous states. Seasons with higher influenza mortality are associated with higher disease transmission and more rapid spread than are mild ones. The regional spread of infection correlates more closely with rates of movement of people to and from their workplaces (workflows) than with geographical distance. Workflows are described in turn by a gravity model, with a rapid decay of commuting up to around 100 km and a long tail of rare longer range flow. A simple epidemiological model, based on the gravity formulation, captures the observed increase of influenza spatial synchrony with transmissibility; high transmission allows influenza to spread rapidly beyond local spatial constraints."
},
{
"pmid": "16461461",
"title": "The role of the airline transportation network in the prediction and predictability of global epidemics.",
"abstract": "The systematic study of large-scale networks has unveiled the ubiquitous presence of connectivity patterns characterized by large-scale heterogeneities and unbounded statistical fluctuations. These features affect dramatically the behavior of the diffusion processes occurring on networks, determining the ensuing statistical properties of their evolution pattern and dynamics. In this article, we present a stochastic computational framework for the forecast of global epidemics that considers the complete worldwide air travel infrastructure complemented with census population data. We address two basic issues in global epidemic modeling: (i) we study the role of the large scale properties of the airline transportation network in determining the global diffusion pattern of emerging diseases; and (ii) we evaluate the reliability of forecasts and outbreak scenarios with respect to the intrinsic stochasticity of disease transmission and traffic flows. To address these issues we define a set of quantitative measures able to characterize the level of heterogeneity and predictability of the epidemic pattern. These measures may be used for the analysis of containment policies and epidemic risk assessment."
},
{
"pmid": "25373437",
"title": "Predicting commuter flows in spatial networks using a radiation model based on temporal ranges.",
"abstract": "Understanding network flows such as commuter traffic in large transportation networks is an ongoing challenge due to the complex nature of the transportation infrastructure and human mobility. Here we show a first-principles based method for traffic prediction using a cost-based generalization of the radiation model for human mobility, coupled with a cost-minimizing algorithm for efficient distribution of the mobility fluxes through the network. Using US census and highway traffic data, we show that traffic can efficiently and accurately be computed from a range-limited, network betweenness type calculation. The model based on travel time costs captures the log-normal distribution of the traffic and attains a high Pearson correlation coefficient (0.75) when compared with real traffic. Because of its principled nature, this method can inform many applications related to human mobility driven flows in spatial networks, ranging from transportation, through urban planning to mitigation of the effects of catastrophic events."
},
{
"pmid": "16079251",
"title": "Containing pandemic influenza at the source.",
"abstract": "Highly pathogenic avian influenza A (subtype H5N1) is threatening to cause a human pandemic of potentially devastating proportions. We used a stochastic influenza simulation model for rural Southeast Asia to investigate the effectiveness of targeted antiviral prophylaxis, quarantine, and pre-vaccination in containing an emerging influenza strain at the source. If the basic reproductive number (R0) was below 1.60, our simulations showed that a prepared response with targeted antivirals would have a high probability of containing the disease. In that case, an antiviral agent stockpile on the order of 100,000 to 1 million courses for treatment and prophylaxis would be sufficient. If pre-vaccination occurred, then targeted antiviral prophylaxis could be effective for containing strains with an R0 as high as 2.1. Combinations of targeted antiviral prophylaxis, pre-vaccination, and quarantine could contain strains with an R(0) as high as 2.4."
},
{
"pmid": "22939310",
"title": "Estimating influenza latency and infectious period durations using viral excretion data.",
"abstract": "Influenza infection natural history is often described as a progression through four successive stages: Susceptible-Exposed/Latent-Infectious-Removed (SEIR). The duration of each stage determines the average generation time, the time between infection of a case and infection of his/her infector. Recently, several authors have justified somewhat arbitrary choices in stage durations by how close the resulting generation time distribution was to viral excretion over time after infection. Taking this reasoning one step further, we propose that the viral excretion profile over time can be used directly to estimate the required parameters in an SEIR model. In our approach, the latency and infectious period distributions are estimated by minimizing the Kullback-Leibler divergence between the model-based generation time probability density function and the normalized average viral excretion profile. Following this approach, we estimated that the latency and infectious period last respectively 1.6 and 1.0 days on average using excretion profiles from experimental infections. Interestingly, we find that only 5% of cases are infectious for more than 2.9 days. We also discuss the consequences of these estimates for the evaluation of the efficacy of control measures such as isolation or treatment. We estimate that, under a best-case scenario where symptoms appear at the end of the latency period, index cases must be isolated or treated at most within 16h after symptoms onset to avoid 50% of secondary cases. This study provides the first estimates of latency and infectious period for influenza based directly on viral excretion data. It provides additional evidence that isolation or treatment of cases would be effective only if adopted shortly after symptoms onset, and shows that four days of isolation may be enough to avoid most transmissions."
},
{
"pmid": "16079797",
"title": "Strategies for containing an emerging influenza pandemic in Southeast Asia.",
"abstract": "Highly pathogenic H5N1 influenza A viruses are now endemic in avian populations in Southeast Asia, and human cases continue to accumulate. Although currently incapable of sustained human-to-human transmission, H5N1 represents a serious pandemic threat owing to the risk of a mutation or reassortment generating a virus with increased transmissibility. Identifying public health interventions that might be able to halt a pandemic in its earliest stages is therefore a priority. Here we use a simulation model of influenza transmission in Southeast Asia to evaluate the potential effectiveness of targeted mass prophylactic use of antiviral drugs as a containment strategy. Other interventions aimed at reducing population contact rates are also examined as reinforcements to an antiviral-based containment policy. We show that elimination of a nascent pandemic may be feasible using a combination of geographically targeted prophylaxis and social distancing measures, if the basic reproduction number of the new virus is below 1.8. We predict that a stockpile of 3 million courses of antiviral drugs should be sufficient for elimination. Policy effectiveness depends critically on how quickly clinical cases are diagnosed and the speed with which antiviral drugs can be distributed."
},
{
"pmid": "20463898",
"title": "Optimal pandemic influenza vaccine allocation strategies for the Canadian population.",
"abstract": "BACKGROUND\nThe world is currently confronting the first influenza pandemic of the 21(st) century. Influenza vaccination is an effective preventive measure, but the unique epidemiological features of swine-origin influenza A (H1N1) (pH1N1) introduce uncertainty as to the best strategy for prioritization of vaccine allocation. We sought to determine optimal prioritization of vaccine distribution among different age and risk groups within the Canadian population, to minimize influenza-attributable morbidity and mortality.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nWe developed a deterministic, age-structured compartmental model of influenza transmission, with key parameter values estimated from data collected during the initial phase of the epidemic in Ontario, Canada. We examined the effect of different vaccination strategies on attack rates, hospitalizations, intensive care unit admissions, and mortality. In all scenarios, prioritization of high-risk individuals (those with underlying chronic conditions and pregnant women), regardless of age, markedly decreased the frequency of severe outcomes. When individuals with underlying medical conditions were not prioritized and an age group-based approach was used, preferential vaccination of age groups at increased risk of severe outcomes following infection generally resulted in decreased mortality compared to targeting vaccine to age groups with higher transmission, at a cost of higher population-level attack rates. All simulations were sensitive to the timing of the epidemic peak in relation to vaccine availability, with vaccination having the greatest impact when it was implemented well in advance of the epidemic peak.\n\n\nCONCLUSIONS/SIGNIFICANCE\nOur model simulations suggest that vaccine should be allocated to high-risk groups, regardless of age, followed by age groups at increased risk of severe outcomes. Vaccination may significantly reduce influenza-attributable morbidity and mortality, but the benefits are dependent on epidemic dynamics, time for program roll-out, and vaccine uptake."
},
{
"pmid": "21415939",
"title": "Modeling the spatial spread of infectious diseases: the GLobal Epidemic and Mobility computational model.",
"abstract": "Here we present the Global Epidemic and Mobility (GLEaM) model that integrates sociodemographic and population mobility data in a spatially structured stochastic disease approach to simulate the spread of epidemics at the worldwide scale. We discuss the flexible structure of the model that is open to the inclusion of different disease structures and local intervention policies. This makes GLEaM suitable for the computational modeling and anticipation of the spatio-temporal patterns of global epidemic spreading, the understanding of historical epidemics, the assessment of the role of human mobility in shaping global epidemics, and the analysis of mitigation and containment scenarios."
},
{
"pmid": "26489419",
"title": "SIS and SIR Epidemic Models Under Virtual Dispersal.",
"abstract": "We develop a multi-group epidemic framework via virtual dispersal where the risk of infection is a function of the residence time and local environmental risk. This novel approach eliminates the need to define and measure contact rates that are used in the traditional multi-group epidemic models with heterogeneous mixing. We apply this approach to a general n-patch SIS model whose basic reproduction number [Formula: see text] is computed as a function of a patch residence-time matrix [Formula: see text]. Our analysis implies that the resulting n-patch SIS model has robust dynamics when patches are strongly connected: There is a unique globally stable endemic equilibrium when [Formula: see text], while the disease-free equilibrium is globally stable when [Formula: see text]. Our further analysis indicates that the dispersal behavior described by the residence-time matrix [Formula: see text] has profound effects on the disease dynamics at the single patch level with consequences that proper dispersal behavior along with the local environmental risk can either promote or eliminate the endemic in particular patches. Our work highlights the impact of residence-time matrix if the patches are not strongly connected. Our framework can be generalized in other endemic and disease outbreak models. As an illustration, we apply our framework to a two-patch SIR single-outbreak epidemic model where the process of disease invasion is connected to the final epidemic size relationship. We also explore the impact of disease-prevalence-driven decision using a phenomenological modeling approach in order to contrast the role of constant versus state-dependent [Formula: see text] on disease dynamics."
},
{
"pmid": "28187123",
"title": "Human mobility and the spatial transmission of influenza in the United States.",
"abstract": "Seasonal influenza epidemics offer unique opportunities to study the invasion and re-invasion waves of a pathogen in a partially immune population. Detailed patterns of spread remain elusive, however, due to lack of granular disease data. Here we model high-volume city-level medical claims data and human mobility proxies to explore the drivers of influenza spread in the US during 2002-2010. Although the speed and pathways of spread varied across seasons, seven of eight epidemics likely originated in the Southern US. Each epidemic was associated with 1-5 early long-range transmission events, half of which sparked onward transmission. Gravity model estimates indicate a sharp decay in influenza transmission with the distance between infectious and susceptible cities, consistent with spread dominated by work commutes rather than air traffic. Two early-onset seasons associated with antigenic novelty had particularly localized modes of spread, suggesting that novel strains may spread in a more localized fashion than previously anticipated."
},
{
"pmid": "24245639",
"title": "Optimal strategies of social distancing and vaccination against seasonal influenza.",
"abstract": "Optimal control strategies for controlling seasonal influenza transmission in the US are of high interest, because of the significant epidemiological and economic burden of influenza. To evaluate optimal strategies of vaccination and social distancing, we used an age-structured dynamic model of seasonal influenza. We applied optimal control theory to identify the best way of reducing morbidity and mortality at a minimal cost. In combination with the Pontryagins maximum principle, we calculated time-dependent optimal policies of vaccination and social distancing to minimize the epidemiological and economic burden associated with seasonal influenza. We computed optimal age-specific intervention strategies and analyze them under various costs of interventions and disease transmissibility. Our results show that combined strategies have a stronger impact on the reduction of the final epidemic size. Our results also suggest that the optimal vaccination can be achieved by allocating most vaccines to preschool-age children (age under five) followed by young adults (age 20-39) and school age children (age 6-19). We find that the optimal vaccination rates for all age groups are highest at the beginning of the outbreak, requiring intense effort at the early phase of an epidemic. On the other hand, optimal social distancing of clinical cases tends to last the entire duration of an outbreak, and its intensity is relatively equal for all age groups. Furthermore, with higher transmissibility of the influenza virus (i.e. higher R0), the optimal control strategy needs to include more efforts to increase vaccination rates rather than efforts to encourage social distancing. Taken together, public health agencies need to consider both the transmissibility of the virus and ways to encourage early vaccination as well as voluntary social distancing of symptomatic cases in order to determine optimal intervention strategies against seasonal influenza."
},
{
"pmid": "26450633",
"title": "Assessing the Capacity of the US Health Care System to Use Additional Mechanical Ventilators During a Large-Scale Public Health Emergency.",
"abstract": "OBJECTIVE\nA large-scale public health emergency, such as a severe influenza pandemic, can generate large numbers of critically ill patients in a short time. We modeled the number of mechanical ventilators that could be used in addition to the number of hospital-based ventilators currently in use.\n\n\nMETHODS\nWe identified key components of the health care system needed to deliver ventilation therapy, quantified the maximum number of additional ventilators that each key component could support at various capacity levels (ie, conventional, contingency, and crisis), and determined the constraining key component at each capacity level.\n\n\nRESULTS\nOur study results showed that US hospitals could absorb between 26,200 and 56,300 additional ventilators at the peak of a national influenza pandemic outbreak with robust pre-pandemic planning.\n\n\nCONCLUSIONS\nThe current US health care system may have limited capacity to use additional mechanical ventilators during a large-scale public health emergency. Emergency planners need to understand their health care systems' capability to absorb additional resources and expand care. This methodology could be adapted by emergency planners to determine stockpiling goals for critical resources or to identify alternatives to manage overwhelming critical care need."
},
{
"pmid": "28518041",
"title": "Stockpiling Ventilators for Influenza Pandemics.",
"abstract": "In preparing for influenza pandemics, public health agencies stockpile critical medical resources. Determining appropriate quantities and locations for such resources can be challenging, given the considerable uncertainty in the timing and severity of future pandemics. We introduce a method for optimizing stockpiles of mechanical ventilators, which are critical for treating hospitalized influenza patients in respiratory failure. As a case study, we consider the US state of Texas during mild, moderate, and severe pandemics. Optimal allocations prioritize local over central storage, even though the latter can be deployed adaptively, on the basis of real-time needs. This prioritization stems from high geographic correlations and the slightly lower treatment success assumed for centrally stockpiled ventilators. We developed our model and analysis in collaboration with academic researchers and a state public health agency and incorporated it into a Web-based decision-support tool for pandemic preparedness and response."
}
] |
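To make the compartmental patch-model formulation referenced in the related-work text of the preceding record more concrete, the following is a minimal sketch of a multi-patch SEIR model with mobility coupling. It is an illustrative toy, not the record's actual metapopulation model or its GreedyAlloc procedure; the patch sizes, mobility matrix, and rate constants are hypothetical placeholders.

# Minimal multi-patch SEIR sketch (illustrative only; all parameters are hypothetical).
# M is a row-stochastic mobility matrix: M[i, j] is the fraction of patch i's
# contacts that take place in patch j.
import numpy as np

def simulate(S, E, I, R, M, beta=0.5, sigma=1 / 1.6, gamma=1 / 1.0, days=150, dt=1.0):
    """Forward-Euler integration of a coupled SEIR metapopulation model."""
    S, E, I, R = (np.asarray(x, dtype=float) for x in (S, E, I, R))
    N = S + E + I + R
    history = []
    for _ in range(int(days / dt)):
        prevalence = M @ (I / N)          # prevalence experienced by residents of each patch
        new_exposed = beta * S * prevalence * dt
        new_infectious = sigma * E * dt
        new_recovered = gamma * I * dt
        S -= new_exposed
        E += new_exposed - new_infectious
        I += new_infectious - new_recovered
        R += new_recovered
        history.append(I.copy())
    return np.array(history)

if __name__ == "__main__":
    # Two hypothetical patches, weakly coupled by commuting.
    M = np.array([[0.95, 0.05],
                  [0.10, 0.90]])
    infectious = simulate(S=[990_000, 99_000], E=[0, 0], I=[10, 1], R=[0, 0], M=M)
    print("peak infectious per patch:", infectious.max(axis=0))

A greedy allocation heuristic in the spirit of the record's description would, at each vaccine release point, assign the next batch of doses to whichever patch yields the largest simulated reduction in cumulative infections; the actual GreedyAlloc algorithm and its temporal constraints are defined in the paper itself.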
Frontiers in Neuroinformatics | 31607882 | PMC6769110 | 10.3389/fninf.2019.00065 | A Performant Web-Based Visualization, Assessment, and Collaboration Tool for Multidimensional Biosignals | Biosignal-based research is often multidisciplinary and benefits greatly from multi-site collaboration. This requires appropriate tooling that supports collaboration, is easy to use, and is accessible. However, current software tools do not provide the necessary functionality, usability, and ubiquitous availability. The latter is particularly crucial in environments such as hospitals, which often restrict users' permissions to install software. This paper introduces a new web-based application for interactive biosignal visualization and assessment. A focus has been placed on performance to allow for handling files of any size. The proposed solution can load local and remote files. It parses data locally on the client and harmonizes channel labels. The data can then be scored, annotated, pseudonymized, and uploaded to a clinical data management system for further analysis. The data and all actions can be interactively shared with a second party. This lowers the barrier to quickly examine data visually, collaborate, and make informed decisions. | 3. Related Work
Most commercial PSG devices store recordings in proprietary formats that can be visualized with likewise proprietary software provided by the manufacturers, e.g., Domino (Somnomedics), Noxturnal (ResMed), or Sleepware G3 (Philips). They all support assessment (c), are performant (b), and can import and export EDF (g), but they need to be installed (a) and do not offer collaboration functionality (d).
With regard to biosignal viewers, several open-source tools to visualize EDF files are listed on the EDF website. Among them are Sleep (Combrisson et al., 2017) and SigViewer for Windows, macOS, and Linux, as well as Polyman for Windows (Kemp and Roessen, 2007).
Furthermore, an increasing number of online EDF viewers are available, including commercial services for sleep scoring training such as the AASM Sleep ISR. However, they are all interfaces to online data repositories, require special server-side applications to parse EDF into a custom format, or cannot be used with one's own data.
In 2015, our proof-of-concept version of a purely web-browser-based EDF parser and visualizer was published as part of a biosignal research infrastructure (Beier et al., 2017). It supports requirements (a) and (e)–(g), but is relatively slow and offers no further features. In 2017, Bilal Zonjy released a web-browser-based EDF viewer that also supports local and remote EDF files, supporting requirements (a), (f), (g), and (h). Justus Schwabedal released a similar application in 2018 that additionally offers automatic sleep stage scoring (c). None of those solutions allows for remote collaboration. On the other hand, several solutions for web-based collaboration have been proposed, such as CBRAIN (Sherif et al., 2014) for neuroimaging research, P2Care (Maglogiannis et al., 2006; Andriopoulou et al., 2015) for general real-time teleconsultation, or BUCOMAX (Puel et al., 2014) and HERMES (Andrikos et al., 2019) for radiologists. However, these solutions do not support biosignal recordings. (A brief EDF header-parsing sketch is given after this record's reference entries.) | [
"29993993",
"28416048",
"28983246",
"29844990",
"12948806",
"1374708",
"20530646",
"16705995",
"17426351",
"24346125",
"24904400",
"29860441"
] | [
{
"pmid": "29993993",
"title": "An Enhanced Device-Transparent Real-Time Teleconsultation Environment for Radiologists.",
"abstract": "This paper describes a novel web-based platform promoting real-time advanced teleconsultation services on medical imaging. Principles of heterogeneous workflow management systems and state-of-the-art technologies such as the microservices architectural pattern, peer-to-peer networking, and the single-page application concept are combined to build a scalable and extensible platform to aid collaboration among geographically distributed healthcare professionals. The real-time communication capabilities are based on the webRTC protocol to enable direct communication among clients. This paper discusses the conceptual and technical details of the system, emphasizing on its innovative elements."
},
{
"pmid": "28983246",
"title": "Sleep: An Open-Source Python Software for Visualization, Analysis, and Staging of Sleep Data.",
"abstract": "We introduce Sleep, a new Python open-source graphical user interface (GUI) dedicated to visualization, scoring and analyses of sleep data. Among its most prominent features are: (1) Dynamic display of polysomnographic data, spectrogram, hypnogram and topographic maps with several customizable parameters, (2) Implementation of several automatic detection of sleep features such as spindles, K-complexes, slow waves, and rapid eye movements (REM), (3) Implementation of practical signal processing tools such as re-referencing or filtering, and (4) Display of main descriptive statistics including publication-ready tables and figures. The software package supports loading and reading raw EEG data from standard file formats such as European Data Format, in addition to a range of commercial data formats. Most importantly, Sleep is built on top of the VisPy library, which provides GPU-based fast and high-level visualization. As a result, it is capable of efficiently handling and displaying large sleep datasets. Sleep is freely available (http://visbrain.org/sleep) and comes with sample datasets and an extensive documentation. Novel functionalities will continue to be added and open-science community efforts are expected to enhance the capacities of this module."
},
{
"pmid": "29844990",
"title": "A survey on sleep assessment methods.",
"abstract": "PURPOSE\nA literature review is presented that aims to summarize and compare current methods to evaluate sleep.\n\n\nMETHODS\nCurrent sleep assessment methods have been classified according to different criteria; e.g., objective (polysomnography, actigraphy…) vs. subjective (sleep questionnaires, diaries…), contact vs. contactless devices, and need for medical assistance vs. self-assessment. A comparison of validation studies is carried out for each method, identifying their sensitivity and specificity reported in the literature. Finally, the state of the market has also been reviewed with respect to customers' opinions about current sleep apps.\n\n\nRESULTS\nA taxonomy that classifies the sleep detection methods. A description of each method that includes the tendencies of their underlying technologies analyzed in accordance with the literature. A comparison in terms of precision of existing validation studies and reports.\n\n\nDISCUSSION\nIn order of accuracy, sleep detection methods may be arranged as follows: Questionnaire < Sleep diary < Contactless devices < Contact devices < PolysomnographyA literature review suggests that current subjective methods present a sensitivity between 73% and 97.7%, while their specificity ranges in the interval 50%-96%. Objective methods such as actigraphy present a sensibility higher than 90%. However, their specificity is low compared to their sensitivity, being one of the limitations of such technology. Moreover, there are other factors, such as the patient's perception of her or his sleep, that can be provided only by subjective methods. Therefore, sleep detection methods should be combined to produce a synergy between objective and subjective methods. The review of the market indicates the most valued sleep apps, but it also identifies problems and gaps, e.g., many hardware devices have not been validated and (especially software apps) should be studied before their clinical use."
},
{
"pmid": "12948806",
"title": "European data format 'plus' (EDF+), an EDF alike standard format for the exchange of physiological data.",
"abstract": "The European data format (EDF) is a widely accepted standard for exchange of electroencephalogram and polysomnogram data between different equipment and labs. But it hardly accommodates other investigations. EDF+ is a more flexible but still simple format which is compatible to EDF except that an EDF+ file may contain interrupted recordings. Also, EDF+ supports time-stamped annotations for the storage of events such as text annotations, stimuli, averaged signals, electrocardiogram parameters, apnoeas and so on. When compared to EDF, EDF+ can not only store annotations but also electromyography, evoked potentials, electroneurography, electrocardiography and many more types of investigations. Further improvements over EDF include the use of standard electrode names. EDF+ is so much like EDF that existing EDF viewers still display the signals in EDF+ files. Software development is limited mainly to implementing the annotations. EDF+ offers a format for a wide range of neurophysiological investigations which can become a standard within a few years."
},
{
"pmid": "1374708",
"title": "A simple format for exchange of digitized polygraphic recordings.",
"abstract": "A simple digital format supporting the technical aspects of exchange and storage of polygraphic signals has been specified. Implementation of the format is simple and independent of hard- or software environments. It allows for any local montages, transducers, prefiltering, sampling frequencies, etc. At present, 7 laboratories in various countries have used the format for exchanging sleep-wake recordings. These exchanges have made it possible to create a common database of sleep records, to compare the analysis algorithms local to the various laboratories to each other by applying these algorithms to identical signals, and to set up a computer-aided interlaboratory evaluation of manual and automatic analysis methods."
},
{
"pmid": "20530646",
"title": "Interprofessional teamwork in medical rehabilitation: a comparison of multidisciplinary and interdisciplinary team approach.",
"abstract": "OBJECTIVE\nTo compare multi- and interdisciplinary team approaches concerning team process (teamwork) and team effectiveness (team performance and staff satisfaction) in German medical rehabilitation clinics.\n\n\nDESIGN\nA cross-sectional study with a descriptive-explorative design.\n\n\nSETTING\nEighteen medical rehabilitation clinics divided into two groups (somatic and psychosomatic indication fields).\n\n\nSUBJECTS\nThe 18 head physicians or psychotherapists in the clinics and their complete rehabilitation teams (n = 824).\n\n\nMAIN MEASURES\nAn interview guide was designed to determine the team approach in a telephone interview. A staff questionnaire for team members measured teamwork and team effectiveness with psychometrically validated questionnaires and self-administered items.\n\n\nRESULTS\nAll 18 head physicians took part in the telephone interview. The response rate of the employee attitude survey averaged 46% (n = 378). Eight teams were categorized as multidisciplinary and seven teams as interdisciplinary. In three cases the results were ambiguous. These teams were not considered in the further study. As expected, the interdisciplinary team approach showed significantly better results for nearly all aspects of teamwork and team effectiveness in comparison with the multidisciplinary team approach. The differences between multi- and interdisciplinary approach concerning teamwork and team effectiveness were higher in the somatic (8 teams, n = 183) than in the psychosomatic indication fields (7 teams, n = 195).\n\n\nCONCLUSIONS\nTeamwork and team effectiveness are higher in teams working with the interdisciplinary team approach. Therefore the interdisciplinary approach can be recommended, particularly for clinics in the somatic indication field. Team development can help to move from the multidisciplinary to the interdisciplinary approach."
},
{
"pmid": "16705995",
"title": "Enabling collaborative medical diagnosis over the Internet via peer-to-peer distribution of electronic health records.",
"abstract": "Recent developments in networking and computing technologies and the expansion of the electronic health record system have enabled the possibility of online collaboration between geographically distributed medical personnel. In this context, the paper presents a Web-based application, which implements a collaborative working environment for physicians by enabling the peer-to-peer exchange of electronic health records. The paper treats technological issues such as Video, Audio and Message Communication, Workspace Management, Distributed Medical Data Management and exchange, while it emphasizes on the Security issues arisen, due to the sensitive and private nature of the medical information. In the paper, we present initial results from the system in practice and measurements regarding transmission times and bandwidth requirements. A wavelet based image compression scheme is also introduced for reducing network delays. A number of physicians were asked to use the platform for testing purposes and for measuring user acceptance. The system was considered by them to be very useful, as they found that the platform simulated very well the personal contact between them and their colleagues during medical meetings."
},
{
"pmid": "17426351",
"title": "The Extensible Neuroimaging Archive Toolkit: an informatics platform for managing, exploring, and sharing neuroimaging data.",
"abstract": "The Extensible Neuroimaging Archive Toolkit (XNAT) is a software platform designed to facilitate common management and productivity tasks for neuroimaging and associated data. In particular, XNAT enables qualitycontrol procedures and provides secure access to and storage of data. XNAT follows a threetiered architecture that includes a data archive, user interface, and middleware engine. Data can be entered into the archive as XML or through data entry forms. Newly added data are stored in a virtual quarantine until an authorized user has validated it. XNAT subsequently maintains a history profile to track all changes made to the managed data. User access to the archive is provided by a secure web application. The web application provides a number of quality control and productivity features, including data entry forms, data-type-specific searches, searches that combine across data types, detailed reports, and listings of experimental data, upload/download tools, access to standard laboratory workflows, and administration and security tools. XNAT also includes an online image viewer that supports a number of common neuroimaging formats, including DICOM and Analyze. The viewer can be extended to support additional formats and to generate custom displays. By managing data with XNAT, laboratories are prepared to better maintain the long-term integrity of their data, to explore emergent relations across data types, and to share their data with the broader neuroimaging community."
},
{
"pmid": "24346125",
"title": "A review of signals used in sleep analysis.",
"abstract": "This article presents a review of signals used for measuring physiology and activity during sleep and techniques for extracting information from these signals. We examine both clinical needs and biomedical signal processing approaches across a range of sensor types. Issues with recording and analysing the signals are discussed, together with their applicability to various clinical disorders. Both univariate and data fusion (exploiting the diverse characteristics of the primary recorded signals) approaches are discussed, together with a comparison of automated methods for analysing sleep."
},
{
"pmid": "24904400",
"title": "CBRAIN: a web-based, distributed computing platform for collaborative neuroimaging research.",
"abstract": "The Canadian Brain Imaging Research Platform (CBRAIN) is a web-based collaborative research platform developed in response to the challenges raised by data-heavy, compute-intensive neuroimaging research. CBRAIN offers transparent access to remote data sources, distributed computing sites, and an array of processing and visualization tools within a controlled, secure environment. Its web interface is accessible through any modern browser and uses graphical interface idioms to reduce the technical expertise required to perform large-scale computational analyses. CBRAIN's flexible meta-scheduling has allowed the incorporation of a wide range of heterogeneous computing sites, currently including nine national research High Performance Computing (HPC) centers in Canada, one in Korea, one in Germany, and several local research servers. CBRAIN leverages remote computing cycles and facilitates resource-interoperability in a transparent manner for the end-user. Compared with typical grid solutions available, our architecture was designed to be easily extendable and deployed on existing remote computing sites with no tool modification, administrative intervention, or special software/hardware configuration. As October 2013, CBRAIN serves over 200 users spread across 53 cities in 17 countries. The platform is built as a generic framework that can accept data and analysis tools from any discipline. However, its current focus is primarily on neuroimaging research and studies of neurological diseases such as Autism, Parkinson's and Alzheimer's diseases, Multiple Sclerosis as well as on normal brain structure and development. This technical report presents the CBRAIN Platform, its current deployment and usage and future direction."
},
{
"pmid": "29860441",
"title": "The National Sleep Research Resource: towards a sleep data commons.",
"abstract": "Objective\nThe gold standard for diagnosing sleep disorders is polysomnography, which generates extensive data about biophysical changes occurring during sleep. We developed the National Sleep Research Resource (NSRR), a comprehensive system for sharing sleep data. The NSRR embodies elements of a data commons aimed at accelerating research to address critical questions about the impact of sleep disorders on important health outcomes.\n\n\nApproach\nWe used a metadata-guided approach, with a set of common sleep-specific terms enforcing uniform semantic interpretation of data elements across three main components: (1) annotated datasets; (2) user interfaces for accessing data; and (3) computational tools for the analysis of polysomnography recordings. We incorporated the process for managing dataset-specific data use agreements, evidence of Institutional Review Board review, and the corresponding access control in the NSRR web portal. The metadata-guided approach facilitates structural and semantic interoperability, ultimately leading to enhanced data reusability and scientific rigor.\n\n\nResults\nThe authors curated and deposited retrospective data from 10 large, NIH-funded sleep cohort studies, including several from the Trans-Omics for Precision Medicine (TOPMed) program, into the NSRR. The NSRR currently contains data on 26 808 subjects and 31 166 signal files in European Data Format. Launched in April 2014, over 3000 registered users have downloaded over 130 terabytes of data.\n\n\nConclusions\nThe NSRR offers a use case and an example for creating a full-fledged data commons. It provides a single point of access to analysis-ready physiological signals from polysomnography obtained from multiple sources, and a wide variety of clinical data to facilitate sleep research."
}
] |
International Journal of Information Security | 31632229 | PMC6777511 | 10.1007/s10207-019-00442-1 | DOMtegrity: ensuring web page integrity against malicious browser extensions | In this paper, we address an unsolved problem in the real world: how to ensure the integrity of the web content in a browser in the presence of malicious browser extensions? The problem of exposing confidential user credentials to malicious extensions has been widely understood, which has prompted major banks to deploy two-factor authentication. However, the importance of the “integrity” of the web content has received little attention. We implement two attacks on real-world online banking websites and show that ignoring the “integrity” of the web content can fundamentally defeat two-factor solutions. To address this problem, we propose a cryptographic protocol called DOMtegrity to ensure the end-to-end integrity of the DOM structure of a web page from delivering at a web server to the rendering of the page in the user’s browser. DOMtegrity is the first solution that protects DOM integrity without modifying the browser architecture or requiring extra hardware. It works by exploiting subtle yet important differences between browser extensions and in-line JavaScript code. We show how DOMtegrity prevents the earlier attacks and a whole range of man-in-the-browser attacks. We conduct extensive experiments on more than 14,000 real-world extensions to evaluate the effectiveness of DOMtegrity. | Related workThis section reviews related work on countering the threats imposed by malicious browser extensions. Existing countermeasures can be categorized into four types: modifying browsers, strengthening the vetting process, requiring another trusted extension and using external hardware.Modifying browsers Proposals in this category require their system to be integrated natively within the browser. Ter Louw et al. design systems for protecting code integrity and user data [24]. The latter is a mechanism that augments the browser to support policy-based run-time monitoring of extension behaviour. The goal is to protect sensitive user data from being accessed or modified by the extension. Dhawan et al. proposed “Sabre”, an in-browser information-flow monitor to detect malicious activities of JavaScript-based extensions during run-time [6]. Sabre associates an appropriate label to all in-memory JavaScript objects based on whether they carry sensitive information. Then, it monitors the objects carrying sensitive information for any insecure access. Wang et al. proposed an extension access control framework [27], which dynamically analyses the behaviour of extensions at run-time and controls policies to restrict their access to resources. All the proposals in this category require modification of browser code base. Unfortunately, none of these proposals have been adopted by mainstream browsers so far. In fact, some of these proposals are based on the XPCOM model for creating extensions in Firefox which is due to be deprecated in favour of WebExtensions.Strengthening the vetting process Proposals in this category involve various techniques to improve detection rates of malicious extensions during the vetting process. Jagpal et al. shared their three years of experience in fighting with malicious browser extensions in Chrome Web Store [13]. They developed a detection system called WebEval to vet the extensions in the market. 
WebEval combines both static and dynamic analysis of the source code, as well as taking into consideration the reputation of the extension's developer, and involving human experts in manual reviews whenever necessary. Their method was able to identify real-world malicious extensions with a success rate of 96.5%.Besides methods adopted by the industry, academic researchers also propose various techniques to strengthen the vetting process. Kashyap et al. proposed a framework to automate the vetting process in official extension repositories [15]. They proposed a notion of add-on security signature which provides detailed information on its data flow and API usages. Kapravelos et al. presented Hulk as a dynamic analysis system to detect malicious extensions [14]. They monitored the execution and network activities of extensions to detect their malicious intentions. They had an extensive collection of real-world extensions from Chrome Web Store, and one of their findings was discovering a malicious extension that affected 5.5 million users. Guha et al. proposed an IBEX framework for authoring, analysing, verifying and deploying secure browser extensions [11]. They suggested a high-level programming language to develop extensions. They also proposed Datalog to specify fine-grained access control to restrict the extension's access to security-specific web content. Bandhakavi et al. presented the VEX framework for highlighting potential security vulnerabilities in browser extensions [4]. They applied static information-flow analysis to catch malicious JavaScript code in the extension implementation.Requiring another trusted extension Proposals in this category require users to trust one particular extension and install it consciously. Marouf et al. proposed a run-time framework called REM that monitors the access made by extensions and provides customized permissions [17]. They developed an extension for monitoring other extensions based on REM. They monitored API calls from an extension to the browser and enforced their policies on the extension. They notified users about the latest activities of other extensions and allowed them to block future such activities. Liu et al. demonstrated the same threat in Chrome [16]. They also implemented an extension to enforce more fine-grained privileges to extensions in Chrome. They proposed that HTML elements use another attribute called "sensitivity" to differentiate DOM elements and enforce the policy that they call micro-privilege management.Using external hardware Cronto is a commercial hardware-based solution to address MITB attacks specifically for online banking. It was initially developed by a spin-off company from the University of Cambridge in 2005 and was later acquired by VASCO Data Security International for £17m in 2013. The product has been widely deployed by major banks in Chile, Switzerland and Germany to secure online banking. The Cronto solution works by using a special client device, which shares a secret key with the server. When the user performs transactions during online banking, the server sends a 2-D barcode to display on the client's web page, which encodes the encrypted transaction details such as the amount, timestamp and account number. The 2-D barcode is then read and verified by the Cronto device that has the decryption key. Upon successful verification, Cronto generates a one-time password (OTP), which the user can enter in the browser to authenticate the transaction. 
Here, the Cronto device can be either custom-built hardware with an embedded camera or a smart phone.DOMtegrity is similar to Cronto in preventing malicious modifications on the client side against MITB attacks. However, ours is a JavaScript-based software solution and does not require an external hardware token. We note that although the main design aim of Cronto is to ensure the integrity of transactions, it has a secondary function as a second factor for authentication since the device has a shared secret key with the server. DOMtegrity does not have this function, but it can be used in combination with any existing two-factor authentication scheme, e.g. the Chip Authentication Program (CAP) currently used by HSBC and Barclays.Other related work Reis et al. proposed the idea of ensuring web content integrity by JavaScript [21]. Their method was inspired by the Linux integrity check and AEGIS [3]. The authors developed a client-side JavaScript framework named TripWire, which detects unexpected modifications done by ISPs and other intermediate nodes over HTTP communication. Once the page rendering is complete, the code requested the page’s source code from the server through AJAX requests, then the internal source code is compared with the server’s one at the client side. Tripwire did not consider browser extensions in their attack model because it considers them as “trusted”. They discussed that their method was comparable to HTTPS with better performance. Patil [19] proposed another method to isolate DOM from content script. They used shadow DOM to present an encrypted view of the page data to the content script. They developed a proof-of-concept prototype in their research. | [] | [] |
Journal of Clinical Medicine | 31514466 | PMC6780110 | 10.3390/jcm8091446 | Aiding the Diagnosis of Diabetic and Hypertensive Retinopathy Using Artificial Intelligence-Based Semantic Segmentation | Automatic segmentation of retinal images is an important task in computer-assisted medical image analysis for the diagnosis of diseases such as hypertension, diabetic and hypertensive retinopathy, and arteriosclerosis. Among the diseases, diabetic retinopathy, which is the leading cause of vision detachment, can be diagnosed early through the detection of retinal vessels. The manual detection of these retinal vessels is a time-consuming process that can be automated with the help of artificial intelligence with deep learning. The detection of vessels is difficult due to intensity variation and noise from non-ideal imaging. Although there are deep learning approaches for vessel segmentation, these methods require many trainable parameters, which increase the network complexity. To address these issues, this paper presents a dual-residual-stream-based vessel segmentation network (Vess-Net), which is not as deep as conventional semantic segmentation networks, but provides good segmentation with few trainable parameters and layers. The method takes advantage of artificial intelligence for semantic segmentation to aid the diagnosis of retinopathy. To evaluate the proposed Vess-Net method, experiments were conducted with three publicly available datasets for vessel segmentation: digital retinal images for vessel extraction (DRIVE), the Child Heart Health Study in England (CHASE-DB1), and structured analysis of retina (STARE). Experimental results show that Vess-Net achieved superior performance for all datasets with sensitivity (Se), specificity (Sp), area under the curve (AUC), and accuracy (Acc) of 80.22%, 98.1%, 98.2%, and 96.55% for DRVIE; 82.06%, 98.41%, 98.0%, and 97.26% for CHASE-DB1; and 85.26%, 97.91%, 98.83%, and 96.97% for STARE dataset. | 2. Related WorksVessel segmentation can be divided into two main groups: techniques based on conventional handcrafted local features using typical image-processing schemes and techniques that use machine learning or deep-learning features.2.1. Vessel Segmentation Based on Conventional Handcrafted Local FeaturesThese methods use conventional image-processing schemes to identify vessels in fundus images. The usual schemes are color-based segmentation, adaptive thresholding, morphological schemes, and other local handcrafted feature-based methods that use image enhancement prior to the segmentation. Akram et al. used a 2D Gabor filter for retinal image enhancement and the multi-layer thresholding approach to detect blood vessels [23]. Fraz et al. used quantitative analysis of retinal vessel topology and size (QUARTZ), where vessel segmentation is carried out using a line detection scheme in combination with hysteresis morphological reconstruction based on a bi-threshold procedure [24]. Kar et al. used automatic blood vessel extraction using a matched filtering-based integrated system, which uses a curvelet transform and fuzzy c-means algorithm to separate vessels from the background [25]. Another recent example of an unsupervised approach was illustrated by Zhao et al., who used a framework with three steps. In the first step, a non-local total variation model adapted to the Retinex theory is used. In the second step, the image is divided into super-pixels to locate the object of interest. 
Finally, the segmentation task is performed using an infinite active contour model [26]. Pandey et al. used two separate approaches to segment thin and thick blood vessels. To segment thin blood vessels, local phase-preserving de-noising is used in combination with line detection, local normalization, and entropy thresholding. Thick vessels are segmented by maximum entropy thresholding [27]. Neto et al. proposed an unsupervised coarse-to-fine method for the blood vessel segmentation. Image enhancement schemes, such as Gaussian smoothing, morphological top-hat filtering, and contrast enhancement, are first used to increase the contrast and reduce the noise, and then the segmentation task is carried out via adaptive local thresholding [28]. Sundaram et al. proposed a hybrid segmentation approach that uses techniques such as morphology, multi-scale vessel enhancement, and image fusion i.e., area-based morphology and thresholding are used to highlight blood vessels [29]. Zhao et al. proposed an infinite active contour model to automatically segment retinal blood vessels, where hybrid region information of the image is used for small vasculature structure [30]. To detect vessels rapidly and accurately, Jiang et al. proposed a global thresholding-based morphological method, where capillaries are detected using centerline detection [31]. Rodrigues et al. performed vessel segmentation based on the wavelet transform and mathematical morphology, where tubular properties of blood vessels were used to detect retinal veins and arteries [32]. Sazak et al. proposed a retinal blood vessel image-enhancement method in order to increase the segmentation accuracy. They used the multi-scale bowler-hat transform based on mathematical morphology, where vessel-like structures are detected by thresholding after combining different structuring elements [33]. Chalakkal et al. proposed a retinal vessel segmentation method using the curvelet transform in combination with line operators to enhance the contrast between the background and blood vessels; they used multiple steps of conventional image processing such as adaptive histogram equalization, diffusion filtering, and color space transformations [34]. Wahid et al. used multiple levels to enhance retinal images for segmentation. In their technique, the enhanced image is subtracted from the input image iteratively, resultant images are fused to create one image, and this image is then enhanced using contrast-limited adaptive histogram equalization (CLAHE) and fuzzy histogram-based equalization (FHBE). Finally, thresholding is used to segment the enhanced image [35]. Ahamed et al. also applied CLAHE with the green channel of fundus images and used a multiscale line detection approach in combination with hysteresis thresholding; the results in this technique are refined by morphology [36]. 2.2. Vessel Segmentation Using Machine Learning or Deep Learning (CNN)Methods based on handcrafted local features have a limited performance. In addition, the performance is affected by the type of database. Therefore, machine learning or deep learning-based methods have been researched as an alternative. Zhang et al. used a supervised learning method for vessel segmentation. They used the anisotropic wavelet transform, where a 2D image is lifted to a 3D image that provides orientation and position information. Then, a random forest classifier is trained to segment retinal vessels from the background [37]. Tan et al. 
proposed a single neural network to segment optic discs, fovea, and blood vessels from retinal images. The algorithm passes the three channels of input from the point's neighborhood to the seven-layer convolutional neural network (CNN) to classify the candidate class [38]. Zhu et al. proposed a supervised method based on an extreme learning machine (ELM), which utilizes a 39-D vector with features such as morphology, divergence field, Hessian features, phase congruency, and discriminative features. These features are then classified by the ELM, which extracts the vasculature from the background [39]. Wang et al. proposed a cascade classification method for retinal vessel segmentation. They iteratively trained a Mahalanobis distance classifier with a one-pass feed-forward process to classify the vessels and background [40]. Tuba et al. proposed support vector machine (SVM)-based classification using chromaticity and coefficients of the discrete cosine transform as features. The green channel from retinal images was used as the base of these features as it has maximum vessel information [41]. Savelli et al. presented a novel approach to segment vessels that corrected the illumination. Dehazing was used as a pre-processing technique to avoid haze and shadow noise, and classification was performed by a CNN that was trained on 800,000 patches with a dimension of 27 × 27 (the center pixel was considered the decision pixel) [42]. Girard et al. proposed a fast deep learning method to segment vessels using a U-Net-inspired CNN for semantic segmentation, where the encoder and decoder provide the down-sampling and up-sampling of the image, respectively [43]. Hu et al. proposed a method for retinal vessel segmentation based on a CNN and conditional random fields (CRFs). Basically, there are two phases in this method; in the first phase, a multiscale CNN architecture with an improved cross-entropy loss function was applied to the image, then CRFs were applied to obtain the refined final result [44]. Fu et al. proposed DeepVessel, a program that uses deep learning in combination with CRFs. A multi-scale and multi-level CNN is used to learn rich hierarchical representations from images [45]. Soomro et al. proposed a deep-learning-based semantic segmentation network inspired by the famous SegNet architecture. In the first step, grayscale data were prepared by principal component analysis (PCA). In the second step, deep-learning-based semantic segmentation was applied to extract the vessels. Finally, post-processing was used to refine the segmentation [46]. Guo et al. proposed a multi-level and multi-scale approach, where short-cut connections were used for the semantic segmentation of vessels and semantic information was passed to forward layers to improve the performance [47]. Chudzik et al. proposed a two-stage method to segment retinal vessels. In the first step, the CNN is utilized to correlate the image with corresponding ground truth by random tree embedding. In the second stage, a codebook is created by passing the training patches through the CNN in the previous step; this codebook is used to arrange a generative nearest-neighbor search space for the feature vector [48]. Hajabdollahi et al. proposed a simple CNN-based segmentation with fully connected layers. These fully connected layers are quantized and the convolutional layers are pruned to increase the efficiency of the network [49]. Yan et al. proposed a three-stage CNN approach for vessel segmentation to improve the capability of vessel detection. 
The thick and thin vessels are treated by separate CNNs and the results are fused to produce a single image by a third CNN [50]. Soomro et al. proposed a semantic segmentation network based on modified U-Net, where the pooling layers are replaced by progressive convolution and deeper layers. In addition, dice loss is used as a loss function with stochastic gradient descent (SGD) [51]. Jin et al. proposed a deformable U-Net-based deep neural network. The deformable convolutions are integrated in the network and an up-sampling operator is used to increase the resolution of the image to extract more precise feature information [52]. Leopold et al. presented Pixel CNN with batch normalization (PixelBNN), which is based on U-Net and pixelCNN, where pre-processing is used to resize, reduce the dimension, and enhance the image [53]. Wang et al. used Dense U-Net as a semantic segmentation network for vessel segmentation, where random transformations are used for data augmentation in order to boost the effective patch-based training of the dense network [54]. Feng et al. proposed a cross-connected CNN (CcNet) for retinal vessel segmentation. The CcNet is trained on only the green channel of the fundus image; cross connections and fusion of multi-scale features improve the performance of the network [55]. However, in these previous works, deep networks were used, which included many trainable parameters that increased the network complexity. To address these issues, this paper presents a dual residual stream-based Vess-Net, which is not as deep as conventional semantic segmentation networks but provides good segmentation with few trainable parameters and layers. The method takes advantage of artificial intelligence in the process of semantic segmentation to aid the diagnosis of retinopathy.Table 1 shows a comparison between existing methods and Vess-Net for vessel segmentation. | [
"31108551",
"30669482",
"30889868",
"18270057",
"20515707",
"21146228",
"22525589",
"29852952",
"26265491",
"28126242",
"31323939",
"31284687",
"29843416",
"30959798",
"26848729",
"25769147",
"27289537",
"30871687",
"31029251",
"30281503",
"28060704",
"29748495",
"15084075",
"22736688",
"10875704"
] | [
{
"pmid": "31108551",
"title": "Factors Associated With Retinal Vessel Diameters in an Elderly Population: the Thessaloniki Eye Study.",
"abstract": "Purpose\nTo identify the factors associated with retinal vessel diameters in the population of the Thessaloniki Eye Study.\n\n\nMethods\nCross-sectional population-based study (age ≥ 60 years). Subjects with glaucoma, late age-related macular degeneration, and diabetic retinopathy were excluded from the analyses. Retinal vessel diameters were measured using the IVAN software, and measurements were summarized to central retinal artery equivalent (CRAE), central retinal vein equivalent (CRVE), and arteriole to venule ratio (AVR).\n\n\nResults\nThe analysis included 1614 subjects. The hypertensive group showed lower values of CRAE (P = 0.033) and AVR (P = 0.0351) compared to the normal blood pressure (BP) group. On the contrary, the group having normal BP under antihypertensive treatment did not have different values compared to the normal BP group. Diastolic BP (per mm Hg) was negatively associated with CRAE (P < 0.0001) and AVR (P < 0.0001), while systolic BP (per mm Hg) was positively associated with CRAE (P = 0.001) and AVR (P = 0.0096). Other factors significantly associated included age, sex, alcohol, smoking, cardiovascular disease history, ophthalmic medication, weight, and IOP; differences were observed in a stratified analysis based on BP medication use.\n\n\nConclusions\nOur study confirms previous reports about the association of age and BP with vessel diameters. The negative correlation between BP and CRAE seems to be guided by the effect of diastolic BP as higher systolic BP is independently associated with higher values of CRAE. The association of BP status with retinal vessel diameters is determined by diastolic BP status in our population. Multiple other factors are also independently associated with retinal vessel diameters."
},
{
"pmid": "30669482",
"title": "Homocysteine: A Potential Biomarker for Diabetic Retinopathy.",
"abstract": "Diabetic retinopathy (DR) is the most common cause of blindness in people under the age of 65. Unfortunately, the current screening process for DR restricts the population that can be evaluated and the disease goes undetected until irreversible damage occurs. Herein, we aimed to evaluate homocysteine (Hcy) as a biomarker for DR screening. Hcy levels were measured by enzyme-linked immuno sorbent assay (ELISA) and immunolocalization methods in the serum, vitreous and retina of diabetic patients as well as in serum and retina of different animal models of DM representing type 1 diabetes (streptozotocin (STZ) mice, Akita mice and STZ rats) and db/db mice which exhibit features of human type 2 diabetes. Our results revealed increased Hcy levels in the serum, vitreous and retina of diabetic patients and experimental animal models of diabetes. Moreover, optical coherence tomography (OCT) and fluorescein angiography (FA) were used to evaluate the retinal changes in mice eyes after Hcy-intravitreal injection into normal wild-type (WT) and diabetic (STZ) mice. Hcy induced changes in mice retina which were aggravated under diabetic conditions. In conclusion, our data reported Hcy as a strong candidate for use as a biomarker in DR screening. Targeting the clearance of Hcy could also be a future therapeutic target for DR."
},
{
"pmid": "30889868",
"title": "Poorer Quality of Life and Treatment Satisfaction is Associated with Diabetic Retinopathy in Patients with Type 1 Diabetes without Other Advanced Late Complications.",
"abstract": "Diabetic retinopathy (DR) may potentially cause vision loss and affect the patient's quality of life (QoL) and treatment satisfaction (TS). Using specific tools, we aimed to assess the impact of DR and clinical factors on the QoL and TS in patients with type 1 diabetes. This was a cross-sectional, two-centre study. A sample of 102 patients with DR and 140 non-DR patients were compared. The Audit of Diabetes-Dependent Quality of Life (ADDQoL-19) and Diabetes Treatment Satisfaction Questionnaire (DTSQ-s) were administered. Data analysis included bivariate and multivariable analysis. Patients with DR showed a poorer perception of present QoL (p = 0.039), work life (p = 0.037), dependence (p = 0.010), and had a lower average weighted impact (AWI) score (p = 0.045). The multivariable analysis showed that DR was associated with a lower present QoL (p = 0.040), work life (p = 0.036) and dependence (p = 0.016). With regards to TS, DR was associated with a higher perceived frequency of hypoglycaemia (p = 0.019). In patients with type 1 diabetes, the presence of DR is associated with a poorer perception of their QoL. With regard to TS, these subjects also show a higher perceived frequency of hypoglycaemia."
},
{
"pmid": "18270057",
"title": "Optic disc detection from normalized digital fundus images by means of a vessels' direction matched filter.",
"abstract": "Optic disc (OD) detection is a main step while developing automated screening systems for diabetic retinopathy. We present in this paper a method to automatically detect the position of the OD in digital retinal fundus images. The method starts by normalizing luminosity and contrast through out the image using illumination equalization and adaptive histogram equalization methods respectively. The OD detection algorithm is based on matching the expected directional pattern of the retinal blood vessels. Hence, a simple matched filter is proposed to roughly match the direction of the vessels at the OD vicinity. The retinal vessels are segmented using a simple and standard 2-D Gaussian matched filter. Consequently, a vessels direction map of the segmented retinal vessels is obtained using the same segmentation algorithm. The segmented vessels are then thinned, and filtered using local intensity, to represent finally the OD-center candidates. The difference between the proposed matched filter resized into four different sizes, and the vessels' directions at the surrounding area of each of the OD-center candidates is measured. The minimum difference provides an estimate of the OD-center coordinates. The proposed method was evaluated using a subset of the STARE project's dataset, containing 81 fundus images of both normal and diseased retinas, and initially used by literature OD detection methods. The OD-center was detected correctly in 80 out of the 81 images (98.77%). In addition, the OD-center was detected correctly in all of the 40 images (100%) using the publicly available DRIVE dataset."
},
{
"pmid": "20515707",
"title": "Modeling the tortuosity of retinal vessels: does caliber play a role?",
"abstract": "The tortuosity of retinal blood vessels is a diagnostic parameter assessed by ophthalmologists on the basis of examples and experience; no quantitative model is specified in clinical practice. All quantitative measures proposed to date for automatic image analysis purposes are functions of the curvature of the vessel skeleton. We suggest in this paper that curvature may not be the only quantity involved in modeling tortuosity, and that vessel thickness, or caliber, may also play a role. To support this statement, we devise a novel measure of tortuosity, depending on both curvature and thickness, and test it with 200 vessels selected by our clinical author from the public digital retinal images for vessel extraction database. Results are in good accordance with clinical judgment. Comparative experiments show performance similar to or better than that of four measures reported in the literature. We conclude that there is reasonable evidence supporting the investigation of tortuosity models incorporating more measurements than just skeleton curvature, and specifically vessel caliber."
},
{
"pmid": "21146228",
"title": "Retinal vascular tortuosity, blood pressure, and cardiovascular risk factors.",
"abstract": "OBJECTIVE\nTo examine the relationship of retinal vascular tortuosity to age, blood pressure, and other cardiovascular risk factors.\n\n\nDESIGN\nPopulation-based, cross-sectional study.\n\n\nPARTICIPANTS\nA total of 3280 participants aged 40 to 80 years from the Singapore Malay Eye Study (78.7% response rate).\n\n\nMETHODS\nRetinal arteriolar and venular (vascular) tortuosity were quantitatively measured from fundus images using a computer-assisted program. Retinal vascular tortuosity was defined as the integral of the curvature square along the path of the vessel, normalized by the total path length. Data on blood pressure and major cardiovascular disease (CVD) risk factors were collected from all participants.\n\n\nMEAN OUTCOME MEASURES\nRetinal arteriolar and venular tortuosity.\n\n\nRESULTS\nA total of 2915 participants contributed data to this study. The mean (standard deviation) and median were 2.99 (1.40) and 2.73 for retinal arteriolar tortuosity (×10(4)), and 4.64 (2.39) and 4.19 for retinal venular tortuosity (×10(4)), respectively. Retinal venules were significantly more tortuous than retinal arterioles (P<0.001). In multivariable-adjusted linear regression models, less arteriolar tortuosity was independently associated with older age, higher blood pressure, higher body mass index (BMI), and narrower retinal arteriolar caliber (all P<0.05); greater venular tortuosity was independently associated with younger age, higher blood pressure, lower high-density lipoprotein (HDL) cholesterol level, and wider retinal venular caliber (all P<0.05).\n\n\nCONCLUSIONS\nRetinal arteriolar tortuosity was associated with older age and higher levels of blood pressure and BMI, whereas venular tortuosity was also associated with lower HDL level. The quantitative assessment of retinal vascular tortuosity from retinal images may provide further information regarding effects of cardiovascular risk factors on the retinal vasculature."
},
{
"pmid": "22525589",
"title": "Blood vessel segmentation methodologies in retinal images--a survey.",
"abstract": "Retinal vessel segmentation algorithms are a fundamental component of automatic retinal disease screening systems. This work examines the blood vessel segmentation methodologies in two dimensional retinal images acquired from a fundus camera and a survey of techniques is presented. The aim of this paper is to review, analyze and categorize the retinal vessel extraction algorithms, techniques and methodologies, giving a brief description, highlighting the key points and the performance measures. We intend to give the reader a framework for the existing research; to introduce the range of retinal vessel segmentation algorithms; to discuss the current trends and future directions and summarize the open problems. The performance of algorithms is compared and analyzed on two publicly available databases (DRIVE and STARE) of retinal images using a number of measures which include accuracy, true positive rate, false positive rate, sensitivity, specificity and area under receiver operating characteristic (ROC) curve."
},
{
"pmid": "29852952",
"title": "Deep learning for healthcare applications based on physiological signals: A review.",
"abstract": "BACKGROUND AND OBJECTIVE\nWe have cast the net into the ocean of knowledge to retrieve the latest scientific research on deep learning methods for physiological signals. We found 53 research papers on this topic, published from 01.01.2008 to 31.12.2017.\n\n\nMETHODS\nAn initial bibliometric analysis shows that the reviewed papers focused on Electromyogram(EMG), Electroencephalogram(EEG), Electrocardiogram(ECG), and Electrooculogram(EOG). These four categories were used to structure the subsequent content review.\n\n\nRESULTS\nDuring the content review, we understood that deep learning performs better for big and varied datasets than classic analysis and machine classification methods. Deep learning algorithms try to develop the model by using all the available input.\n\n\nCONCLUSIONS\nThis review paper depicts the application of various deep learning algorithms used till recently, but in future it will be used for more healthcare areas to improve the quality of diagnosis."
},
{
"pmid": "26265491",
"title": "Thirty years of artificial intelligence in medicine (AIME) conferences: A review of research themes.",
"abstract": "BACKGROUND\nOver the past 30 years, the international conference on Artificial Intelligence in MEdicine (AIME) has been organized at different venues across Europe every 2 years, establishing a forum for scientific exchange and creating an active research community. The Artificial Intelligence in Medicine journal has published theme issues with extended versions of selected AIME papers since 1998.\n\n\nOBJECTIVES\nTo review the history of AIME conferences, investigate its impact on the wider research field, and identify challenges for its future.\n\n\nMETHODS\nWe analyzed a total of 122 session titles to create a taxonomy of research themes and topics. We classified all 734 AIME conference papers published between 1985 and 2013 with this taxonomy. We also analyzed the citations to these conference papers and to 55 special issue papers.\n\n\nRESULTS\nWe identified 30 research topics across 12 themes. AIME was dominated by knowledge engineering research in its first decade, while machine learning and data mining prevailed thereafter. Together these two themes have contributed about 51% of all papers. There have been eight AIME papers that were cited at least 10 times per year since their publication.\n\n\nCONCLUSIONS\nThere has been a major shift from knowledge-based to data-driven methods while the interest for other research themes such as uncertainty management, image and signal processing, and natural language processing has been stable since the early 1990s. AIME papers relating to guidelines and protocols are among the most highly cited."
},
{
"pmid": "28126242",
"title": "Artificial intelligence in medicine.",
"abstract": "Artificial Intelligence (AI) is a general term that implies the use of a computer to model intelligent behavior with minimal human intervention. AI is generally accepted as having started with the invention of robots. The term derives from the Czech word robota, meaning biosynthetic machines used as forced labor. In this field, Leonardo Da Vinci's lasting heritage is today's burgeoning use of robotic-assisted surgery, named after him, for complex urologic and gynecologic procedures. Da Vinci's sketchbooks of robots helped set the stage for this innovation. AI, described as the science and engineering of making intelligent machines, was officially born in 1956. The term is applicable to a broad range of items in medicine such as robotics, medical diagnosis, medical statistics, and human biology-up to and including today's \"omics\". AI in medicine, which is the focus of this review, has two main branches: virtual and physical. The virtual branch includes informatics approaches from deep learning information management to control of health management systems, including electronic health records, and active guidance of physicians in their treatment decisions. The physical branch is best represented by robots used to assist the elderly patient or the attending surgeon. Also embodied in this branch are targeted nanorobots, a unique new drug delivery system. The societal and ethical complexities of these applications require further reflection, proof of their medical utility, economic value, and development of interdisciplinary strategies for their wider application."
},
{
"pmid": "31323939",
"title": "Artificial Intelligence Prediction Model for the Cost and Mortality of Renal Replacement Therapy in Aged and Super-Aged Populations in Taiwan.",
"abstract": "BACKGROUND\nPrognosis of the aged population requiring maintenance dialysis has been reportedly poor. We aimed to develop prediction models for one-year cost and one-year mortality in aged individuals requiring dialysis to assist decision-making for deciding whether aged people should receive dialysis or not.\n\n\nMETHODS\nWe used data from the National Health Insurance Research Database (NHIRD). We identified patients first enrolled in the NHIRD from 2000-2011 for end-stage renal disease (ESRD) who underwent regular dialysis. A total of 48,153 Patients with ESRD aged ≥65 years with complete age and sex information were included in the ESRD cohort. The total medical cost per patient (measured in US dollars) within one year after ESRD diagnosis was our study's main outcome variable. We were also concerned with mortality as another outcome. In this study, we compared the performance of the random forest prediction model and of the artificial neural network prediction model for predicting patient cost and mortality.\n\n\nRESULTS\nIn the cost regression model, the random forest model outperforms the artificial neural network according to the mean squared error and mean absolute error. In the mortality classification model, the receiver operating characteristic (ROC) curves of both models were significantly better than the null hypothesis area of 0.5, and random forest model outperformed the artificial neural network. Random forest model outperforms the artificial neural network models achieved similar performance in the test set across all data.\n\n\nCONCLUSIONS\nApplying artificial intelligence modeling could help to provide reliable information about one-year outcomes following dialysis in the aged and super-aged populations; those with cancer, alcohol-related disease, stroke, chronic obstructive pulmonary disease (COPD), previous hip fracture, osteoporosis, dementia, and previous respiratory failure had higher medical costs and a high mortality rate."
},
{
"pmid": "31284687",
"title": "Artificial Intelligence-Based Classification of Multiple Gastrointestinal Diseases Using Endoscopy Videos for Clinical Diagnosis.",
"abstract": "Various techniques using artificial intelligence (AI) have resulted in a significant contribution to field of medical image and video-based diagnoses, such as radiology, pathology, and endoscopy, including the classification of gastrointestinal (GI) diseases. Most previous studies on the classification of GI diseases use only spatial features, which demonstrate low performance in the classification of multiple GI diseases. Although there are a few previous studies using temporal features based on a three-dimensional convolutional neural network, only a specific part of the GI tract was involved with the limited number of classes. To overcome these problems, we propose a comprehensive AI-based framework for the classification of multiple GI diseases by using endoscopic videos, which can simultaneously extract both spatial and temporal features to achieve better classification performance. Two different residual networks and a long short-term memory model are integrated in a cascaded mode to extract spatial and temporal features, respectively. Experiments were conducted on a combined dataset consisting of one of the largest endoscopic videos with 52,471 frames. The results demonstrate the effectiveness of the proposed classification framework for multi-GI diseases. The experimental results of the proposed model (97.057% area under the curve) demonstrate superior performance over the state-of-the-art methods and indicate its potential for clinical applications."
},
{
"pmid": "29843416",
"title": "Identifying Degenerative Brain Disease Using Rough Set Classifier Based on Wavelet Packet Method.",
"abstract": "Population aging has become a worldwide phenomenon, which causes many serious problems. The medical issues related to degenerative brain disease have gradually become a concern. Magnetic Resonance Imaging is one of the most advanced methods for medical imaging and is especially suitable for brain scans. From the literature, although the automatic segmentation method is less laborious and time-consuming, it is restricted in several specific types of images. In addition, hybrid techniques segmentation improves the shortcomings of the single segmentation method. Therefore, this study proposed a hybrid segmentation combined with rough set classifier and wavelet packet method to identify degenerative brain disease. The proposed method is a three-stage image process method to enhance accuracy of brain disease classification. In the first stage, this study used the proposed hybrid segmentation algorithms to segment the brain ROI (region of interest). In the second stage, wavelet packet was used to conduct the image decomposition and calculate the feature values. In the final stage, the rough set classifier was utilized to identify the degenerative brain disease. In verification and comparison, two experiments were employed to verify the effectiveness of the proposed method and compare with the TV-seg (total variation segmentation) algorithm, Discrete Cosine Transform, and the listing classifiers. Overall, the results indicated that the proposed method outperforms the listing methods."
},
{
"pmid": "30959798",
"title": "Effective Diagnosis and Treatment through Content-Based Medical Image Retrieval (CBMIR) by Using Artificial Intelligence.",
"abstract": "Medical-image-based diagnosis is a tedious task' and small lesions in various medical images can be overlooked by medical experts due to the limited attention span of the human visual system, which can adversely affect medical treatment. However, this problem can be resolved by exploring similar cases in the previous medical database through an efficient content-based medical image retrieval (CBMIR) system. In the past few years, heterogeneous medical imaging databases have been growing rapidly with the advent of different types of medical imaging modalities. Recently, a medical doctor usually refers to various types of imaging modalities all together such as computed tomography (CT), magnetic resonance imaging (MRI), X-ray, and ultrasound, etc of various organs in order for the diagnosis and treatment of specific disease. Accurate classification and retrieval of multimodal medical imaging data is the key challenge for the CBMIR system. Most previous attempts use handcrafted features for medical image classification and retrieval, which show low performance for a massive collection of multimodal databases. Although there are a few previous studies on the use of deep features for classification, the number of classes is very small. To solve this problem, we propose the classification-based retrieval system of the multimodal medical images from various types of imaging modalities by using the technique of artificial intelligence, named as an enhanced residual network (ResNet). Experimental results with 12 databases including 50 classes demonstrate that the accuracy and F1.score by our method are respectively 81.51% and 82.42% which are higher than those by the previous method of CBMIR (the accuracy of 69.71% and F1.score of 69.63%)."
},
{
"pmid": "26848729",
"title": "Blood vessel extraction and optic disc removal using curvelet transform and kernel fuzzy c-means.",
"abstract": "This paper proposes an automatic blood vessel extraction method on retinal images using matched filtering in an integrated system design platform that involves curvelet transform and kernel based fuzzy c-means. Since curvelet transform represents the lines, the edges and the curvatures very well and in compact form (by less number of coefficients) compared to other multi-resolution techniques, this paper uses curvelet transform for enhancement of the retinal vasculature. Matched filtering is then used to intensify the blood vessels' response which is further employed by kernel based fuzzy c-means algorithm that extracts the vessel silhouette from the background through non-linear mapping. For pathological images, in addition to matched filtering, Laplacian of Gaussian filter is also employed to distinguish the step and the ramp like signal from that of vessel structure. To test the efficacy of the proposed method, the algorithm has also been applied to images in presence of additive white Gaussian noise where the curvelet transform has been used for image denoising. Performance is evaluated on publicly available DRIVE, STARE and DIARETDB1 databases and is compared with the large number of existing blood vessel extraction methodologies. Simulation results demonstrate that the proposed method is very much efficient in detecting the long and the thick as well as the short and the thin vessels with an average accuracy of 96.16% for the DRIVE and 97.35% for the STARE database wherein the existing methods fail to extract the tiny and the thin vessels."
},
{
"pmid": "25769147",
"title": "Automated Vessel Segmentation Using Infinite Perimeter Active Contour Model with Hybrid Region Information with Application to Retinal Images.",
"abstract": "Automated detection of blood vessel structures is becoming of crucial interest for better management of vascular disease. In this paper, we propose a new infinite active contour model that uses hybrid region information of the image to approach this problem. More specifically, an infinite perimeter regularizer, provided by using L(2) Lebesgue measure of the γ -neighborhood of boundaries, allows for better detection of small oscillatory (branching) structures than the traditional models based on the length of a feature's boundaries (i.e., H(1) Hausdorff measure). Moreover, for better general segmentation performance, the proposed model takes the advantage of using different types of region information, such as the combination of intensity information and local phase based enhancement map. The local phase based enhancement map is used for its superiority in preserving vessel edges while the given image intensity information will guarantee a correct feature's segmentation. We evaluate the performance of the proposed model by applying it to three public retinal image datasets (two datasets of color fundus photography and one fluorescein angiography dataset). The proposed model outperforms its competitors when compared with other widely used unsupervised and supervised methods. For example, the sensitivity (0.742), specificity (0.982) and accuracy (0.954) achieved on the DRIVE dataset are very close to those of the second observer's annotations."
},
{
"pmid": "27289537",
"title": "Retinal vessel segmentation in colour fundus images using Extreme Learning Machine.",
"abstract": "Attributes of the retinal vessel play important role in systemic conditions and ophthalmic diagnosis. In this paper, a supervised method based on Extreme Learning Machine (ELM) is proposed to segment retinal vessel. Firstly, a set of 39-D discriminative feature vectors, consisting of local features, morphological features, phase congruency, Hessian and divergence of vector fields, is extracted for each pixel of the fundus image. Then a matrix is constructed for pixel of the training set based on the feature vector and the manual labels, and acts as the input of the ELM classifier. The output of classifier is the binary retinal vascular segmentation. Finally, an optimization processing is implemented to remove the region less than 30 pixels which is isolated from the retinal vascilar. The experimental results testing on the public Digital Retinal Images for Vessel Extraction (DRIVE) database demonstrate that the proposed method is much faster than the other methods in segmenting the retinal vessels. Meanwhile the average accuracy, sensitivity, and specificity are 0.9607, 0.7140 and 0.9868, respectively. Moreover the proposed method exhibits high speed and robustness on a new Retinal Images for Screening (RIS) database. Therefore it has potential applications for real-time computer-aided diagnosis and disease screening."
},
{
"pmid": "30871687",
"title": "Joint segmentation and classification of retinal arteries/veins from fundus images.",
"abstract": "OBJECTIVE\nAutomatic artery/vein (A/V) segmentation from fundus images is required to track blood vessel changes occurring with many pathologies including retinopathy and cardiovascular pathologies. One of the clinical measures that quantifies vessel changes is the arterio-venous ratio (AVR) which represents the ratio between artery and vein diameters. This measure significantly depends on the accuracy of vessel segmentation and classification into arteries and veins. This paper proposes a fast, novel method for semantic A/V segmentation combining deep learning and graph propagation.\n\n\nMETHODS\nA convolutional neural network (CNN) is proposed to jointly segment and classify vessels into arteries and veins. The initial CNN labeling is propagated through a graph representation of the retinal vasculature, whose nodes are defined as the vessel branches and edges are weighted by the cost of linking pairs of branches. To efficiently propagate the labels, the graph is simplified into its minimum spanning tree.\n\n\nRESULTS\nThe method achieves an accuracy of 94.8% for vessels segmentation. The A/V classification achieves a specificity of 92.9% with a sensitivity of 93.7% on the CT-DRIVE database compared to the state-of-the-art-specificity and sensitivity, both of 91.7%.\n\n\nCONCLUSION\nThe results show that our method outperforms the leading previous works on a public dataset for A/V classification and is by far the fastest.\n\n\nSIGNIFICANCE\nThe proposed global AVR calculated on the whole fundus image using our automatic A/V segmentation method can better track vessel changes associated to diabetic retinopathy than the standard local AVR calculated only around the optic disc."
},
{
"pmid": "31029251",
"title": "BTS-DSN: Deeply supervised neural network with short connections for retinal vessel segmentation.",
"abstract": "BACKGROUND AND OBJECTIVE\nThe condition of vessel of the human eye is an important factor for the diagnosis of ophthalmological diseases. Vessel segmentation in fundus images is a challenging task due to complex vessel structure, the presence of similar structures such as microaneurysms and hemorrhages, micro-vessel with only one to several pixels wide, and requirements for finer results.\n\n\nMETHODS\nIn this paper, we present a multi-scale deeply supervised network with short connections (BTS-DSN) for vessel segmentation. We used short connections to transfer semantic information between side-output layers. Bottom-top short connections pass low level semantic information to high level for refining results in high-level side-outputs, and top-bottom short connection passes much structural information to low level for reducing noises in low-level side-outputs. In addition, we employ cross-training to show that our model is suitable for real world fundus images.\n\n\nRESULTS\nThe proposed BTS-DSN has been verified on DRIVE, STARE and CHASE_DB1 datasets, and showed competitive performance over other state-of-the-art methods. Specially, with patch level input, the network achieved 0.7891/0.8212 sensitivity, 0.9804/0.9843 specificity, 0.9806/0.9859 AUC, and 0.8249/0.8421 F1-score on DRIVE and STARE, respectively. Moreover, our model behaves better than other methods in cross-training experiments.\n\n\nCONCLUSIONS\nBTS-DSN achieves competitive performance in vessel segmentation task on three public datasets. It is suitable for vessel segmentation. The source code of our method is available at: https://github.com/guomugong/BTS-DSN."
},
{
"pmid": "30281503",
"title": "A Three-Stage Deep Learning Model for Accurate Retinal Vessel Segmentation.",
"abstract": "Automatic retinal vessel segmentation is a fundamental step in the diagnosis of eye-related diseases, in which both thick vessels and thin vessels are important features for symptom detection. All existing deep learning models attempt to segment both types of vessels simultaneously by using a unified pixel-wise loss that treats all vessel pixels with equal importance. Due to the highly imbalanced ratio between thick vessels and thin vessels (namely the majority of vessel pixels belong to thick vessels), the pixel-wise loss would be dominantly guided by thick vessels and relatively little influence comes from thin vessels, often leading to low segmentation accuracy for thin vessels. To address the imbalance problem, in this paper, we explore to segment thick vessels and thin vessels separately by proposing a three-stage deep learning model. The vessel segmentation task is divided into three stages, namely thick vessel segmentation, thin vessel segmentation, and vessel fusion. As better discriminative features could be learned for separate segmentation of thick vessels and thin vessels, this process minimizes the negative influence caused by their highly imbalanced ratio. The final vessel fusion stage refines the results by further identifying nonvessel pixels and improving the overall vessel thickness consistency. The experiments on public datasets DRIVE, STARE, and CHASE_DB1 clearly demonstrate that the proposed three-stage deep learning model outperforms the current state-of-the-art vessel segmentation methods."
},
{
"pmid": "28060704",
"title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation.",
"abstract": "We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1] . The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3] , DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet."
},
{
"pmid": "29748495",
"title": "IrisDenseNet: Robust Iris Segmentation Using Densely Connected Fully Convolutional Networks in the Images by Visible Light and Near-Infrared Light Camera Sensors.",
"abstract": "The recent advancements in computer vision have opened new horizons for deploying biometric recognition algorithms in mobile and handheld devices. Similarly, iris recognition is now much needed in unconstraint scenarios with accuracy. These environments make the acquired iris image exhibit occlusion, low resolution, blur, unusual glint, ghost effect, and off-angles. The prevailing segmentation algorithms cannot cope with these constraints. In addition, owing to the unavailability of near-infrared (NIR) light, iris recognition in visible light environment makes the iris segmentation challenging with the noise of visible light. Deep learning with convolutional neural networks (CNN) has brought a considerable breakthrough in various applications. To address the iris segmentation issues in challenging situations by visible light and near-infrared light camera sensors, this paper proposes a densely connected fully convolutional network (IrisDenseNet), which can determine the true iris boundary even with inferior-quality images by using better information gradient flow between the dense blocks. In the experiments conducted, five datasets of visible light and NIR environments were used. For visible light environment, noisy iris challenge evaluation part-II (NICE-II selected from UBIRIS.v2 database) and mobile iris challenge evaluation (MICHE-I) datasets were used. For NIR environment, the institute of automation, Chinese academy of sciences (CASIA) v4.0 interval, CASIA v4.0 distance, and IIT Delhi v1.0 iris datasets were used. Experimental results showed the optimal segmentation of the proposed IrisDenseNet and its excellent performance over existing algorithms for all five datasets."
},
{
"pmid": "15084075",
"title": "Ridge-based vessel segmentation in color images of the retina.",
"abstract": "A method is presented for automated segmentation of vessels in two-dimensional color images of the retina. This method can be used in computer analyses of retinal images, e.g., in automated screening for diabetic retinopathy. The system is based on extraction of image ridges, which coincide approximately with vessel centerlines. The ridges are used to compose primitives in the form of line elements. With the line elements an image is partitioned into patches by assigning each image pixel to the closest line element. Every line element constitutes a local coordinate frame for its corresponding patch. For every pixel, feature vectors are computed that make use of properties of the patches and the line elements. The feature vectors are classified using a kappaNN-classifier and sequential forward feature selection. The algorithm was tested on a database consisting of 40 manually labeled images. The method achieves an area under the receiver operating characteristic curve of 0.952. The method is compared with two recently published rule-based methods of Hoover et al. and Jiang et al. The results show that our method is significantly better than the two rule-based methods (p < 0.01). The accuracy of our method is 0.944 versus 0.947 for a second observer."
},
{
"pmid": "22736688",
"title": "An ensemble classification-based approach applied to retinal blood vessel segmentation.",
"abstract": "This paper presents a new supervised method for segmentation of blood vessels in retinal photographs. This method uses an ensemble system of bagged and boosted decision trees and utilizes a feature vector based on the orientation analysis of gradient vector field, morphological transformation, line strength measures, and Gabor filter responses. The feature vector encodes information to handle the healthy as well as the pathological retinal image. The method is evaluated on the publicly available DRIVE and STARE databases, frequently used for this purpose and also on a new public retinal vessel reference dataset CHASE_DB1 which is a subset of retinal images of multiethnic children from the Child Heart and Health Study in England (CHASE) dataset. The performance of the ensemble system is evaluated in detail and the incurred accuracy, speed, robustness, and simplicity make the algorithm a suitable tool for automated retinal image analysis."
},
{
"pmid": "10875704",
"title": "Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response.",
"abstract": "We describe an automated method to locate and outline blood vessels in images of the ocular fundus. Such a tool should prove useful to eye care specialists for purposes of patient screening, treatment evaluation, and clinical study. Our method differs from previously known methods in that it uses local and global vessel features cooperatively to segment the vessel network. We evaluate our method using hand-labeled ground truth segmentations of 20 images. A plot of the operating characteristic shows that our method reduces false positives by as much as 15 times over basic thresholding of a matched filter response (MFR), at up to a 75% true positive rate. For a baseline, we also compared the ground truth against a second hand-labeling, yielding a 90% true positive and a 4% false positive detection rate, on average. These numbers suggest there is still room for a 15% true positive rate improvement, with the same false positive rate, over our method. We are making all our images and hand labelings publicly available for interested researchers to use in evaluating related methods."
}
] |
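The retinal-vessel abstracts in the record above quote pixel-wise sensitivity, specificity, accuracy, and F1 figures (e.g. 0.7891/0.9804 on DRIVE). As a purely illustrative aid, not code from any of the cited papers, the following NumPy sketch shows how such metrics are conventionally computed from a binary prediction and a ground-truth mask; the function name `segmentation_metrics` and its arguments are our own.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Pixel-wise metrics for binary vessel segmentation.

    pred, truth : arrays of the same shape, interpretable as booleans
                  (True = vessel pixel, False = background).
    Returns (sensitivity, specificity, accuracy, f1).
    Assumes both classes are present in `truth`.
    """
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    tp = np.sum(pred & truth)       # vessel pixels correctly detected
    tn = np.sum(~pred & ~truth)     # background correctly rejected
    fp = np.sum(pred & ~truth)      # background labelled as vessel
    fn = np.sum(~pred & truth)      # missed vessel pixels
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return sensitivity, specificity, accuracy, f1
```

These are the quantities behind the sensitivity/specificity/F1 values reported by the segmentation papers cited above; threshold-free measures such as AUC additionally require the raw probability maps rather than binarized predictions.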
Scientific Reports | 31594961 | PMC6783538 | 10.1038/s41598-019-50234-9 | Generalizing the inverse FFT off the unit circle | This paper describes the first algorithm for computing the inverse chirp z-transform (ICZT) in O(n log n) time. This matches the computational complexity of the chirp z-transform (CZT) algorithm that was discovered 50 years ago. Despite multiple previous attempts, an efficient ICZT algorithm remained elusive until now. Because the ICZT can be viewed as a generalization of the inverse fast Fourier transform (IFFT) off the unit circle in the complex plane, it has numerous practical applications in a wide variety of disciplines. This generalization enables exponentially growing or exponentially decaying frequency components, which cannot be done with the IFFT. The ICZT algorithm was derived using the properties of structured matrices and its numerical accuracy was evaluated using automated tests. A modification of the CZT algorithm, which improves its numerical stability for a subset of the parameter space, is also described and evaluated. | Related WorkSeveral attempts to derive an efficient ICZT algorithm have been made15–18. In some cases15, a modified version of the forward CZT algorithm, in which the logarithmic spiral contour was traversed in the opposite direction, was described as the ICZT algorithm. However, this method does not really invert the CZT. It works only in some special cases, e.g., when $A=1$ and $W={e}^{-\frac{2\pi i}{n}}$. That is, in the cases when the CZT reduces to the DFT. In the general case, i.e., when $A,W\in {\mathbb{C}}\backslash \{0\}$, that method generates a transform that does not invert the CZT. This paper describes an $O(n\,\log\,n)$ algorithm that computes the ICZT. The algorithm was derived by expressing the CZT formula using structured matrix multiplication and then finding a way to efficiently invert the matrices in the underlying matrix equation. The essence of the ICZT computation reduces to inverting a specially constructed Vandermonde matrix W. This problem, in turn, reduces to inverting a symmetric Toeplitz matrix $\hat{{\boldsymbol{W}}}$ that is derived from W. The Gohberg–Semencul formula19–21 expresses the inverse of a Toeplitz matrix as the difference of two products of Toeplitz matrices. Each of the four matrices in this formula is either an upper-triangular or a lower-triangular Toeplitz matrix that is generated by either a vector u or a vector v. In the case of the ICZT, a symmetric Toeplitz matrix needs to be inverted. This leads to a simplified formula that expresses the inverse using only one generating vector that is also called u. In the ICZT case, it turned out that each element of the generating vector u can be expressed as a function of the transform parameter W. This formula led to an efficient ICZT algorithm. One building block of this algorithm is the multiplication of a Toeplitz matrix by a vector, which can be done in $O(n\,\log\,n)$, without storing the full Toeplitz matrix in memory22–26. The supplementary information for this paper gives the pseudo-code for two different algorithms — based on these references — that can compute a Toeplitz–vector product in $O(n\,\log\,n)$ time. Each of these algorithms can be used as a subroutine by the ICZT algorithm. | [
"17812072"
] | [
{
"pmid": "17812072",
"title": "Numerical transforms.",
"abstract": "Numerical computation of transforms is now widely practiced in science and industry and has been revolutionized by the development of fast transforms that make feasible computing projects that once could not be contemplated. The article discusses the significance of transforms in numerical work, defines the modern forms of several common transforms and their inverses, gives examples, and describes and gives references to methods of numerical evaluation."
}
] |
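The related-work text of the ICZT record above notes that a Toeplitz matrix can be multiplied by a vector in O(n log n) time without ever storing the full matrix. The sketch below illustrates that generic building block via circulant embedding and the FFT; it is not the pseudo-code from the paper's supplementary information, it assumes NumPy, and the name `toeplitz_matvec` is our own.

```python
import numpy as np

def toeplitz_matvec(c, r, x):
    """Multiply an n-by-n Toeplitz matrix T by a vector x in O(n log n).

    c : first column of T (length n)
    r : first row of T (length n, with r[0] == c[0])
    x : input vector (length n)

    T is embedded into a (2n-1)-by-(2n-1) circulant matrix, which the FFT
    diagonalizes, so the product never forms T explicitly.
    """
    c, r, x = (np.asarray(a) for a in (c, r, x))
    n = len(x)
    # First column of the circulant embedding: [c_0 .. c_{n-1}, r_{n-1} .. r_1]
    col = np.concatenate([c, r[:0:-1]])
    xp = np.concatenate([x, np.zeros(n - 1)])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(xp))
    return y[:n]   # complex in general; take .real for real-valued T and x
```

For the symmetric Toeplitz matrix that arises in the ICZT, the first row and first column coincide, so a single generating vector suffices, which is consistent with the simplified Gohberg–Semencul formula described in the record above.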
Biomimetics | 31394826 | PMC6784304 | 10.3390/biomimetics4030055 | What Does a Hand-Over Tell?—Individuality of Short Motion Sequences | How much information with regard to identity and further individual participant characteristics are revealed by relatively short spatio-temporal motion trajectories of a person? We study this question by selecting a set of individual participant characteristics and analysing motion captured trajectories of an exemplary class of familiar movements, namely handover of an object to another person. The experiment is performed with different participants under different, predefined conditions. A selection of participant characteristics, such as the Big Five personality traits, gender, weight, or sportiness, are assessed and we analyse the impact of the three factor groups “participant identity”, “participant characteristics”, and “experimental conditions” on the observed hand trajectories. The participants’ movements are recorded via optical marker-based hand motion capture. One participant, the giver, hands over an object to the receiver. The resulting time courses of three-dimensional positions of markers are analysed. Multidimensional scaling is used to project trajectories to points in a dimension-reduced feature space. Supervised learning is also applied. We find that “participant identity” seems to have the highest correlation with the trajectories, with factor group “experimental conditions” ranking second. On the other hand, it is not possible to find a correlation between the “participant characteristics” and the hand trajectory features. | 1.5. Related WorkOther studies have already shown that it is possible both for humans and for artificial systems to derive information about the identity or properties like the gender of people or the task they had to perform, from their movements, e.g., from their head and facial motions (human observers: Hill and Johnston [12], Girges et al. [10]), from their eye motions (artificial identification: Bednarik et al. [6]), from their gait (human observers: Cutting and Kozlowski [3], Troje et al. [8], artificial identification: Little and Boyd [5], Lu et al. [15], comparison of human and artificial identification: Troje [13]), from traditional Japanese dance sequences (artificial identification: Perera et al. [2]), or from a database with different types of motions (human observers: Loula et al. [7], database for artificial identification: Ma et al. [14]).Human movements can be used to identify the person, who performed them, and can even be used as part of an authentication system (Cunado et al. [4], BenAbdelkader et al. [16], Green and Guan [17], Bednarik et al. [6], Lin et al. [9], Neverova et al. [11]). A high correlation between individuality and the characteristics of hand-over motions in our study could show that also hand-over can be used as a kind of motion finger print.Some of the methods we use are also applied in some of the mentioned studies, like the data acquisition via motion capturing (e.g., Hill and Johnston [12], Girges et al. [10], Perera et al. [2], Ma et al. [14], Lin et al. [9]) or the classification by a Self-Organizing Map (Lin et al. [9]).For example, how knowledge about non-verbal communication can be used in robotics was presented by Hameeteman [21]. He tried to enhance the acceptance of robots by equipping them with more human-like movements. The motion sequences collected in our study could also serve as examples of natural human motions for the same purpose as a side effect. 
We are focussing on interactive motions, and, in this case, it can be very helpful to equip an artificial system with human-like motions, as this enhances the predictability of the movements and therefore the efficiency of the interaction. | [
"15709874",
"16134460",
"25687732",
"11516651",
"12678652",
"16817522",
"24524040",
"12757147",
"12243384",
"21171787"
] | [
{
"pmid": "15709874",
"title": "Recognizing people from their movement.",
"abstract": "Human observers demonstrate impressive visual sensitivity to human movement. What defines this sensitivity? If motor experience influences the visual analysis of action, then observers should be most sensitive to their own movements. If view-dependent visual experience determines visual sensitivity to human movement, then observers should be most sensitive to the movements of their friends. To test these predictions, participants viewed sagittal displays of point-light depictions of themselves, their friends, and strangers performing various actions. In actor identification and discrimination tasks, sensitivity to one's own motion was highest. Visual sensitivity to friends', but not strangers', actions was above chance. Performance was action dependent. Control studies yielded chance performance with inverted and static displays, suggesting that form and low-motion cues did not define performance. These results suggest that both motor and visual experience define visual sensitivity to human action."
},
{
"pmid": "16134460",
"title": "Person identification from biological motion: effects of structural and kinematic cues.",
"abstract": "Human observers are able to identify a person based on his or her gait. However, little is known about the underlying mechanisms and the kind of information used to accomplish such a task. In this study, participants learned to discriminate seven male walkers shown as point-light displays from frontal, half-profile, or profile view. The displays were gradually normalized with respect to size, shape, and walking frequency, and identification performance was measured. All observers quickly learned to discriminate the walkers, but there was an overall advantage in favor of the frontal view. No effect of size normalization was found, but performance deteriorated when shape or walking frequency was normalized. Presenting the walkers from novel viewpoints resulted in a further decrease in performance. However, even after applying all normalization steps and rotating the walker by 90 degrees, recognition performance was still nearly three times higher than chance level."
},
{
"pmid": "25687732",
"title": "Categorizing identity from facial motion.",
"abstract": "Advances in marker-less motion capture technology now allow the accurate replication of facial motion and deformation in computer-generated imagery (CGI). A forced-choice discrimination paradigm using such CGI facial animations showed that human observers can categorize identity solely from facial motion cues. Animations were generated from motion captures acquired during natural speech, thus eliciting both rigid (head rotations and translations) and nonrigid (expressional changes) motion. To limit interferences from individual differences in facial form, all animations shared the same appearance. Observers were required to discriminate between different videos of facial motion and between the facial motions of different people. Performance was compared to the control condition of orientation-inverted facial motion. The results show that observers are able to make accurate discriminations of identity in the absence of all cues except facial motion. A clear inversion effect in both tasks provided consistency with previous studies, supporting the configural view of human face perception. The accuracy of this motion capture technology thus allowed stimuli to be generated that closely resembled real moving faces. Future studies may wish to implement such methodology when studying human face perception."
},
{
"pmid": "11516651",
"title": "Categorizing sex and identity from the biological motion of faces.",
"abstract": "Head and facial movements can provide valuable cues to identity in addition to their primary roles in communicating speech and expression [1-8]. Here we report experiments in which we have used recent motion capture and animation techniques to animate an average head [9]. These techniques have allowed the isolation of motion from other cues and have enabled us to separate rigid translations and rotations of the head from nonrigid facial motion. In particular, we tested whether human observers can judge sex and identity on the basis of this information. Results show that people can discriminate both between individuals and between males and females from motion-based information alone. Rigid head movements appear particularly useful for categorization on the basis of identity, while nonrigid motion is more useful for categorization on the basis of sex. Accuracy for both sex and identity judgements is reduced when faces are presented upside down, and this finding shows that performance is not based on low-level motion cues alone and suggests that the information is represented in an object-based motion-encoding system specialized for upright faces. Playing animations backward also reduced performance for sex judgements and emphasized the importance of direction specificity in admitting access to stored representations of characteristic male and female movements."
},
{
"pmid": "12678652",
"title": "Decomposing biological motion: a framework for analysis and synthesis of human gait patterns.",
"abstract": "Biological motion contains information about the identity of an agent as well as about his or her actions, intentions, and emotions. The human visual system is highly sensitive to biological motion and capable of extracting socially relevant information from it. Here we investigate the question of how such information is encoded in biological motion patterns and how such information can be retrieved. A framework is developed that transforms biological motion into a representation allowing for analysis using linear methods from statistics and pattern recognition. Using gender classification as an example, simple classifiers are constructed and compared to psychophysical data from human observers. The analysis reveals that the dynamic part of the motion contains more information about gender than motion-mediated structural cues. The proposed framework can be used not only for analysis of biological motion but also to synthesize new motion patterns. A simple motion modeler is presented that can be used to visualize and exaggerate the differences in male and female walking patterns."
},
{
"pmid": "16817522",
"title": "A motion capture library for the study of identity, gender, and emotion perception from biological motion.",
"abstract": "We present the methods that were used in capturing a library of human movements for use in computer-animated displays of human movement. The library is an attempt to systematically tap into and represent the wide range of personal properties, such as identity, gender, and emotion, that are available in a person's movements. The movements from a total of 30 nonprofessional actors (15 of them female) were captured while they performed walking, knocking, lifting, and throwing actions, as well as their combination in angry, happy, neutral, and sad affective styles. From the raw motion capture data, a library of 4,080 movements was obtained, using techniques based on Character Studio (plug-ins for 3D Studio MAX, AutoDesk, Inc.), MATLAB The MathWorks, Inc.), or a combination of these two. For the knocking, lifting, and throwing actions, 10 repetitions of the simple action unit were obtained for each affect, and for the other actions, two longer movement recordings were obtained for each affect. We discuss the potential use of the library for computational and behavioral analyses of movement variability, of human character animation, and of how gender, emotion, and identity are encoded and decoded from human movement."
},
{
"pmid": "24524040",
"title": "Evaluation of factors influencing grip strength in elderly koreans.",
"abstract": "OBJECTIVES\nGrip strength has been used as a measure of function in various health-related conditions. Although grip strength is known to be affected by both physical and psychological factors, few studies have looked at those factors comprehensively in a population-based cohort regarding elderly Koreans. The aim of this study was to evaluate potential factors influencing grip strength in elderly Koreans.\n\n\nMETHODS\nWe evaluated dominant hand grip strengths in 143 men and 123 women older than 65 years who participated in a population-based cohort study, the Korean Longitudinal Study on Health and Aging (KLoSHA). Individuals who had a history of surgery for musculoskeletal disease or trauma in the upper extremity were excluded. Factors assessed for potential association with grip strength were; 1) demographics such as age and gender, 2) body constructs such as height, body mass index (BMI), and bone mineral density (BMD), 3) upper extremity functional status using disabilities of the arm, shoulder and hand (DASH) scores, and 4) mental health status using a depression scale and the short form-36 (SF36) mental health score. Multivariate analyses were performed in order to identify factors independently associated with grip strength.\n\n\nRESULTS\nGrip strengths of dominant hands in elderly Koreans were found to generally decrease with aging, and were significantly different between men and women, as expected. Multivariate analyses indicated that grip strength was independently associated with age, height and BMI in men (R(2) = 21.3%), and age and height (R(2) = 19.7%) in women. BMD, upper extremity functional status, or mental health status were not found to be associated with grip strength.\n\n\nCONCLUSIONS\nThis study demonstrates that in elderly Koreans, grip strength is mainly influenced by age and height in both men and women, and additionally by BMI in men. BMD or self-reported physical or mental health status was not found to influence grip strength in elderly Koreans. This information may be helpful in future studies using grip strength as a measure of function in elderly Koreans."
},
{
"pmid": "12757147",
"title": "Development of personality in early and middle adulthood: set like plaster or persistent change?",
"abstract": "Different theories make different predictions about how mean levels of personality traits change in adulthood. The biological view of the Five-factor theory proposes the plaster hypothesis: All personality traits stop changing by age 30. In contrast, contextualist perspectives propose that changes should be more varied and should persist throughout adulthood. This study compared these perspectives in a large (N = 132,515) sample of adults aged 21-60 who completed a Big Five personality measure on the Internet. Conscientiousness and Agreeableness increased throughout early and middle adulthood at varying rates; Neuroticism declined among women but did not change among men. The variety in patterns of change suggests that the Big Five traits are complex phenomena subject to a variety of developmental influences."
},
{
"pmid": "12243384",
"title": "Global self-esteem across the life span.",
"abstract": "This study provides a comprehensive picture of age differences in self-esteem from age 9 to 90 years using cross-sectional data collected from 326,641 individuals over the Internet. Self-esteem levels were high in childhood, dropped during adolescence, rose gradually throughout adulthood, and declined sharply in old age. This trajectory generally held across gender, socioeconomic status, ethnicity, and nationality (U.S. citizens vs. non-U.S. citizens). Overall, these findings support previous research, help clarify inconsistencies in the literature, and document new trends that require further investigation."
},
{
"pmid": "21171787",
"title": "Age differences in personality traits from 10 to 65: Big Five domains and facets in a large cross-sectional sample.",
"abstract": "Hypotheses about mean-level age differences in the Big Five personality domains, as well as 10 more specific facet traits within those domains, were tested in a very large cross-sectional sample (N = 1,267,218) of children, adolescents, and adults (ages 10-65) assessed over the World Wide Web. The results supported several conclusions. First, late childhood and adolescence were key periods. Across these years, age trends for some traits (a) were especially pronounced, (b) were in a direction different from the corresponding adult trends, or (c) first indicated the presence of gender differences. Second, there were some negative trends in psychosocial maturity from late childhood into adolescence, whereas adult trends were overwhelmingly in the direction of greater maturity and adjustment. Third, the related but distinguishable facet traits within each broad Big Five domain often showed distinct age trends, highlighting the importance of facet-level research for understanding life span age differences in personality."
}
] |
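The Biomimetics record above states that multidimensional scaling (MDS) is used to project hand-over trajectories into a dimension-reduced feature space before supervised learning. As an illustration only (the paper's preprocessing and trajectory distance measure are not reproduced here, and the helper name `classical_mds` is ours), the following NumPy sketch applies classical MDS to a precomputed matrix of pairwise distances between trajectories.

```python
import numpy as np

def classical_mds(dist, n_components=2):
    """Classical multidimensional scaling (Torgerson scaling).

    dist : (m, m) symmetric matrix of pairwise distances between items,
           e.g. between motion trajectories under some chosen metric.
    Returns an (m, n_components) array of embedded coordinates.
    """
    dist = np.asarray(dist, dtype=float)
    m = dist.shape[0]
    # Double-center the squared-distance matrix: B = -1/2 * J D^2 J
    j = np.eye(m) - np.ones((m, m)) / m
    b = -0.5 * j @ (dist ** 2) @ j
    # Eigendecomposition; keep the largest non-negative eigenvalues
    eigvals, eigvecs = np.linalg.eigh(b)
    order = np.argsort(eigvals)[::-1][:n_components]
    lam = np.clip(eigvals[order], 0.0, None)
    return eigvecs[:, order] * np.sqrt(lam)
```

A classifier can then be trained on the embedded points to test how well giver identity, participant characteristics, or experimental conditions are recoverable from the trajectory features, in the spirit of the supervised analyses described in that record.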
Frontiers in Neurorobotics | 31632262 | PMC6786305 | 10.3389/fnbot.2019.00081 | Embodied Synaptic Plasticity With Online Reinforcement Learning | The endeavor to understand the brain involves multiple collaborating research fields. Classically, synaptic plasticity rules derived by theoretical neuroscientists are evaluated in isolation on pattern classification tasks. This contrasts with the biological brain which purpose is to control a body in closed-loop. This paper contributes to bringing the fields of computational neuroscience and robotics closer together by integrating open-source software components from these two fields. The resulting framework allows to evaluate the validity of biologically-plausibe plasticity models in closed-loop robotics environments. We demonstrate this framework to evaluate Synaptic Plasticity with Online REinforcement learning (SPORE), a reward-learning rule based on synaptic sampling, on two visuomotor tasks: reaching and lane following. We show that SPORE is capable of learning to perform policies within the course of simulated hours for both tasks. Provisional parameter explorations indicate that the learning rate and the temperature driving the stochastic processes that govern synaptic learning dynamics need to be regulated for performance improvements to be retained. We conclude by discussing the recent deep reinforcement learning techniques which would be beneficial to increase the functionality of SPORE on visuomotor tasks. | 2. Related WorkThe year 2015 marked a significant breakthrough in deep reinforcement learning. Artificial neural networks of analog neurons are now capable of solving a variety of tasks ranging from playing video games (Mnih et al., 2015), to controlling multi-joints robots (Lillicrap et al., 2015; Schulman et al., 2017), and lane following (Wolf et al., 2017). Most recent methods (Lillicrap et al., 2015; Schulman et al., 2015, 2017; Mnih et al., 2016) are based on policy-gradients. Specifically, policy parameters are updated by performing ascending gradient steps with backpropagation to maximize the probability of taking rewarding actions. While functional, these methods are not based on biologically plausible processes. First, a large part of neural dynamics are ignored. Importantly, unlike SPORE, these methods do not learn online—weight updates are performed with respect to entire trajectories stored in rollout memory. Second, learning is based on backpropagation which is not biologically plausible learning mechanism, as stated in Bengio et al. (2015).Spiking network models inspired by deep reinforcement learning techniques were introduced in Bellec et al. (2018) and Tieck et al. (2018). In both papers, the spiking networks are implemented with deep learning frameworks (PyTorch and TensorFlow, respectively) and rely on automatic differentiation. Their policy-gradient approach is based on (PPO; Schulman et al., 2017). As the learning mechanism consists of backpropagating the Proximal Policy Optimization (PPO) loss (through-time in the case of Bellec et al., 2018), most biological constraints stated in Bengio et al. (2015) are still violated. 
Indeed, the computations are based on spikes (4), but the backpropagation is purely linear (1), the feedback paths require precise knowledge of the derivatives (2) and weights (3) of the corresponding feedforward paths, and the feedforward and feedback phases alternate synchronously (5) (the enumeration refers to Bengio et al., 2015).Only a small body of work focused on reinforcement learning with spiking neural networks, while addressing the previous points. Groundwork of reinforcement learning with spiking networks was presented in Florian (2007), Izhikevich (2007), and Legenstein et al. (2008). In these works, a mathematical formalization is introduced characterizing how dopamine modulated spike-timing-dependent plasticity (DA-STDP) solves the distal reward problem with eligibility traces. Specifically, since the reward is received only after a rewarding action is performed, the brain needs a form of memory to reinforce previously chosen actions. This problem is solved with the introduction eligibility traces, which assign credit to recently active synapses. This concept has been observed in the brain (Frey and Morris, 1997; Pan et al., 2005), and SPORE also relies on eligibility traces. Fewer works evaluated DA-STDP in an embodiment for reward maximization—a recent survey encompassing this topic is available in Bing et al. (2018b).The closest previous work related to this paper are Daucé (2009), Kaiser et al. (2016), and Bing et al. (2018a). In Kaiser et al. (2016), a neurorobotic lane following task is presented, where a simulated vehicle is controlled end-to-end from event-based vision to motor command. The task is solved with an hard-coded spiking network of 16 neurons implementing a simple Braitenberg vehicle. The performance is evaluated with respect to distance and orientation differences to the middle of the lane. In this paper, these performance metrics are combined into a reward signal which the spiking network maximizes with the SPORE learning rule.In Bing et al. (2018a), the authors evaluate DA-STDP (referred to as R-STDP for reward-modulated STDP) in a similar lane following environment. Their approach outperforms the hard-coded Braitenberg vehicle presented in Kaiser et al. (2016). The two motor neurons controlling the steering receive different (mirrored) reward signals whether the vehicle is on the left or on the right of the lane. This way, the reward provides the information of what motor command should be taken, similar to a supervised learning setup. Conversely, the approach presented in this paper is more generic since a global reward is distributed to all synapses and does not indicate which action the agent should take.A similar plasticity rule implementing a policy-gradient approach is derived in Daucé (2009). Also relying on eligibility traces, this reward-learning rule uses a “slow” noise term to drive the exploration. This rule is demonstrated on a target reaching task comparable to the one discussed in section 4.1.1 and achieves impressive learning times (in the order of 100s) with proper tuning of the noise term.In Nakano et al. (2015), a spiking version of the free-energy-based reinforcement learning framework proposed in Otsuka et al. (2010) is introduced. In this framework, a spiking Restricted Boltzmann Machine (RBM) is trained with a reward-modulated plasticity rule which decreases the free-energy of rewarding state-action pairs. 
The approach is evaluated on discrete-actions tasks where the observations consist of MNIST digits processed by a pre-trained feature extractor. However, some characteristics of RBM are biologically implausible and make their implementation cumbersome: symmetric synapses and clocked network activity. With our approach, network activity does not have to be manually synchronized into observation and action phases of arbitrary duration for learning to take place.In Gilra and Gerstner (2017), a supervised synaptic learning rule named Feedback-based Online Local Learning Of Weights (FOLLOW) is introduced. This rule is used to learn the inverse dynamics of a two-link arm—the model predicts control commands (torques) for a given arm trajectory. The loop is closed in Gilra and Gerstner (2018) by feeding the predicted torques as control commands. In contrast, SPORE learns from a reward signal and can solve a variety of tasks. | [
"30034334",
"20195795",
"28179882",
"17444757",
"9020359",
"29173280",
"26595651",
"17220510",
"26545099",
"29696150",
"24675787",
"23787340",
"18846203",
"25719670",
"25734662",
"15987953",
"16764506",
"26601905",
"24507189",
"27536234",
"29652587"
] | [
{
"pmid": "30034334",
"title": "A Survey of Robotics Control Based on Learning-Inspired Spiking Neural Networks.",
"abstract": "Biological intelligence processes information using impulses or spikes, which makes those living creatures able to perceive and act in the real world exceptionally well and outperform state-of-the-art robots in almost every aspect of life. To make up the deficit, emerging hardware technologies and software knowledge in the fields of neuroscience, electronics, and computer science have made it possible to design biologically realistic robots controlled by spiking neural networks (SNNs), inspired by the mechanism of brains. However, a comprehensive review on controlling robots based on SNNs is still missing. In this paper, we survey the developments of the past decade in the field of spiking neural networks for control tasks, with particular focus on the fast emerging robotics-related applications. We first highlight the primary impetuses of SNN-based robotics tasks in terms of speed, energy efficiency, and computation capabilities. We then classify those SNN-based robotic applications according to different learning rules and explicate those learning rules with their corresponding robotic applications. We also briefly present some existing platforms that offer an interaction between SNNs and robotics simulations for exploration and exploitation. Finally, we conclude our survey with a forecast of future challenges and some associated potential research topics in terms of controlling robots based on SNNs."
},
{
"pmid": "20195795",
"title": "Run-time interoperability between neuronal network simulators based on the MUSIC framework.",
"abstract": "MUSIC is a standard API allowing large scale neuron simulators to exchange data within a parallel computer during runtime. A pilot implementation of this API has been released as open source. We provide experiences from the implementation of MUSIC interfaces for two neuronal network simulators of different kinds, NEST and MOOSE. A multi-simulation of a cortico-striatal network model involving both simulators is performed, demonstrating how MUSIC can promote inter-operability between models written for different simulators and how these can be re-used to build a larger model system. Benchmarks show that the MUSIC pilot implementation provides efficient data transfer in a cluster computer with good scaling. We conclude that MUSIC fulfills the design goal that it should be simple to adapt existing simulators to use MUSIC. In addition, since the MUSIC API enforces independence of the applications, the multi-simulation could be built from pluggable component modules without adaptation of the components to each other in terms of simulation time-step or topology of connections between the modules."
},
{
"pmid": "28179882",
"title": "Connecting Artificial Brains to Robots in a Comprehensive Simulation Framework: The Neurorobotics Platform.",
"abstract": "Combined efforts in the fields of neuroscience, computer science, and biology allowed to design biologically realistic models of the brain based on spiking neural networks. For a proper validation of these models, an embodiment in a dynamic and rich sensory environment, where the model is exposed to a realistic sensory-motor task, is needed. Due to the complexity of these brain models that, at the current stage, cannot deal with real-time constraints, it is not possible to embed them into a real-world task. Rather, the embodiment has to be simulated as well. While adequate tools exist to simulate either complex neural networks or robots and their environments, there is so far no tool that allows to easily establish a communication between brain and body models. The Neurorobotics Platform is a new web-based environment that aims to fill this gap by offering scientists and technology developers a software infrastructure allowing them to connect brain models to detailed simulations of robot bodies and environments and to use the resulting neurorobotic systems for in silico experimentation. In order to simplify the workflow and reduce the level of the required programming skills, the platform provides editors for the specification of experimental sequences and conditions, environments, robots, and brain-body connectors. In addition to that, a variety of existing robots and environments are provided. This work presents the architecture of the first release of the Neurorobotics Platform developed in subproject 10 \"Neurorobotics\" of the Human Brain Project (HBP). At the current state, the Neurorobotics Platform allows researchers to design and run basic experiments in neurorobotics using simulated robots and simulated environments linked to simplified versions of brain models. We illustrate the capabilities of the platform with three example experiments: a Braitenberg task implemented on a mobile robot, a sensory-motor learning task based on a robotic controller, and a visual tracking embedding a retina model on the iCub humanoid robot. These use-cases allow to assess the applicability of the Neurorobotics Platform for robotic tasks as well as in neuroscientific experiments."
},
{
"pmid": "17444757",
"title": "Reinforcement learning through modulation of spike-timing-dependent synaptic plasticity.",
"abstract": "The persistent modification of synaptic efficacy as a function of the relative timing of pre- and postsynaptic spikes is a phenomenon known as spike-timing-dependent plasticity (STDP). Here we show that the modulation of STDP by a global reward signal leads to reinforcement learning. We first derive analytically learning rules involving reward-modulated spike-timing-dependent synaptic and intrinsic plasticity, by applying a reinforcement learning algorithm to the stochastic spike response model of spiking neurons. These rules have several features common to plasticity mechanisms experimentally found in the brain. We then demonstrate in simulations of networks of integrate-and-fire neurons the efficacy of two simple learning rules involving modulated STDP. One rule is a direct extension of the standard STDP model (modulated STDP), and the other one involves an eligibility trace stored at each synapse that keeps a decaying memory of the relationships between the recent pairs of pre- and postsynaptic spike pairs (modulated STDP with eligibility trace). This latter rule permits learning even if the reward signal is delayed. The proposed rules are able to solve the XOR problem with both rate coded and temporally coded input and to learn a target output firing-rate pattern. These learning rules are biologically plausible, may be used for training generic artificial spiking neural networks, regardless of the neural model used, and suggest the experimental investigation in animals of the existence of reward-modulated STDP."
},
{
"pmid": "9020359",
"title": "Synaptic tagging and long-term potentiation.",
"abstract": "Repeated stimulation of hippocampal neurons can induce an immediate and prolonged increase in synaptic strength that is called long-term potentiation (LTP)-the primary cellular model of memory in the mammalian brain. An early phase of LTP (lasting less than three hours) can be dissociated from late-phase LTP by using inhibitors of transcription and translation, Because protein synthesis occurs mainly in the cell body, whereas LTP is input-specific, the question arises of how the synapse specificity of late LTP is achieved without elaborate intracellular protein trafficking. We propose that LTP initiates the creation of a short-lasting protein-synthesis-independent 'synaptic tag' at the potentiated synapse which sequesters the relevant protein(s) to establish late LTP. In support of this idea, we now show that weak tetanic stimulation, which ordinarily leads only to early LTP, or repeated tetanization in the presence of protein-synthesis inhibitors, each results in protein-synthesis-dependent late LTP, provided repeated tetanization has already been applied at another input to the same population of neurons. The synaptic tag decays in less than three hours. These findings indicate that the persistence of LTP depends not only on local events during its induction, but also on the prior activity of the neuron."
},
{
"pmid": "29173280",
"title": "Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network.",
"abstract": "The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically."
},
{
"pmid": "26595651",
"title": "Mesolimbic dopamine signals the value of work.",
"abstract": "Dopamine cell firing can encode errors in reward prediction, providing a learning signal to guide future behavior. Yet dopamine is also a key modulator of motivation, invigorating current behavior. Existing theories propose that fast (phasic) dopamine fluctuations support learning, whereas much slower (tonic) dopamine changes are involved in motivation. We examined dopamine release in the nucleus accumbens across multiple time scales, using complementary microdialysis and voltammetric methods during adaptive decision-making. We found that minute-by-minute dopamine levels covaried with reward rate and motivational vigor. Second-by-second dopamine release encoded an estimate of temporally discounted future reward (a value function). Changing dopamine immediately altered willingness to work and reinforced preceding action choices by encoding temporal-difference reward prediction errors. Our results indicate that dopamine conveys a single, rapidly evolving decision variable, the available reward for investment of effort, which is employed for both learning and motivational functions."
},
{
"pmid": "17220510",
"title": "Solving the distal reward problem through linkage of STDP and dopamine signaling.",
"abstract": "In Pavlovian and instrumental conditioning, reward typically comes seconds after reward-triggering actions, creating an explanatory conundrum known as \"distal reward problem\": How does the brain know what firing patterns of what neurons are responsible for the reward if 1) the patterns are no longer there when the reward arrives and 2) all neurons and synapses are active during the waiting period to the reward? Here, we show how the conundrum is resolved by a model network of cortical spiking neurons with spike-timing-dependent plasticity (STDP) modulated by dopamine (DA). Although STDP is triggered by nearly coincident firing patterns on a millisecond timescale, slow kinetics of subsequent synaptic plasticity is sensitive to changes in the extracellular DA concentration during the critical period of a few seconds. Random firings during the waiting period to the reward do not affect STDP and hence make the network insensitive to the ongoing activity-the key feature that distinguishes our approach from previous theoretical studies, which implicitly assume that the network be quiet during the waiting period or that the patterns be preserved until the reward arrives. This study emphasizes the importance of precise firing patterns in brain dynamics and suggests how a global diffusive reinforcement signal in the form of extracellular DA can selectively influence the right synapses at the right time."
},
{
"pmid": "26545099",
"title": "Network Plasticity as Bayesian Inference.",
"abstract": "General results from statistical learning theory suggest to understand not only brain computations, but also brain plasticity as probabilistic inference. But a model for that has been missing. We propose that inherently stochastic features of synaptic plasticity and spine motility enable cortical networks of neurons to carry out probabilistic inference by sampling from a posterior distribution of network configurations. This model provides a viable alternative to existing models that propose convergence of parameters to maximum likelihood values. It explains how priors on weight distributions and connection probabilities can be merged optimally with learned experience, how cortical networks can generalize learned information so well to novel experiences, and how they can compensate continuously for unforeseen disturbances of the network. The resulting new theory of network plasticity explains from a functional perspective a number of experimental data on stochastic aspects of synaptic plasticity that previously appeared to be quite puzzling."
},
{
"pmid": "29696150",
"title": "A Dynamic Connectome Supports the Emergence of Stable Computational Function of Neural Circuits through Reward-Based Learning.",
"abstract": "Synaptic connections between neurons in the brain are dynamic because of continuously ongoing spine dynamics, axonal sprouting, and other processes. In fact, it was recently shown that the spontaneous synapse-autonomous component of spine dynamics is at least as large as the component that depends on the history of pre- and postsynaptic neural activity. These data are inconsistent with common models for network plasticity and raise the following questions: how can neural circuits maintain a stable computational function in spite of these continuously ongoing processes, and what could be functional uses of these ongoing processes? Here, we present a rigorous theoretical framework for these seemingly stochastic spine dynamics and rewiring processes in the context of reward-based learning tasks. We show that spontaneous synapse-autonomous processes, in combination with reward signals such as dopamine, can explain the capability of networks of neurons in the brain to configure themselves for specific computational tasks, and to compensate automatically for later changes in the network or task. Furthermore, we show theoretically and through computer simulations that stable computational performance is compatible with continuously ongoing synapse-autonomous changes. After reaching good computational performance it causes primarily a slow drift of network architecture and dynamics in task-irrelevant dimensions, as observed for neural activity in motor cortex and other areas. On the more abstract level of reinforcement learning the resulting model gives rise to an understanding of reward-driven network plasticity as continuous sampling of network configurations."
},
{
"pmid": "24675787",
"title": "STDP installs in Winner-Take-All circuits an online approximation to hidden Markov model learning.",
"abstract": "In order to cross a street without being run over, we need to be able to extract very fast hidden causes of dynamically changing multi-modal sensory stimuli, and to predict their future evolution. We show here that a generic cortical microcircuit motif, pyramidal cells with lateral excitation and inhibition, provides the basis for this difficult but all-important information processing capability. This capability emerges in the presence of noise automatically through effects of STDP on connections between pyramidal cells in Winner-Take-All circuits with lateral excitation. In fact, one can show that these motifs endow cortical microcircuits with functional properties of a hidden Markov model, a generic model for solving such tasks through probabilistic inference. Whereas in engineering applications this model is adapted to specific tasks through offline learning, we show here that a major portion of the functionality of hidden Markov models arises already from online applications of STDP, without any supervision or rewards. We demonstrate the emergent computing capabilities of the model through several computer simulations. The full power of hidden Markov model learning can be attained through reward-gated STDP. This is due to the fact that these mechanisms enable a rejection sampling approximation to theoretically optimal learning. We investigate the possible performance gain that can be achieved with this more accurate learning method for an artificial grammar task."
},
{
"pmid": "23787340",
"title": "Deep hierarchies in the primate visual cortex: what can we learn for computer vision?",
"abstract": "Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research."
},
{
"pmid": "18846203",
"title": "A learning theory for reward-modulated spike-timing-dependent plasticity with application to biofeedback.",
"abstract": "Reward-modulated spike-timing-dependent plasticity (STDP) has recently emerged as a candidate for a learning rule that could explain how behaviorally relevant adaptive changes in complex networks of spiking neurons could be achieved in a self-organizing manner through local synaptic plasticity. However, the capabilities and limitations of this learning rule could so far only be tested through computer simulations. This article provides tools for an analytic treatment of reward-modulated STDP, which allows us to predict under which conditions reward-modulated STDP will achieve a desired learning effect. These analytical results imply that neurons can learn through reward-modulated STDP to classify not only spatial but also temporal firing patterns of presynaptic neurons. They also can learn to respond to specific presynaptic firing patterns with particular spike patterns. Finally, the resulting learning theory predicts that even difficult credit-assignment problems, where it is very hard to tell which synaptic weights should be modified in order to increase the global reward for the system, can be solved in a self-organizing manner through reward-modulated STDP. This yields an explanation for a fundamental experimental result on biofeedback in monkeys by Fetz and Baker. In this experiment monkeys were rewarded for increasing the firing rate of a particular neuron in the cortex and were able to solve this extremely difficult credit assignment problem. Our model for this experiment relies on a combination of reward-modulated STDP with variable spontaneous firing activity. Hence it also provides a possible functional explanation for trial-to-trial variability, which is characteristic for cortical networks of neurons but has no analogue in currently existing artificial computing systems. In addition our model demonstrates that reward-modulated STDP can be applied to all synapses in a large recurrent neural network without endangering the stability of the network dynamics."
},
{
"pmid": "25719670",
"title": "Human-level control through deep reinforcement learning.",
"abstract": "The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks."
},
{
"pmid": "25734662",
"title": "A spiking neural network model of model-free reinforcement learning with high-dimensional sensory input and perceptual ambiguity.",
"abstract": "A theoretical framework of reinforcement learning plays an important role in understanding action selection in animals. Spiking neural networks provide a theoretically grounded means to test computational hypotheses on neurally plausible algorithms of reinforcement learning through numerical simulation. However, most of these models cannot handle observations which are noisy, or occurred in the past, even though these are inevitable and constraining features of learning in real environments. This class of problem is formally known as partially observable reinforcement learning (PORL) problems. It provides a generalization of reinforcement learning to partially observable domains. In addition, observations in the real world tend to be rich and high-dimensional. In this work, we use a spiking neural network model to approximate the free energy of a restricted Boltzmann machine and apply it to the solution of PORL problems with high-dimensional observations. Our spiking network model solves maze tasks with perceptually ambiguous high-dimensional observations without knowledge of the true environment. An extended model with working memory also solves history-dependent tasks. The way spiking neural networks handle PORL problems may provide a glimpse into the underlying laws of neural information processing which can only be discovered through such a top-down approach."
},
{
"pmid": "15987953",
"title": "Dopamine cells respond to predicted events during classical conditioning: evidence for eligibility traces in the reward-learning network.",
"abstract": "Behavioral conditioning of cue-reward pairing results in a shift of midbrain dopamine (DA) cell activity from responding to the reward to responding to the predictive cue. However, the precise time course and mechanism underlying this shift remain unclear. Here, we report a combined single-unit recording and temporal difference (TD) modeling approach to this question. The data from recordings in conscious rats showed that DA cells retain responses to predicted reward after responses to conditioned cues have developed, at least early in training. This contrasts with previous TD models that predict a gradual stepwise shift in latency with responses to rewards lost before responses develop to the conditioned cue. By exploring the TD parameter space, we demonstrate that the persistent reward responses of DA cells during conditioning are only accurately replicated by a TD model with long-lasting eligibility traces (nonzero values for the parameter lambda) and low learning rate (alpha). These physiological constraints for TD parameters suggest that eligibility traces and low per-trial rates of plastic modification may be essential features of neural circuits for reward learning in the brain. Such properties enable rapid but stable initiation of learning when the number of stimulus-reward pairings is limited, conferring significant adaptive advantages in real-world environments."
},
{
"pmid": "16764506",
"title": "Optimal spike-timing-dependent plasticity for precise action potential firing in supervised learning.",
"abstract": "In timing-based neural codes, neurons have to emit action potentials at precise moments in time. We use a supervised learning paradigm to derive a synaptic update rule that optimizes by gradient ascent the likelihood of postsynaptic firing at one or several desired firing times. We find that the optimal strategy of up- and downregulating synaptic efficacies depends on the relative timing between presynaptic spike arrival and desired postsynaptic firing. If the presynaptic spike arrives before the desired postsynaptic spike timing, our optimal learning rule predicts that the synapse should become potentiated. The dependence of the potentiation on spike timing directly reflects the time course of an excitatory postsynaptic potential. However, our approach gives no unique reason for synaptic depression under reversed spike timing. In fact, the presence and amplitude of depression of synaptic efficacies for reversed spike timing depend on how constraints are implemented in the optimization problem. Two different constraints, control of postsynaptic rates and control of temporal locality, are studied. The relation of our results to spike-timing-dependent plasticity and reinforcement learning is discussed."
},
{
"pmid": "26601905",
"title": "Neurotransmitters and Novelty: A Systematic Review.",
"abstract": "Our brains are highly responsive to novelty. However, how novelty is processed in the brain, and what neurotransmitter systems play a role therein, remains elusive. Here, we systematically review studies on human participants that have looked at the neuromodulatory basis of novelty detection and processing. While theoretical models and studies on nonhuman animals have pointed to a role of the dopaminergic, cholinergic, noradrenergic and serotonergic systems, the human literature has focused almost exclusively on the first two. Dopamine was found to affect electrophysiological responses to novelty early in time after stimulus presentation, but evidence on its effects on later processing was found to be contradictory: While neuropharmacological studies mostly yielded null effects, gene studies did point to an important role for dopamine. Acetylcholine seems to dampen novelty signals in the medial temporal lobe, but boost them in frontal cortex. Findings on 5-HT (serotonin) were found to be mostly contradictory. Two large gaps were identified in the literature. First, few studies have looked at neuromodulatory influences on behavioral effects of novelty. Second, no study has looked at the involvement of the noradrenergic system in novelty processing."
},
{
"pmid": "24507189",
"title": "Learning by the dendritic prediction of somatic spiking.",
"abstract": "Recent modeling of spike-timing-dependent plasticity indicates that plasticity involves as a third factor a local dendritic potential, besides pre- and postsynaptic firing times. We present a simple compartmental neuron model together with a non-Hebbian, biologically plausible learning rule for dendritic synapses where plasticity is modulated by these three factors. In functional terms, the rule seeks to minimize discrepancies between somatic firings and a local dendritic potential. Such prediction errors can arise in our model from stochastic fluctuations as well as from synaptic input, which directly targets the soma. Depending on the nature of this direct input, our plasticity rule subserves supervised or unsupervised learning. When a reward signal modulates the learning rate, reinforcement learning results. Hence a single plasticity rule supports diverse learning paradigms."
},
{
"pmid": "27536234",
"title": "Closed Loop Interactions between Spiking Neural Network and Robotic Simulators Based on MUSIC and ROS.",
"abstract": "In order to properly assess the function and computational properties of simulated neural systems, it is necessary to account for the nature of the stimuli that drive the system. However, providing stimuli that are rich and yet both reproducible and amenable to experimental manipulations is technically challenging, and even more so if a closed-loop scenario is required. In this work, we present a novel approach to solve this problem, connecting robotics and neural network simulators. We implement a middleware solution that bridges the Robotic Operating System (ROS) to the Multi-Simulator Coordinator (MUSIC). This enables any robotic and neural simulators that implement the corresponding interfaces to be efficiently coupled, allowing real-time performance for a wide range of configurations. This work extends the toolset available for researchers in both neurorobotics and computational neuroscience, and creates the opportunity to perform closed-loop experiments of arbitrary complexity to address questions in multiple areas, including embodiment, agency, and reinforcement learning."
},
{
"pmid": "29652587",
"title": "SuperSpike: Supervised Learning in Multilayer Spiking Neural Networks.",
"abstract": "A vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multilayer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multilayer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike time patterns."
}
] |
Diagnostics | 31295856 | PMC6787581 | 10.3390/diagnostics9030072 | Skin Lesion Segmentation in Dermoscopic Images with Combination of YOLO and GrabCut Algorithm | Skin lesion segmentation has a critical role in the early and accurate diagnosis of skin cancer by computerized systems. However, automatic segmentation of skin lesions in dermoscopic images is a challenging task owing to difficulties including artifacts (hairs, gel bubbles, ruler markers), indistinct boundaries, low contrast and varying sizes and shapes of the lesion images. This paper proposes a novel and effective pipeline for skin lesion segmentation in dermoscopic images combining a deep convolutional neural network named as You Only Look Once (YOLO) and the GrabCut algorithm. This method performs lesion segmentation using a dermoscopic image in four steps: 1. Removal of hairs on the lesion, 2. Detection of the lesion location, 3. Segmentation of the lesion area from the background, 4. Post-processing with morphological operators. The method was evaluated on two publicly well-known datasets, that is the PH2 and the ISBI 2017 (Skin Lesion Analysis Towards Melanoma Detection Challenge Dataset). The proposed pipeline model has achieved a 90% sensitivity rate on the ISBI 2017 dataset, outperforming other deep learning-based methods. The method also obtained close results according to the results obtained from other methods in the literature in terms of metrics of accuracy, specificity, Dice coefficient, and Jaccard index. | 2. Related Works
Specific and prominent features of lesion images play a critical role in the classification of melanoma. These features can only be obtained by proper segmentation of the skin lesion from the surrounding tissue. Segmenting the lesion from the surrounding normal tissue and extracting more representative features are essential for a robust and effective diagnosis [14,15]. Several segmentation methods have been developed to segment skin lesions automatically or semi-automatically [16,17,18,19]. These segmentation methods can be grouped into five categories. Histogram thresholding methods try to identify a threshold value for segmenting the lesion from the surrounding tissue [20,21,22]. Unsupervised clustering approaches use the color space properties of RGB dermoscopic images to obtain homogeneous regions [23,24,25,26,27,28,29]. Edge-based and region-based methods take advantage of the edge operator and of algorithms such as region splitting or merging [28,30,31]. Active contour methods utilize metaheuristic algorithms, genetic algorithms, snake algorithms, etc., for segmentation of the lesion area [24,32,33,34]. The last group comprises supervised segmentation methods, which segment the skin lesion by training recognizers such as support vector machines (SVMs), decision trees (DTs), and artificial neural networks (ANNs) [24,35]. More detailed information on these methods can be found in the most comprehensive and current reviews of segmentation techniques used in skin lesions [17,36,37]. All these techniques rely on low-level, pixel-based features. Therefore, these classical segmentation techniques are unable to achieve satisfactory results and cannot overcome difficulties such as fuzzy lesion boundaries, hair artifacts, low contrast, and other image artifacts. Nowadays, deep learning-based methods, especially CNNs, have achieved significant success in image classification, object detection, and segmentation problems [13,38,39]. 
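Before turning to the CNN-based approaches reviewed below, the classical thresholding pipeline and the two overlap measures quoted throughout this section (the Dice coefficient and the Jaccard index) can be made concrete with a minimal Python sketch. The sketch is illustrative only and does not reproduce the implementation of any of the cited works; the file names lesion.png and lesion_gt.png, the use of the blue channel, and the helper-function names are assumptions made for this example.

import cv2
import numpy as np

def segment_by_otsu(image_bgr):
    """Roughly separate the lesion from the surrounding skin by Otsu
    thresholding of the blue channel, followed by morphological clean-up."""
    blue = image_bgr[:, :, 0]                     # OpenCV loads images in BGR order
    blue = cv2.GaussianBlur(blue, (5, 5), 0)      # mildly suppress hairs and noise
    # Lesions are usually darker than the surrounding skin, so invert the binary mask.
    _, mask = cv2.threshold(blue, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove small specks
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    return mask > 0                               # boolean lesion mask

def dice_coefficient(pred, truth):
    """Dice = 2*|A intersect B| / (|A| + |B|) for two boolean masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def jaccard_index(pred, truth):
    """Jaccard = |A intersect B| / |A union B| for two boolean masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

if __name__ == "__main__":
    image = cv2.imread("lesion.png")                               # placeholder dermoscopic image
    truth = cv2.imread("lesion_gt.png", cv2.IMREAD_GRAYSCALE) > 0  # placeholder ground-truth mask
    pred = segment_by_otsu(image)
    print("Dice coefficient: %.3f" % dice_coefficient(pred, truth))
    print("Jaccard index:    %.3f" % jaccard_index(pred, truth))

Both measures range from 0 to 1, with values closer to 1 indicating better agreement between the predicted mask and the manually delineated lesion border; the results of the studies reviewed below are reported in exactly these terms (sometimes as percentages).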
The main reason behind the success of CNNs is their capability of hierarchical feature learning, extracting high-level, robust features from the raw image data. There are different types of CNN architectures for different purposes such as classification, segmentation, and object detection and localization [13,40,41]. In addition to natural image classification, CNNs have also achieved great success in diverse medical problems such as the detection of mitosis in histology images [42], brain tumor segmentation in MR images [43], and breast cancer detection in mammography images [44], among others. A detailed review is presented by Litjens et al. [45]. CNNs have also achieved state-of-the-art results in semantic segmentation. Various deep CNN architectures have been proposed for semantic segmentation, such as the Fully Convolutional Network (FCN) [44], U-Net [41], SegNet [46], and DeepLab [47]. More about these semantic segmentation methods can be found in a detailed review [48]. New developments in CNN architectures with semantic segmentation capability have been applied to the segmentation of skin lesions in recent years. For instance, in 2017, Yu et al. presented an end-to-end deep network consisting of two stages, segmentation and classification [49]. They developed a fully convolutional residual network (FCRN) utilizing the power of deep residual networks and took second place with an accuracy of 94.9% in the segmentation category of the International Symposium on Biomedical Imaging (ISBI) 2016 Challenge [50]. Another deep residual network (DRN) introduced by the same team took first place with an accuracy of 85.5% in the classification category of the same challenge. In another study, Yuan et al. introduced a skin lesion segmentation technique utilizing an FCN [51]. They improved the FCN model by using an unconventional loss function based on the Jaccard distance. In this way, they addressed the imbalance between the surrounding skin and lesion pixels. Evaluated on two well-known public datasets, the ISBI 2016 and the PH2, it achieved accuracies of 95.5% and 93.7%, respectively. Bi et al. developed a multistage FCN and a parallel integration (PI) method to segment skin lesions in dermoscopic images [10]. The PI method combined with the FCN helps further improve the boundaries of the segmented skin lesions. It was evaluated on two publicly available datasets, the ISBI 2016 and the PH2 [52], attaining accuracy rates of 95.51% and 94.24% and Dice coefficients of 91.18% and 90.66%, respectively. In another study in 2017, Goyal et al. introduced a deep network for multi-class semantic skin lesion segmentation by means of an FCN. Their deep FCN architecture succeeded in segmenting three classes of skin lesions: melanoma, benign nevi, and seborrheic keratoses. They used the ISBI 2017 Challenge dataset to evaluate the method. This deep FCN architecture attained Dice coefficients of 55.7%, 65.3%, and 78.5% for seborrheic keratosis, melanoma, and benign lesions, respectively [53]. Lin et al. compared two skin lesion segmentation approaches: one based on the deep convolutional U-Net and the other on C-means clustering [54]. The comparison was carried out on the ISBI 2017 Challenge dataset [50]. The U-Net achieved a Dice coefficient of 77%, while the clustering method remained at 61%. The results show that the U-Net significantly outperformed the clustering method.
In 2017, Yuan et al. presented a deep neural network architecture consisting of convolutional and deconvolutional layers (CDNN) for skin lesion segmentation [55]. They trained their model on the ISBI 2017 dataset using dermoscopic images in different color spaces. Their CDNN architecture took first place in the ISBI 2017 Challenge with a Jaccard index of 76.5%. In 2018, Al-Masni et al. proposed a novel skin lesion segmentation approach called the full resolution convolutional network (FrCN) [56]. The advantage of this model is that it eliminates subsampling layers and uses full-resolution inputs during training. In this way, the desired specific features can be obtained from the input image more easily. They evaluated their deep model on two publicly available datasets, the PH2 and the ISBI 2017. Test results for sensitivity, specificity, and accuracy were 85.40%, 96.69%, and 94.03% on the ISBI 2017 dataset and 93.72%, 95.65%, and 95.08% on the PH2 dataset, respectively. In 2018, Li et al. presented a deep model called the dense deconvolutional network (DNN) for the segmentation of skin lesions. Their model consists of dense deconvolutional layers (DDL), chained residual pooling (CRP), and hierarchical supervision (HS) [57]. The DDLs are trained to maintain the same resolution for the input and output images without prior knowledge or complicated post-processing procedures. CRP is used to extract rich contextual information by combining local and global contextual features, while HS serves as an auxiliary loss and improves the prediction mask. They evaluated their model on the ISBI 2017 dataset and obtained a Dice coefficient of 0.866, a Jaccard index of 0.765, and an accuracy of 0.939. In 2018, Peng et al. used a segmentation architecture based on adversarial networks, utilizing a generative adversarial network (GAN) to assist the segmentation of skin lesions [58]. They used a U-Net-based network as the generator and a CNN as the discriminator to distinguish the ground-truth mask from the generated mask. They evaluated their model on the ISBI 2016 dataset and achieved an average segmentation accuracy of 0.97 and a Dice coefficient of 0.94. Recently, in 2019, Yuan et al. proposed a segmentation method [59] that improves on their earlier study [51]. They developed a deeper, 29-layer network architecture and used small kernel filters to capture more detailed features and increase the discrimination capacity of the architecture. They evaluated their method on the ISBI 2017 dataset and achieved a Jaccard index of 0.76. | [
"28369739",
"26476255",
"24958263",
"30620402",
"29313949",
"12074856",
"28941871",
"28600236",
"11341712",
"19121917",
"23063256",
"26411929",
"27265054",
"22676490",
"24081839",
"20923456",
"20529909",
"21507072",
"21226876",
"19159382",
"20970307",
"26151933",
"28778026",
"28463186",
"30130171",
"28436853",
"29903489",
"30047917",
"29990146",
"27295650",
"26886976",
"29439500",
"29047032"
] | [
{
"pmid": "28369739",
"title": "The global burden of melanoma: results from the Global Burden of Disease Study 2015.",
"abstract": "BACKGROUND\nDespite recent improvements in prevention, diagnosis and treatment, vast differences in melanoma burden still exist between populations. Comparative data can highlight these differences and lead to focused efforts to reduce the burden of melanoma.\n\n\nOBJECTIVES\nTo assess global, regional and national melanoma incidence, mortality and disability-adjusted life year (DALY) estimates from the Global Burden of Disease Study 2015.\n\n\nMETHODS\nVital registration system and cancer registry data were used for melanoma mortality modelling. Incidence and prevalence were estimated using separately modelled mortality-to-incidence ratios. Total prevalence was divided into four disease phases and multiplied by disability weights to generate years lived with disability (YLDs). Deaths in each age group were multiplied by the reference life expectancy to generate years of life lost (YLLs). YLDs and YLLs were added to estimate DALYs.\n\n\nRESULTS\nThe five world regions with the greatest melanoma incidence, DALY and mortality rates were Australasia, North America, Eastern Europe, Western Europe and Central Europe. With the exception of regions in sub-Saharan Africa, DALY and mortality rates were greater in men than in women. DALY rate by age was highest in those aged 75-79 years, 70-74 years and ≥ 80 years.\n\n\nCONCLUSIONS\nThe greatest burden from melanoma falls on Australasian, North American, European, elderly and male populations, which is consistent with previous investigations. These substantial disparities in melanoma burden worldwide highlight the need for aggressive prevention efforts. The Global Burden of Disease Study results can help shape melanoma research and public policy."
},
{
"pmid": "26476255",
"title": "Skin Cancer Epidemiology, Detection, and Management.",
"abstract": "Although the signs and symptoms of the 3 most common skin malignancies are well known to physicians, any new or changing lesions should be monitored and worked up to rule out varying forms of cutaneous malignancy. Classic presenting features of each condition exist, but patients may present with overlapping or atypical features, and a biopsy is almost always required to definitively determine the true nature of each disorder. Given the intense psychosocial ramifications of skin cancer diagnosis and treatment, early detection remains the hallmark in producing favorable outcomes."
},
{
"pmid": "24958263",
"title": "Studies of Secondary Melanoma on C57BL/6J Mouse Liver Using 1H NMR Metabolomics.",
"abstract": "NMR metabolomics, consisting of solid state high resolution magic angle spinning (HR-MAS) 1H-NMR, liquid state high resolution 1H-NMR, and principal components analysis (PCA) has been used to study secondary metastatic B16-F10 melanoma in C57BL/6J mouse liver. The melanoma group can be differentiated from its control group by PCA analysis of the estimates of absolute concentrations from liquid state 1H-NMR spectra on liver tissue extracts or by the estimates of absolute peak intensities of metabolites from 1H HR-MAS-NMR data on intact liver tissues. In particular, we found that the estimates of absolute concentrations of glutamate, creatine, fumarate and cholesterol are elevated in the melanoma group as compared to controls, while the estimates of absolute concentrations of succinate, glycine, glucose, and the family of linear lipids including long chain fatty acids, total choline and acyl glycerol are decreased. The ratio of glycerophosphocholine (GPC) to phosphocholine (PCho) is increased by about 1.5 fold in the melanoma group, while the estimate of absolute concentration of total choline is actually lower in melanoma mice. These results suggest the following picture in secondary melanoma metastasis: Linear lipid levels are decreased by beta oxidation in the melanoma group, which contributes to an increase in the synthesis of cholesterol, and also provides an energy source input for TCA cycle. These findings suggest a link between lipid oxidation, the TCA cycle and the hypoxia-inducible factors (HIF) signal pathway in tumor metastases. Thus, this study indicates that the metabolic profile derived from NMR analysis can provide a valuable bio-signature of malignancy and cell hypoxia in metastatic melanoma."
},
{
"pmid": "30620402",
"title": "Cancer statistics, 2019.",
"abstract": "Each year, the American Cancer Society estimates the numbers of new cancer cases and deaths that will occur in the United States and compiles the most recent data on cancer incidence, mortality, and survival. Incidence data, available through 2015, were collected by the Surveillance, Epidemiology, and End Results Program; the National Program of Cancer Registries; and the North American Association of Central Cancer Registries. Mortality data, available through 2016, were collected by the National Center for Health Statistics. In 2019, 1,762,450 new cancer cases and 606,880 cancer deaths are projected to occur in the United States. Over the past decade of data, the cancer incidence rate (2006-2015) was stable in women and declined by approximately 2% per year in men, whereas the cancer death rate (2007-2016) declined annually by 1.4% and 1.8%, respectively. The overall cancer death rate dropped continuously from 1991 to 2016 by a total of 27%, translating into approximately 2,629,200 fewer cancer deaths than would have been expected if death rates had remained at their peak. Although the racial gap in cancer mortality is slowly narrowing, socioeconomic inequalities are widening, with the most notable gaps for the most preventable cancers. For example, compared with the most affluent counties, mortality rates in the poorest counties were 2-fold higher for cervical cancer and 40% higher for male lung and liver cancers during 2012-2016. Some states are home to both the wealthiest and the poorest counties, suggesting the opportunity for more equitable dissemination of effective cancer prevention, early detection, and treatment strategies. A broader application of existing cancer control knowledge with an emphasis on disadvantaged groups would undoubtedly accelerate progress against cancer."
},
{
"pmid": "29313949",
"title": "Cancer statistics, 2018.",
"abstract": "Each year, the American Cancer Society estimates the numbers of new cancer cases and deaths that will occur in the United States and compiles the most recent data on cancer incidence, mortality, and survival. Incidence data, available through 2014, were collected by the Surveillance, Epidemiology, and End Results Program; the National Program of Cancer Registries; and the North American Association of Central Cancer Registries. Mortality data, available through 2015, were collected by the National Center for Health Statistics. In 2018, 1,735,350 new cancer cases and 609,640 cancer deaths are projected to occur in the United States. Over the past decade of data, the cancer incidence rate (2005-2014) was stable in women and declined by approximately 2% annually in men, while the cancer death rate (2006-2015) declined by about 1.5% annually in both men and women. The combined cancer death rate dropped continuously from 1991 to 2015 by a total of 26%, translating to approximately 2,378,600 fewer cancer deaths than would have been expected if death rates had remained at their peak. Of the 10 leading causes of death, only cancer declined from 2014 to 2015. In 2015, the cancer death rate was 14% higher in non-Hispanic blacks (NHBs) than non-Hispanic whites (NHWs) overall (death rate ratio [DRR], 1.14; 95% confidence interval [95% CI], 1.13-1.15), but the racial disparity was much larger for individuals aged <65 years (DRR, 1.31; 95% CI, 1.29-1.32) compared with those aged ≥65 years (DRR, 1.07; 95% CI, 1.06-1.09) and varied substantially by state. For example, the cancer death rate was lower in NHBs than NHWs in Massachusetts for all ages and in New York for individuals aged ≥65 years, whereas for those aged <65 years, it was 3 times higher in NHBs in the District of Columbia (DRR, 2.89; 95% CI, 2.16-3.91) and about 50% higher in Wisconsin (DRR, 1.78; 95% CI, 1.56-2.02), Kansas (DRR, 1.51; 95% CI, 1.25-1.81), Louisiana (DRR, 1.49; 95% CI, 1.38-1.60), Illinois (DRR, 1.48; 95% CI, 1.39-1.57), and California (DRR, 1.45; 95% CI, 1.38-1.54). Larger racial inequalities in young and middle-aged adults probably partly reflect less access to high-quality health care. CA Cancer J Clin 2018;68:7-30. © 2018 American Cancer Society."
},
{
"pmid": "28941871",
"title": "Accuracy of dermatoscopy for the diagnosis of nonpigmented cancers of the skin.",
"abstract": "BACKGROUND\nNonpigmented skin cancer is common, and diagnosis with the unaided eye is error prone.\n\n\nOBJECTIVE\nTo investigate whether dermatoscopy improves the diagnostic accuracy for nonpigmented (amelanotic) cutaneous neoplasms.\n\n\nMETHODS\nWe collected a sample of 2072 benign and malignant neoplastic lesions and inflammatory conditions and presented close-up images taken with and without dermatoscopy to 95 examiners with different levels of experience.\n\n\nRESULTS\nThe area under the curve was significantly higher with than without dermatoscopy (0.68 vs 0.64, P < .001). Among 51 possible diagnoses, the correct diagnosis was selected in 33.1% of cases with and 26.4% of cases without dermatoscopy (P < .001). For experts, the frequencies of correct specific diagnoses of a malignant lesion improved from 40.2% without to 51.3% with dermatoscopy. For all malignant neoplasms combined, the frequencies of appropriate management strategies increased from 78.1% without to 82.5% with dermatoscopy.\n\n\nLIMITATIONS\nThe study deviated from a real-life clinical setting and was potentially affected by verification and selection bias.\n\n\nCONCLUSIONS\nDermatoscopy improves the diagnosis and management of nonpigmented skin cancer and should be used as an adjunct to examination with the unaided eye."
},
{
"pmid": "28600236",
"title": "Dermoscopic Image Segmentation via Multistage Fully Convolutional Networks.",
"abstract": "OBJECTIVE\nSegmentation of skin lesions is an important step in the automated computer aided diagnosis of melanoma. However, existing segmentation methods have a tendency to over- or under-segment the lesions and perform poorly when the lesions have fuzzy boundaries, low contrast with the background, inhomogeneous textures, or contain artifacts. Furthermore, the performance of these methods are heavily reliant on the appropriate tuning of a large number of parameters as well as the use of effective preprocessing techniques, such as illumination correction and hair removal.\n\n\nMETHODS\nWe propose to leverage fully convolutional networks (FCNs) to automatically segment the skin lesions. FCNs are a neural network architecture that achieves object detection by hierarchically combining low-level appearance information with high-level semantic information. We address the issue of FCN producing coarse segmentation boundaries for challenging skin lesions (e.g., those with fuzzy boundaries and/or low difference in the textures between the foreground and the background) through a multistage segmentation approach in which multiple FCNs learn complementary visual characteristics of different skin lesions; early stage FCNs learn coarse appearance and localization information while late-stage FCNs learn the subtle characteristics of the lesion boundaries. We also introduce a new parallel integration method to combine the complementary information derived from individual segmentation stages to achieve a final segmentation result that has accurate localization and well-defined lesion boundaries, even for the most challenging skin lesions.\n\n\nRESULTS\nWe achieved an average Dice coefficient of 91.18% on the ISBI 2016 Skin Lesion Challenge dataset and 90.66% on the PH2 dataset.\n\n\nCONCLUSION AND SIGNIFICANCE\nOur extensive experimental results on two well-established public benchmark datasets demonstrate that our method is more effective than other state-of-the-art methods for skin lesion segmentation."
},
{
"pmid": "11341712",
"title": "Automated melanoma recognition.",
"abstract": "A system for the computerized analysis of images obtained from ELM has been developed to enhance the early recognition of malignant melanoma. As an initial step, the binary mask of the skin lesion is determined by several basic segmentation algorithms together with a fusion strategy. A set of features containing shape and radiometric features as well as local and global parameters is calculated to describe the malignancy of a lesion. Significant features are then selected from this set by application of statistical feature subset selection methods. The final kNN classification delivers a sensitivity of 87% with a specificity of 92%."
},
{
"pmid": "19121917",
"title": "Lesion border detection in dermoscopy images.",
"abstract": "BACKGROUND\nDermoscopy is one of the major imaging modalities used in the diagnosis of melanoma and other pigmented skin lesions. Due to the difficulty and subjectivity of human interpretation, computerized analysis of dermoscopy images has become an important research area. One of the most important steps in dermoscopy image analysis is the automated detection of lesion borders.\n\n\nMETHODS\nIn this article, we present a systematic overview of the recent border detection methods in the literature paying particular attention to computational issues and evaluation aspects.\n\n\nCONCLUSION\nCommon problems with the existing approaches include the acquisition, size, and diagnostic distribution of the test image set, the evaluation of the results, and the inadequate description of the employed methods. Border determination by dermatologists appears to depend upon higher-level knowledge, therefore it is likely that the incorporation of domain knowledge in automated methods will enable them to perform better, especially in sets of images with a variety of diagnoses."
},
{
"pmid": "23063256",
"title": "Computerized analysis of pigmented skin lesions: a review.",
"abstract": "OBJECTIVE\nComputerized analysis of pigmented skin lesions (PSLs) is an active area of research that dates back over 25years. One of its main goals is to develop reliable automatic instruments for recognizing skin cancer from images acquired in vivo. This paper presents a review of this research applied to microscopic (dermoscopic) and macroscopic (clinical) images of PSLs. The review aims to: (1) provide an extensive introduction to and clarify ambiguities in the terminology used in the literature and (2) categorize and group together relevant references so as to simplify literature searches on a specific sub-topic.\n\n\nMETHODS AND MATERIAL\nThe existing literature was classified according to the nature of publication (clinical or computer vision articles) and differentiating between individual and multiple PSL image analysis. We also emphasize the importance of the difference in content between dermoscopic and clinical images.\n\n\nRESULTS\nVarious approaches for implementing PSL computer-aided diagnosis systems and their standard workflow components are reviewed and summary tables provided. An extended categorization of PSL feature descriptors is also proposed, associating them with the specific methods for diagnosing melanoma, separating images of the two modalities and discriminating references according to our classification of the literature.\n\n\nCONCLUSIONS\nThere is a large discrepancy in the number of articles published on individual and multiple PSL image analysis and a scarcity of reported material on the automation of lesion change detection. At present, computer-aided diagnosis systems based on individual PSL image analysis cannot yet be used to provide the best diagnostic results. Furthermore, the absence of benchmark datasets for standardized algorithm evaluation is a barrier to a more dynamic development of this research area."
},
{
"pmid": "26411929",
"title": "A Review of the Quantification and Classification of Pigmented Skin Lesions: From Dedicated to Hand-Held Devices.",
"abstract": "In recent years, the incidence of skin cancer cases has risen, worldwide, mainly due to the prolonged exposure to harmful ultraviolet radiation. Concurrently, the computer-assisted medical diagnosis of skin cancer has undergone major advances, through an improvement in the instrument and detection technology, and the development of algorithms to process the information. Moreover, because there has been an increased need to store medical data, for monitoring, comparative and assisted-learning purposes, algorithms for data processing and storage have also become more efficient in handling the increase of data. In addition, the potential use of common mobile devices to register high-resolution images of skin lesions has also fueled the need to create real-time processing algorithms that may provide a likelihood for the development of malignancy. This last possibility allows even non-specialists to monitor and follow-up suspected skin cancer cases. In this review, we present the major steps in the pre-processing, processing and post-processing of skin lesion images, with a particular emphasis on the quantification and classification of pigmented skin lesions. We further review and outline the future challenges for the creation of minimum-feature, automated and real-time algorithms for the detection of skin cancer from images acquired via common mobile devices."
},
{
"pmid": "27265054",
"title": "Computational methods for the image segmentation of pigmented skin lesions: A review.",
"abstract": "BACKGROUND AND OBJECTIVES\nBecause skin cancer affects millions of people worldwide, computational methods for the segmentation of pigmented skin lesions in images have been developed in order to assist dermatologists in their diagnosis. This paper aims to present a review of the current methods, and outline a comparative analysis with regards to several of the fundamental steps of image processing, such as image acquisition, pre-processing and segmentation.\n\n\nMETHODS\nTechniques that have been proposed to achieve these tasks were identified and reviewed. As to the image segmentation task, the techniques were classified according to their principle.\n\n\nRESULTS\nThe techniques employed in each step are explained, and their strengths and weaknesses are identified. In addition, several of the reviewed techniques are applied to macroscopic and dermoscopy images in order to exemplify their results.\n\n\nCONCLUSIONS\nThe image segmentation of skin lesions has been addressed successfully in many studies; however, there is a demand for new methodologies in order to improve the efficiency."
},
{
"pmid": "22676490",
"title": "Lesion border detection in dermoscopy images using ensembles of thresholding methods.",
"abstract": "BACKGROUND\nDermoscopy is one of the major imaging modalities used in the diagnosis of melanoma and other pigmented skin lesions. Due to the difficulty and subjectivity of human interpretation, automated analysis of dermoscopy images has become an important research area. Border detection is often the first step in this analysis. In many cases, the lesion can be roughly separated from the background skin using a thresholding method applied to the blue channel. However, no single thresholding method appears to be robust enough to successfully handle the wide variety of dermoscopy images encountered in clinical practice.\n\n\nMETHODS\nIn this article, we present an automated method for detecting lesion borders in dermoscopy images using ensembles of thres holding methods.\n\n\nCONCLUSION\nExperiments on a difficult set of 90 images demonstrate that the proposed method is robust, fast, and accurate when compared to nine state-of-the-art methods."
},
{
"pmid": "24081839",
"title": "Simpler, faster, more accurate melanocytic lesion segmentation through MEDS.",
"abstract": "We present a new technique for melanocytic lesion segmentation, Mimicking Expert Dermatologists' Segmentations (MEDS), and extensive tests of its accuracy, speed, and robustness. MEDS combines a thresholding scheme reproducing the cognitive process of dermatologists with a number of optimizations that may be of independent interest. MEDS is simple, with a single parameter tuning its “tightness”. It is extremely fast, segmenting medium-resolution images in a fraction of a second even with the modest computational resources of a cell phone-an improvement of an order of magnitude or more over state-of-the-art techniques. And it is extremely accurate: very experienced dermatologists disagree with its segmentations less than they disagree with the segmentations of state-of-the-art techniques, and in fact less than they disagree with the segmentations of dermatologists of moderate experience."
},
{
"pmid": "20923456",
"title": "Unsupervised segmentation for digital dermoscopic images.",
"abstract": "BACKGROUND\nSkin cancer is among the most common types of cancer. Melanoma is the most fatal of all skin cancer types. The only effective treatment is early excision. Recognising melanoma is challenging both for general physicians and for expert dermatologists. A computer-aided diagnostic system improving diagnostic accuracy would be of great importance. Segmenting the lesion from the skin is the first step in this process.\n\n\nMETHODS\nThe present segmentation algorithm uses a multiscale approach for density analysis. Only the skin mode is found by density analysis and then the location of the lesion mode is estimated. The density estimates are attained by Gaussian kernel smoothing with several bandwidths. A new algorithm for hair recognition based on morphological operations on binary images is incorporated into the segmentation algorithm.\n\n\nRESULTS\nThe algorithm provides correct segmentation for both unimodal and multimodal densities. The segmentation is totally unsupervised, with a digital image as the only input. The algorithm has been tested on an independent set of images collected in dermatological practice, and the segmentation is verified by three dermatologists.\n\n\nCONCLUSION\nThe present segmentation algorithm is fast and intuitive. It gives correct segmentation for most types of skin lesions, but fails when the lesion is brighter than the surrounding skin."
},
{
"pmid": "20529909",
"title": "A soft kinetic data structure for lesion border detection.",
"abstract": "MOTIVATION\nThe medical imaging and image processing techniques, ranging from microscopic to macroscopic, has become one of the main components of diagnostic procedures to assist dermatologists in their medical decision-making processes. Computer-aided segmentation and border detection on dermoscopic images is one of the core components of diagnostic procedures and therapeutic interventions for skin cancer. Automated assessment tools for dermoscopic images have become an important research field mainly because of inter- and intra-observer variations in human interpretations. In this study, a novel approach-graph spanner-for automatic border detection in dermoscopic images is proposed. In this approach, a proximity graph representation of dermoscopic images in order to detect regions and borders in skin lesion is presented.\n\n\nRESULTS\nGraph spanner approach is examined on a set of 100 dermoscopic images whose manually drawn borders by a dermatologist are used as the ground truth. Error rates, false positives and false negatives along with true positives and true negatives are quantified by digitally comparing results with manually determined borders from a dermatologist. The results show that the highest precision and recall rates obtained to determine lesion boundaries are 100%. However, accuracy of assessment averages out at 97.72% and borders errors' mean is 2.28% for whole dataset."
},
{
"pmid": "21507072",
"title": "Skin tumor area extraction using an improved dynamic programming approach.",
"abstract": "BACKGROUND/PURPOSE\nBorder (B) description of melanoma and other pigmented skin lesions is one of the most important tasks for the clinical diagnosis of dermoscopy images using the ABCD rule. For an accurate description of the border, there must be an effective skin tumor area extraction (STAE) method. However, this task is complicated due to uneven illumination, artifacts present in the lesions and smooth areas or fuzzy borders of the desired regions.\n\n\nMETHODS\nIn this paper, a novel STAE algorithm based on improved dynamic programming (IDP) is presented. The STAE technique consists of the following four steps: color space transform, pre-processing, rough tumor area detection and refinement of the segmented area. The procedure is performed in the CIE L(*) a(*) b(*) color space, which is approximately uniform and is therefore related to dermatologist's perception. After pre-processing the skin lesions to reduce artifacts, the DP algorithm is improved by introducing a local cost function, which is based on color and texture weights.\n\n\nRESULTS\nThe STAE method is tested on a total of 100 dermoscopic images. In order to compare the performance of STAE with other state-of-the-art algorithms, various statistical measures based on dermatologist-drawn borders are utilized as a ground truth. The proposed method outperforms the others with a sensitivity of 96.64%, a specificity of 98.14% and an error probability of 5.23%.\n\n\nCONCLUSION\nThe results demonstrate that this STAE method by IDP is an effective solution when compared with other state-of-the-art segmentation techniques. The proposed method can accurately extract tumor borders in dermoscopy images."
},
{
"pmid": "21226876",
"title": "Lesion border detection in dermoscopy images using dynamic programming.",
"abstract": "BACKGROUND/PURPOSE\nAutomated border detection is an important and challenging task in the computerized analysis of dermoscopy images. However, dermoscopic images often contain artifacts such as illumination, dermoscopic gel, and outline (hair, skin lines, ruler markings, and blood vessels). As a result, there is a need for robust methods to remove artifacts and detect lesion borders in dermoscopy images.\n\n\nMETHODS\nThis automated method consists of three main steps: (1) preprocessing, (2) edge candidate point detection, and (3) tumor outline delineation. First, algorithms to reduce artifacts were used. Second, a least-squares method (LSM) was performed to acquire edge points. Third, dynamic programming (DP) technique was used to find the optimal boundary of the lesion. Statistical measures based on dermatologist-drawn borders were utilized as ground-truth to evaluate the performance of the proposed method.\n\n\nRESULTS\nThe method is tested on a total of 240 dermoscopic images: 30 benign melanocytic, 50 malignant melanomas, 50 basal cell carcinomas, 20 Merkel cell carcinomas, 60 seborrheic keratosis, and 30 atypical naevi. We obtained mean border detection error of 8.6%, 5.04%, 9.0%, 7.02%, 2.01%, and 3.24%, respectively.\n\n\nCONCLUSIONS\nThe results demonstrate that border detection combined with artifact removal increases sensitivity and specificity for segmentation of lesions in dermoscopy images."
},
{
"pmid": "19159382",
"title": "Border detection in dermoscopy images using statistical region merging.",
"abstract": "BACKGROUND\nAs a result of advances in skin imaging technology and the development of suitable image processing techniques, during the last decade, there has been a significant increase of interest in the computer-aided diagnosis of melanoma. Automated border detection is one of the most important steps in this procedure, because the accuracy of the subsequent steps crucially depends on it.\n\n\nMETHODS\nIn this article, we present a fast and unsupervised approach to border detection in dermoscopy images of pigmented skin lesions based on the statistical region merging algorithm.\n\n\nRESULTS\nThe method is tested on a set of 90 dermoscopy images. The border detection error is quantified by a metric in which three sets of dermatologist-determined borders are used as the ground-truth. The proposed method is compared with four state-of-the-art automated methods (orientation-sensitive fuzzy c-means, dermatologist-like tumor extraction algorithm, meanshift clustering, and the modified JSEG method).\n\n\nCONCLUSION\nThe results demonstrate that the method presented here achieves both fast and accurate border detection in dermoscopy images."
},
{
"pmid": "20970307",
"title": "Modified watershed technique and post-processing for segmentation of skin lesions in dermoscopy images.",
"abstract": "In previous research, a watershed-based algorithm was shown to be useful for automatic lesion segmentation in dermoscopy images, and was tested on a set of 100 benign and malignant melanoma images with the average of three sets of dermatologist-drawn borders used as the ground truth, resulting in an overall error of 15.98%. In this study, to reduce the border detection errors, a neural network classifier was utilized to improve the first-pass watershed segmentation; a novel \"edge object value (EOV) threshold\" method was used to remove large light blobs near the lesion boundary; and a noise removal procedure was applied to reduce the peninsula-shaped false-positive areas. As a result, an overall error of 11.09% was achieved."
},
{
"pmid": "26151933",
"title": "Simultaneous Multi-Structure Segmentation and 3D Nonrigid Pose Estimation in Image-Guided Robotic Surgery.",
"abstract": "In image-guided robotic surgery, segmenting the endoscopic video stream into meaningful parts provides important contextual information that surgeons can exploit to enhance their perception of the surgical scene. This information provides surgeons with real-time decision-making guidance before initiating critical tasks such as tissue cutting. Segmenting endoscopic video is a challenging problem due to a variety of complications including significant noise attributed to bleeding and smoke from cutting, poor appearance contrast between different tissue types, occluding surgical tools, and limited visibility of the objects' geometries on the projected camera views. In this paper, we propose a multi-modal approach to segmentation where preoperative 3D computed tomography scans and intraoperative stereo-endoscopic video data are jointly analyzed. The idea is to segment multiple poorly visible structures in the stereo/multichannel endoscopic videos by fusing reliable prior knowledge captured from the preoperative 3D scans. More specifically, we estimate and track the pose of the preoperative models in 3D and consider the models' non-rigid deformations to match with corresponding visual cues in multi-channel endoscopic video and segment the objects of interest. Further, contrary to most augmented reality frameworks in endoscopic surgery that assume known camera parameters, an assumption that is often violated during surgery due to non-optimal camera calibration and changes in camera focus/zoom, our method embeds these parameters into the optimization hence correcting the calibration parameters within the segmentation process. We evaluate our technique on synthetic data, ex vivo lamb kidney datasets, and in vivo clinical partial nephrectomy surgery with results demonstrating high accuracy and robustness."
},
{
"pmid": "28778026",
"title": "A survey on deep learning in medical image analysis.",
"abstract": "Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks. Concise overviews are provided of studies per application area: neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, musculoskeletal. We end with a summary of the current state-of-the-art, a critical discussion of open challenges and directions for future research."
},
{
"pmid": "28463186",
"title": "DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.",
"abstract": "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed \"DeepLab\" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online."
},
{
"pmid": "30130171",
"title": "Melanoma Recognition in Dermoscopy Images via Aggregated Deep Convolutional Features.",
"abstract": "In this paper, we present a novel framework for dermoscopy image recognition via both a deep learning method and a local descriptor encoding strategy. Specifically, deep representations of a rescaled dermoscopy image are first extracted via a very deep residual neural network pretrained on a large natural image dataset. Then these local deep descriptors are aggregated by orderless visual statistic features based on Fisher vector (FV) encoding to build a global image representation. Finally, the FV encoded representations are used to classify melanoma images using a support vector machine with a Chi-squared kernel. Our proposed method is capable of generating more discriminative features to deal with large variations within melanoma classes, as well as small variations between melanoma and nonmelanoma classes with limited training data. Extensive experiments are performed to demonstrate the effectiveness of our proposed method. Comparisons with state-of-the-art methods show the superiority of our method using the publicly available ISBI 2016 Skin lesion challenge dataset."
},
{
"pmid": "28436853",
"title": "Automatic Skin Lesion Segmentation Using Deep Fully Convolutional Networks With Jaccard Distance.",
"abstract": "Automatic skin lesion segmentation in dermoscopic images is a challenging task due to the low contrast between lesion and the surrounding skin, the irregular and fuzzy lesion borders, the existence of various artifacts, and various imaging acquisition conditions. In this paper, we present a fully automatic method for skin lesion segmentation by leveraging 19-layer deep convolutional neural networks that is trained end-to-end and does not rely on prior knowledge of the data. We propose a set of strategies to ensure effective and efficient learning with limited training data. Furthermore, we design a novel loss function based on Jaccard distance to eliminate the need of sample re-weighting, a typical procedure when using cross entropy as the loss function for image segmentation due to the strong imbalance between the number of foreground and background pixels. We evaluated the effectiveness, efficiency, as well as the generalization capability of the proposed framework on two publicly available databases. One is from ISBI 2016 skin lesion analysis towards melanoma detection challenge, and the other is the PH2 database. Experimental results showed that the proposed method outperformed other state-of-the-art algorithms on these two databases. Our method is general enough and only needs minimum pre- and post-processing, which allows its adoption in a variety of medical image segmentation tasks."
},
{
"pmid": "29903489",
"title": "Skin lesion segmentation in dermoscopy images via deep full resolution convolutional networks.",
"abstract": "BACKGROUND AND OBJECTIVE\nAutomatic segmentation of skin lesions in dermoscopy images is still a challenging task due to the large shape variations and indistinct boundaries of the lesions. Accurate segmentation of skin lesions is a key prerequisite step for any computer-aided diagnostic system to recognize skin melanoma.\n\n\nMETHODS\nIn this paper, we propose a novel segmentation methodology via full resolution convolutional networks (FrCN). The proposed FrCN method directly learns the full resolution features of each individual pixel of the input data without the need for pre- or post-processing operations such as artifact removal, low contrast adjustment, or further enhancement of the segmented skin lesion boundaries. We evaluated the proposed method using two publicly available databases, the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 Challenge and PH2 datasets. To evaluate the proposed method, we compared the segmentation performance with the latest deep learning segmentation approaches such as the fully convolutional network (FCN), U-Net, and SegNet.\n\n\nRESULTS\nOur results showed that the proposed FrCN method segmented the skin lesions with an average Jaccard index of 77.11% and an overall segmentation accuracy of 94.03% for the ISBI 2017 test dataset and 84.79% and 95.08%, respectively, for the PH2 dataset. In comparison to FCN, U-Net, and SegNet, the proposed FrCN outperformed them by 4.94%, 15.47%, and 7.48% for the Jaccard index and 1.31%, 3.89%, and 2.27% for the segmentation accuracy, respectively. Furthermore, the proposed FrCN achieved a segmentation accuracy of 95.62% for some representative clinical benign cases, 90.78% for the melanoma cases, and 91.29% for the seborrheic keratosis cases in the ISBI 2017 test dataset, exhibiting better performance than those of FCN, U-Net, and SegNet.\n\n\nCONCLUSIONS\nWe conclude that using the full spatial resolutions of the input image could enable to learn better specific and prominent features, leading to an improvement in the segmentation performance."
},
{
"pmid": "30047917",
"title": "Dense Deconvolutional Network for Skin Lesion Segmentation.",
"abstract": "Automatic delineation of skin lesion contours from dermoscopy images is a basic step in the process of diagnosis and treatment of skin lesions. However, it is a challenging task due to the high variation of appearances and sizes of skin lesions. In order to deal with such challenges, we propose a new dense deconvolutional network (DDN) for skin lesion segmentation based on residual learning. Specifically, the proposed network consists of dense deconvolutional layers (DDLs), chained residual pooling (CRP), and hierarchical supervision (HS). First, unlike traditional deconvolutional layers, DDLs are adopted to maintain the dimensions of the input and output images unchanged. The DDNs are trained in an end-to-end manner without the need of prior knowledge or complicated postprocessing procedures. Second, the CRP aims to capture rich contextual background information and to fuse multilevel features. By combining the local and global contextual information via multilevel feature fusion, the high-resolution prediction output is obtained. Third, HS is added to serve as an auxiliary loss and to refine the prediction mask. Extensive experiments based on the public ISBI 2016 and 2017 skin lesion challenge datasets demonstrate the superior segmentation results of our proposed method over the state-of-the-art methods."
},
{
"pmid": "29990146",
"title": "Improving Dermoscopic Image Segmentation with Enhanced Convolutional-Deconvolutional Networks.",
"abstract": "Automatic skin lesion segmentation on dermoscopic images is an essential step in computer-aided diagnosis of melanoma. However, this task is challenging due to significant variations of lesion appearances across different patients. This challenge is further exacerbated when dealing with a large amount of image data. In this paper, we extended our previous work by developing a deeper network architecture with smaller kernels to enhance its discriminant capacity. In addition, we explicitly included color information from multiple color spaces to facilitate network training and thus to further improve the segmentation performance. We participated and extensively evaluated our method on the ISBI 2017 skin lesion segmentation challenge. By training with the 2000 challenge training images, our method achieved an average Jaccard Index (JA) of 0:765 on the 600 challenge testing images, which ranked itself in the first place among 21 final submissions in the challenge."
},
{
"pmid": "27295650",
"title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.",
"abstract": "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features-using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available."
},
{
"pmid": "26886976",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning.",
"abstract": "Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained from natural image dataset to medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks."
},
{
"pmid": "29439500",
"title": "Skin Lesion Analysis towards Melanoma Detection Using Deep Learning Network.",
"abstract": "Skin lesions are a severe disease globally. Early detection of melanoma in dermoscopy images significantly increases the survival rate. However, the accurate recognition of melanoma is extremely challenging due to the following reasons: low contrast between lesions and skin, visual similarity between melanoma and non-melanoma lesions, etc. Hence, reliable automatic detection of skin tumors is very useful to increase the accuracy and efficiency of pathologists. In this paper, we proposed two deep learning methods to address three main tasks emerging in the area of skin lesion image processing, i.e., lesion segmentation (task 1), lesion dermoscopic feature extraction (task 2) and lesion classification (task 3). A deep learning framework consisting of two fully convolutional residual networks (FCRN) is proposed to simultaneously produce the segmentation result and the coarse classification result. A lesion index calculation unit (LICU) is developed to refine the coarse classification results by calculating the distance heat-map. A straight-forward CNN is proposed for the dermoscopic feature extraction task. The proposed deep learning frameworks were evaluated on the ISIC 2017 dataset. Experimental results show the promising accuracies of our frameworks, i.e., 0.753 for task 1, 0.848 for task 2 and 0.912 for task 3 were achieved."
},
{
"pmid": "29047032",
"title": "Rethinking Skin Lesion Segmentation in a Convolutional Classifier.",
"abstract": "Melanoma is a fatal form of skin cancer when left undiagnosed. Computer-aided diagnosis systems powered by convolutional neural networks (CNNs) can improve diagnostic accuracy and save lives. CNNs have been successfully used in both skin lesion segmentation and classification. For reasons heretofore unclear, previous works have found image segmentation to be, conflictingly, both detrimental and beneficial to skin lesion classification. We investigate the effect of expanding the segmentation border to include pixels surrounding the target lesion. Ostensibly, segmenting a target skin lesion will remove inessential information, non-lesion skin, and artifacts to aid in classification. Our results indicate that segmentation border enlargement produces, to a certain degree, better results across all metrics of interest when using a convolutional based classifier built using the transfer learning paradigm. Consequently, preprocessing methods which produce borders larger than the actual lesion can potentially improve classifier performance, more than both perfect segmentation, using dermatologist created ground truth masks, and no segmentation altogether."
}
] |
Health Information Science and Systems | 31656594 | PMC6790203 | 10.1007/s13755-019-0084-2 | Neural attention with character embeddings for hay fever detection from twitter | The paper aims to leverage the highly unstructured user-generated content in the context of pollen allergy surveillance using neural networks with character embeddings and the attention mechanism. Currently, there is no accurate representation of hay fever prevalence, particularly in real-time scenarios. Social media serves as an alternative to extract knowledge about the condition, which is valuable for allergy sufferers, general practitioners, and policy makers. Despite tremendous potential offered, conventional natural language processing methods prove limited when exposed to the challenging nature of user-generated content. As a result, the detection of actual hay fever instances among the number of false positives, as well as the correct identification of non-technical expressions as pollen allergy symptoms poses a major problem. We propose a deep architecture enhanced with character embeddings and neural attention to improve the performance of hay fever-related content classification from Twitter data. Improvement in prediction is achieved due to the character-level semantics introduced, which effectively addresses the out-of-vocabulary problem in our dataset where the rate is approximately 9%. Overall, the study is a step forward towards improved real-time pollen allergy surveillance from social media with state-of-art technology. | Related workHealth surveillance from social mediaIndividuals often prefer to share health-related experiences with peers, rather than during clinical studies, or even physicians [10]. In addition, the knowledge based solely on health practitioners’ reports and patients’ surveys tend to be generic and often limited in scope [8]. Furthermore, Cvetkovski et al. [9] reported that hay fever tend to be self-managed and availability of over-the-counter medications leads to bypassing the health care professionals, putting additional pressure on complementary data sources about the condition surveillance.Given the limitations, social media has opened an enormous opportunity for public health surveillance from directly affected users. In particular, Twitter has recorded approximately three million active accounts since January 2018 in Australia [7]. Due to its short format, Twitter also encourages the high frequency of updates [18]. This in turn generates an abundance of data, commonly concerning the health-related matters [5]. As a result, Twitter has drawn attention from public health communities to answer numerous health-related questions [2].In the case of pollen allergy surveillance, De Quincey et al. [11, 12] demonstrated that Twitter enables researchers to access information regarding the specific pollen allergy symptoms, as well as the medications usage and effectiveness. The comparison with UK Pollen Hotzones further proved that geolocated Twitter data is a good proxy for the condition prevalence estimation due to the similar distribution [12]. In the other study, Gesualdo et al. [14] observed the high correlation between pollen counts and tweets reporting hay fever incidents in the study conducted in the US. 
The results obtained serve as a proof of concept of the potential role of social media in signalling allergic symptoms and drug consumption trends.

Hay fever prevalence in Australia
Three million Australian adults struggle through spring and summer with symptoms such as watery eyes, running nose, itchy throat, sneezing or irritability. Pollen allergy is also considered the most common chronic respiratory disease in Australia [1], posing a significant health and economic burden [9]. The quality of life of allergy sufferers is substantially reduced, affecting physical, psychological, and social functioning [35]. According to the National Health Survey conducted by the Australian Bureau of Statistics, the prevalence of allergic rhinitis among Australians has been measured over the past 15 years and has shown growth over time (Fig. 1).
Fig. 1: Prevalence of allergic rhinitis sufferers in Australia [1]
Exact estimation of hay fever prevalence proves a challenging task due to limited resources, i.e. time- and cost-consuming official statistics, marketing surveys, pharmaceutical data, etc. The usual peak in hay fever occurrences is observed around the spring and summer period. However, observed climate changes are lengthening the pollen seasons, increasing the intensity of allergens, and introducing unexpected new pollens in certain areas [35]. Additionally, increasing air pollution, especially around urban areas, further affects the respiratory health of the population. This in turn adds uncertainty to accurate hay fever prevalence estimation. Real-time monitoring therefore proves invaluable for allergy sufferers, health practitioners, and policy makers.

Deep learning in text classification
Previous studies on allergy surveillance from social media, conducted in the UK and US, utilized either traditional machine learning classifiers, including Naive Bayes [6, 24], or lexicon-based approaches [14, 11, 12]. Despite the wealth of knowledge that social media offers, the natural language used still constitutes a major challenge in tweet analysis and forms an obstacle to relevant information extraction [29]. For instance, highly informal and continuously evolving expressions such as 'dribbling nose' and 'hay fever sob' prove difficult to classify as potential symptoms and to map to their medical equivalents, i.e. 'runny nose' and 'watery eyes'. The lack of advanced Natural Language Processing (NLP) techniques addressing the above-mentioned issues limits the applicability of these approaches to emerging symptoms/treatments not identified a priori. Despite the existing shortcomings, no previous study applied deep learning to user-generated content classification in the context of hay fever. Furthermore, the performance improvement obtainable with neural attention is yet to be explored in the literature.
Deep learning has already proven successful in text classification tasks, outperforming conventional machine learning techniques [16, 21, 26, 27] by effectively capturing both syntactic (e.g. allergy, allergic, allergen, etc.) and semantic (e.g. hay fever, pollen allergy, allergic rhinitis) word dependencies. Also, deep learning alleviates the need for laborious and time-consuming manual feature engineering; the most distinctive features are extracted automatically from the raw input during model training. The successful application of deep learning has been reported in numerous NLP tasks, including topic categorization [19], machine translation [31], sentence modelling [20], and Part-Of-Speech tagging [4].
Among many neural architectures, Recurrent Neural Networks (RNNs), in particular Long Short-Term Memory networks (LSTMs) [15], are widely used to model text sequences due to their capability of modeling long-range dependencies and storing historical information over time [32]. The attention mechanism can further boost the performance of RNNs by focusing on the time-steps that are most critical to the task [15]. In regular RNNs (without attention), the prediction is made from the output at the final time-step. With attention, the RNN output is kept at every time-step, and the mechanism then selects and combines the most important outputs based on their relevance to the task [13]. Improved performance of RNNs with attention over RNNs without attention was reported in a case study on information extraction from cancer pathology reports [13]. (A minimal code sketch of such an attention layer over LSTM outputs is given after this record's reference list.) | [
"29362452",
"10717968",
"26197474",
"15883903",
"29582403"
] | [
{
"pmid": "29362452",
"title": "Tell me about your hay fever: a qualitative investigation of allergic rhinitis management from the perspective of the patient.",
"abstract": "Allergic rhinitis (AR) is sub-optimally managed in the community and is responsible for a significant health and economic burden. Uncontrolled AR increases the risk of poorly controlled asthma and presents an increased susceptibility to thunderstorm asthma. With the availability of treatments over-the-counter, bypassing the health care professional (HCP), the role of the patient is paramount. Research on the role of the patient in AR management in the current environment is limited. This study aims to explore the patient perspective of AR management and understand why it is sub-optimally managed in the community. Patient perspectives of AR management were explored utilizing a qualitative, phenomenological approach. Adults with AR were included in the study and interviewed. Transcripts were analyzed for recurrent themes and emergent concepts. Forty-seven participants with AR were interviewed about their experiences. Patient reports of delayed diagnosis, treatment fatigue and confidence in the ability to manage their AR themselves, heavily influenced their management preferences. Patients also described barriers associated with AR management including financial expense as well as being mistaken for having an infectious disease. Patients described examples of the impact on their quality of life caused by their AR, yet they strongly believed they could manage it themselves. This belief that AR is a condition that should be entirely self-managed, contributes to its burden. It amplifies patients' separation from HCPs and having access to guidelines aimed at optimizing their AR control."
},
{
"pmid": "10717968",
"title": "Who talks? The social psychology of illness support groups.",
"abstract": "More Americans try to change their health behaviors through self-help than through all other forms of professionally designed programs. Mutual support groups, involving little or no cost to participants, have a powerful effect on mental and physical health, yet little is known about patterns of support group participation in health care. What kinds of illness experiences prompt patients to seek each other's company? In an effort to observe social comparison processes with real-world relevance, support group participation was measured for 20 disease categories in 4 metropolitan areas (New York, Chicago, Los Angeles, and Dallas) and on 2 on-line forums. Support seeking was highest for diseases viewed as stigmatizing (e.g., AIDS, alcoholism, breast and prostate cancer) and was lowest for less embarrassing but equally devastating disorders, such as heart disease. The authors discuss implications for social comparison theory and its applications in health care."
},
{
"pmid": "26197474",
"title": "Can Twitter Be a Source of Information on Allergy? Correlation of Pollen Counts with Tweets Reporting Symptoms of Allergic Rhinoconjunctivitis and Names of Antihistamine Drugs.",
"abstract": "Pollen forecasts are in use everywhere to inform therapeutic decisions for patients with allergic rhinoconjunctivitis (ARC). We exploited data derived from Twitter in order to identify tweets reporting a combination of symptoms consistent with a case definition of ARC and those reporting the name of an antihistamine drug. In order to increase the sensitivity of the system, we applied an algorithm aimed at automatically identifying jargon expressions related to medical terms. We compared weekly Twitter trends with National Allergy Bureau weekly pollen counts derived from US stations, and found a high correlation of the sum of the total pollen counts from each stations with tweets reporting ARC symptoms (Pearson's correlation coefficient: 0.95) and with tweets reporting antihistamine drug names (Pearson's correlation coefficient: 0.93). Longitude and latitude of the pollen stations affected the strength of the correlation. Twitter and other social networks may play a role in allergic disease surveillance and in signaling drug consumptions trends."
},
{
"pmid": "15883903",
"title": "Understanding interobserver agreement: the kappa statistic.",
"abstract": "Items such as physical exam findings, radiographic interpretations, or other diagnostic tests often rely on some degree of subjective interpretation by observers. Studies that measure the agreement between two or more observers should include a statistic that takes into account the fact that observers will sometimes agree or disagree simply by chance. The kappa statistic (or kappa coefficient) is the most commonly used statistic for this purpose. A kappa of 1 indicates perfect agreement, whereas a kappa of 0 indicates agreement equivalent to chance. A limitation of kappa is that it is affected by the prevalence of the finding under observation. Methods to overcome this limitation have been described."
},
{
"pmid": "29582403",
"title": "Medications and Prescribing Patterns as Factors Associated with Hospitalizations from Long-Term Care Facilities: A Systematic Review.",
"abstract": "BACKGROUND\nResidents of long-term care facilities (LTCFs) are at high risk of hospitalization. Medications are a potentially modifiable risk factor for hospitalizations.\n\n\nOBJECTIVE\nOur objective was to systematically review the association between medications or prescribing patterns and hospitalizations from LTCFs.\n\n\nMETHODS\nWe searched MEDLINE, Embase, Cumulative Index to Nursing and Allied Health Literature (CINAHL) and International Pharmaceutical Abstracts (IPA) from inception to August 2017 for longitudinal studies reporting associations between medications or prescribing patterns and hospitalizations. Two independent investigators completed the study selection, data extraction and quality assessment using the Joanna Briggs Institute Critical Appraisal Tools.\n\n\nRESULTS\nThree randomized controlled trials (RCTs), 22 cohort studies, five case-control studies, one case-time-control study and one case-crossover study, investigating 13 different medication classes and two prescribing patterns were included. An RCT demonstrated that high-dose influenza vaccination reduced all-cause hospitalization compared with standard-dose vaccination (risk ratio [RR] 0.93; 95% confidence interval [CI] 0.88-0.98). Another RCT found no difference in hospitalization rates between oseltamivir as influenza treatment and oseltamivir as treatment plus prophylaxis (treatment = 4.7%, treatment and prophylaxis = 3.5%; p = 0.7). The third RCT found no difference between multivitamin/mineral supplementation and hospitalization (odds ratio [OR] 0.94; 95% CI 0.74-1.20) or emergency department visits (OR 1.05; 95% CI 0.76-1.47). Two cohort studies demonstrated influenza vaccination reduced hospitalization. Four studies suggested polypharmacy and potentially inappropriate medications (PIMs) increased all-cause hospitalization. However, associations between polypharmacy (two studies), PIMs (one study) and fall-related hospitalizations were inconsistent. Inconsistent associations were found between psychotropic medications with all-cause and cause-specific hospitalizations (11 studies). Warfarin, nonsteroidal anti-inflammatory drugs, pantoprazole and vinpocetine but not long-term acetylsalicylic acid (aspirin), statins, trimetazidine, digoxin or β-blockers were associated with all-cause or cause-specific hospitalizations in single studies of specific resident populations. Most cohort studies assessed prevalent rather than incident medication exposure, and no studies considered time-varying medication use.\n\n\nCONCLUSION\nHigh-quality evidence suggests influenza vaccination reduces hospitalization. Polypharmacy and PIMs are consistently associated with increased all-cause hospitalization."
}
] |
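To make the attention mechanism described in the related-work text of the record above more concrete, here is a minimal PyTorch sketch of a character-embedding BiLSTM classifier with an additive attention layer that weights and combines the per-time-step outputs. It is an illustration only, not the paper's implementation; the class name, vocabulary size, and all hyperparameters are assumptions chosen for the example.

```python
import torch
import torch.nn as nn

class AttentionLSTMClassifier(nn.Module):
    """Illustrative sketch: character embeddings -> BiLSTM -> additive attention -> classifier."""
    def __init__(self, vocab_size=128, emb_dim=50, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)           # character-level embeddings
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.attn_score = nn.Linear(2 * hidden_dim, 1)           # scores each time-step output
        self.out = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, char_ids):                                 # char_ids: (batch, seq_len)
        h, _ = self.lstm(self.embed(char_ids))                   # h: (batch, seq_len, 2*hidden)
        scores = self.attn_score(h).squeeze(-1)                  # (batch, seq_len)
        weights = torch.softmax(scores, dim=1)                   # attention over time-steps
        context = torch.bmm(weights.unsqueeze(1), h).squeeze(1)  # weighted sum of outputs
        return self.out(context)                                 # class logits

# Example usage on random character indices (4 tweets, 140 characters each)
model = AttentionLSTMClassifier()
logits = model(torch.randint(0, 128, (4, 140)))
print(logits.shape)                                              # torch.Size([4, 2])
```

The attention weights can also be inspected per character position, which is one common way to check which parts of a tweet drive the prediction.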
Frontiers in Neurorobotics | 31649524 | PMC6795673 | 10.3389/fnbot.2019.00082 | Robust Event-Based Object Tracking Combining Correlation Filter and CNN Representation | Object tracking based on the event-based camera or dynamic vision sensor (DVS) remains a challenging task due to the noise events, rapid change of event-stream shape, chaos of complex background textures, and occlusion. To address the challenges, this paper presents a robust event-stream object tracking method based on correlation filter mechanism and convolutional neural network (CNN) representation. In the proposed method, rate coding is used to encode the event-stream object. Feature representations from hierarchical convolutional layers of a pre-trained CNN are used to represent the appearance of the rate encoded event-stream object. Results prove that the proposed method not only achieves good tracking performance in many complicated scenes with noise events, complex background textures, occlusion, and intersected trajectories, but also is robust to variable scale, variable pose, and non-rigid deformations. In addition, the correlation filter-based method has the advantage of high speed. The proposed approach will promote the potential applications of these event-based vision sensors in autonomous driving, robots and many other high-speed scenes. | Related WorksObject tracking methods based on event cameras can be classified into two categories. The first category is the event-driven mechanism in which each incoming event is processed and determined whether it belongs to the target object. In Litzenberger et al. (2006) implemented a continuous clustering of AER events and tracking of clusters. Each new event was assigned to a cluster based on a distance criterion and then the clusters weight and center position was updated. In addition, point cloud method is also introduced to model the event-stream object. In Ni et al. (2015) proposed an iterative closest point based tracking method by providing a continuous and iterative estimation of the geometric transformation between the model and the events of the tracked object. In Ni et al. (2012), the authors applied the iterative closest point tracking algorithm to track a microgripper position in an event-based microrobotic system. One disadvantage of these kinds of methods is that noise events occur will cause the tracker to make a wrong inference. Adding noise event filtering modules to the tracking system will unavoidably filter many informative events while increase the computational complexity of the system. In addition, although these event-based sensors are based on the event-driven nature, it is still a difficult task to recognize an object from each single event. The second category is based on feature representation of the target object. In Zhu et al. (2017), the authors proposed a soft data association modeled with probabilities relying on grouping events into a model and computing optical flow after assigning events to the model. In Lagorce et al. (2014), proposed an event-based multi-kernel algorithm, and various kernels, such as Gaussian, Gabor, and arbitrary user-defined kernels were used to handle the variations in position, scale and orientation. In Li et al. (2015), the authors prosed a compressive sensing based method for the robust tracking based on the event camera. 
The representation or appearance model of the event-stream object is based on features extracted from a multi-scale space with a data-independent basis and employs non-adaptive random projections that preserve the structure of the feature space of objects.
The core of most modern trackers is a discriminative classifier that distinguishes the target from the surrounding environment. In computer vision, CF-based methods have enjoyed great popularity due to their high computational efficiency through the use of fast Fourier transforms. Bolme et al. (2010) first learned a correlation filter over the luminance channel for real-time visual tracking, named the MOSSE tracker. In Henriques et al. (2012, 2015), a kernelized correlation filter (KCF) was introduced to allow non-linear classification boundaries. Nowadays, features from convolutional neural networks (CNNs) are used to encode the object appearance and achieve good performance (Danelljan et al., 2015; Ma et al., 2015). In Danelljan et al. (2015), the authors proposed a method combining activations from a convolutional layer of a CNN with discriminative correlation filter based tracking frameworks, achieving superior performance with convolutional features compared to standard hand-crafted feature representations. They also showed that activations from the first layer provide superior tracking performance compared to the deeper layers of the network. In Ma et al. (2015), the authors exploit the hierarchies of convolutional layers as a non-linear counterpart of an image pyramid representation and use these multiple levels of abstraction to improve tracking accuracy and robustness. They demonstrate that representation by multiple CNN layers is of great importance, as semantics are robust to significant appearance variations while spatial details are effective for precise localization. Although feature-based methods show robustness and real-time capability, their most serious drawback is that such algorithms must accumulate the events in a time window before performing feature extraction, and the length of this time window may differ between scenes. (A minimal code sketch of Fourier-domain correlation filter learning is given after this record's reference list.) | [
"26353263",
"27630540",
"25248193",
"25710087"
] | [
{
"pmid": "26353263",
"title": "High-Speed Tracking with Kernelized Correlation Filters.",
"abstract": "The core component of most modern trackers is a discriminative classifier, tasked with distinguishing between the target and the surrounding environment. To cope with natural image changes, this classifier is typically trained with translated and scaled sample patches. Such sets of samples are riddled with redundancies-any overlapping pixels are constrained to be the same. Based on this simple observation, we propose an analytic model for datasets of thousands of translated patches. By showing that the resulting data matrix is circulant, we can diagonalize it with the discrete Fourier transform, reducing both storage and computation by several orders of magnitude. Interestingly, for linear regression our formulation is equivalent to a correlation filter, used by some of the fastest competitive trackers. For kernel regression, however, we derive a new kernelized correlation filter (KCF), that unlike other kernel algorithms has the exact same complexity as its linear counterpart. Building on it, we also propose a fast multi-channel extension of linear correlation filters, via a linear kernel, which we call dual correlation filter (DCF). Both KCF and DCF outperform top-ranking trackers such as Struck or TLD on a 50 videos benchmark, despite running at hundreds of frames-per-second, and being implemented in a few lines of code (Algorithm 1). To encourage further developments, our tracking framework was made open-source."
},
{
"pmid": "25248193",
"title": "Asynchronous Event-Based Multikernel Algorithm for High-Speed Visual Features Tracking.",
"abstract": "This paper presents a number of new methods for visual tracking using the output of an event-based asynchronous neuromorphic dynamic vision sensor. It allows the tracking of multiple visual features in real time, achieving an update rate of several hundred kilohertz on a standard desktop PC. The approach has been specially adapted to take advantage of the event-driven properties of these sensors by combining both spatial and temporal correlations of events in an asynchronous iterative framework. Various kernels, such as Gaussian, Gabor, combinations of Gabor functions, and arbitrary user-defined kernels, are used to track features from incoming events. The trackers described in this paper are capable of handling variations in position, scale, and orientation through the use of multiple pools of trackers. This approach avoids the N(2) operations per event associated with conventional kernel-based convolution operations with N × N kernels. The tracking performance was evaluated experimentally for each type of kernel in order to demonstrate the robustness of the proposed solution."
},
{
"pmid": "25710087",
"title": "Visual tracking using neuromorphic asynchronous event-based cameras.",
"abstract": "This letter presents a novel computationally efficient and robust pattern tracking method based on a time-encoded, frame-free visual data. Recent interdisciplinary developments, combining inputs from engineering and biology, have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous, temporal events. These events encode temporal contrast and intensity locally in space and time. We show that the sparse yet accurately timed information is well suited as a computational input for object tracking. In this letter, visual data processing is performed for each incoming event at the time it arrives. The method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object. It can handle isometry, similarities, and affine distortions and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard PC. Furthermore, by using the dimension of time that is currently underexploited by most artificial vision systems, the method we present is able to solve ambiguous cases of object occlusions that classical frame-based techniques handle poorly."
}
] |
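As an illustration of the correlation filter mechanism referenced in the related-works text of the record above (MOSSE-style filters learned and applied with fast Fourier transforms), the following NumPy sketch trains a single-channel filter in the Fourier domain and locates the response peak. It is a simplified, hypothetical example rather than the paper's method; the Gaussian target width, the regularization constant, and the function names are assumptions.

```python
import numpy as np

def gaussian_target(shape, sigma=2.0):
    """Desired correlation response: a Gaussian peak centered on the target."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = h // 2, w // 2
    return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

def train_mosse_like_filter(patches, sigma=2.0, lam=1e-2):
    """Learn a single-channel correlation filter in the Fourier domain
    from training patches of the (rate-encoded) target appearance."""
    G = np.fft.fft2(gaussian_target(patches[0].shape, sigma))
    A = np.zeros_like(G)          # accumulated numerator
    B = np.zeros_like(G)          # accumulated denominator
    for p in patches:
        F = np.fft.fft2(p)
        A += G * np.conj(F)
        B += F * np.conj(F)
    return A / (B + lam)          # filter in the Fourier domain

def respond(H, patch):
    """Correlate a search patch with the filter; the peak gives the new target position."""
    response = np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
    return np.unravel_index(np.argmax(response), response.shape)

# Example usage on a synthetic patch
patch = np.random.rand(64, 64)
H = train_mosse_like_filter([patch])
print(respond(H, patch))          # peak location, near the patch center
```

Because both training and detection reduce to element-wise operations on FFTs, the per-frame cost stays low, which is the efficiency argument made for CF-based trackers above.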
Frontiers in Neurorobotics | 31649523 | PMC6795684 | 10.3389/fnbot.2019.00076 | Hybrid Brain-Computer-Interfacing for Human-Compliant Robots: Inferring Continuous Subjective Ratings With Deep Regression | Appropriate robot behavior during human-robot interaction is a key part in the development of human-compliant assistive robotic systems. This study poses the question of how to continuously evaluate the quality of robotic behavior in a hybrid brain-computer interfacing (BCI) task, combining brain and non-brain signals, and how to use the collected information to adapt the robot's behavior accordingly. To this aim, we developed a rating system compatible with EEG recordings, requiring the users to execute only small movements with their thumb on a wireless controller to rate the robot's behavior on a continuous scale. The ratings were recorded together with dry EEG, respiration, ECG, and robotic joint angles in ROS. Pilot experiments were conducted with three users that had different levels of previous experience with robots. The results demonstrate the feasibility to obtain continuous rating data that give insight into the subjective user perception during direct human-robot interaction. The rating data suggests differences in subjective perception for users with no, moderate, or substantial previous robot experience. Furthermore, a variety of regression techniques, including deep CNNs, allowed us to predict the subjective ratings. Performance was better when using the position of the robotic hand than when using EEG, ECG, or respiration. A consistent advantage of features expected to be related to a motor bias could not be found. Across-user predictions showed that the models most likely learned a combination of general and individual features across-users. A transfer of pre-trained regressor to a new user was especially accurate in users with more experience. For future research, studies with more participants will be needed to evaluate the methodology for its use in practice. Data and code to reproduce this study are available at https://github.com/TNTLFreiburg/NiceBot. | 1.2. Related WorkIn the field of human-robot interaction, the assessment of robotic behavior has been a key part in a number of studies. Huang and Mutlu (2012) developed a toolbox for behavioral assessment of humanoid robots. There, the authors focus especially on human-like social behavior in robots. Ratings for variables of robot behavior, e.g., naturalness, likability, and competence, were collected after the experiments rather than during the actual interaction. Tapus et al. (2008) proposed a robot personality matching for robot behavior adaptation in post-stroke rehabilitation. To adapt the robots behavior, the authors used a Policy Gradient Reinforcement Learning (PGRL) Algorithm. The robot collected feedback from the user with voice recognition, using discrete classes such as “yes,” “no,” and “stop.” Sekmen and Challa (2013) combined sensory input from speech recognition, natural language processing, face detection and recognition, and implemented a Bayesian learning mechanism to estimate and update a parameter set that models behaviors and preferences of users. Specifically, they predict future actions of their users to prepare the robot for these. In a recent study of Sarkar et al. (2017), the effects of robot experience and personality of a user on the assessment of, among other factors, trust into the robot were assessed. 
Interestingly, the group of participants with previous robot experience rated their safety during the interaction with the robot lower than the group without previous robot experience. Less experienced people also rated the robot as more intelligent in this study.
Relevant to the decoding of perceived danger from EEG data, Kolkhorst et al. (2017) decoded the perceived hazardousness of traffic scenes from EEG data. This could also be used in human-robot interactions to prevent potentially dangerous situations. Kolkhorst et al. (2018) further developed an EEG-based target selection in collaboration with robotic effectors, which could harmonize well with the assessment of robot behavior in human-machine interactions. Ehrlich and Cheng (2018) recently developed a system to validate robot actions by decoding error-related signals from EEG. Related to this, a number of studies in recent years have shown that the performance of robots in BCI scenarios can be enhanced with error decoding, e.g., in shared-control BCIs (Iturrate et al., 2013) or during the observation of autonomous robots (Salazar-Gomez et al., 2017).
In recent years, promising new approaches to decoding information from brain signals for BCI control have been developed, e.g., deep learning with convolutional neural networks (CNNs). A major advantage of CNNs is that feature extraction and classification are combined into a single learning process, removing the need to manually extract features. After pioneering achievements in the field of computer vision, they are increasingly being adapted to problems of EEG decoding (Manor and Geva, 2015; Bashivan et al., 2016) and are the subject of active research (e.g., Eitel et al., 2015; Watter et al., 2015; Oliveira et al., 2016). These biologically inspired networks have great potential to improve the accuracy of BCI applications (Burget et al., 2017; Schirrmeister et al., 2017; Kuhner et al., 2019). They can additionally be applied to raw EEG data, greatly simplifying the design of BCI pipelines. We further demonstrated the usefulness of CNNs for error decoding from noninvasive (Völker et al., 2018c) and intracranial EEG (Völker et al., 2018b).
In contrast to discrete decoding problems, regression analysis with neural networks has recently become more popular. Most use cases shown so far applied regression methods to video or image data. For example, Held et al. (2016) used regression to track objects in videos at 100 frames per second. Shi et al. (2016) presented a regression approach to identify facial landmarks and subsequently align faces in images. In order to detect and localize robotic tools during robot-assisted surgery, Sarikaya et al. (2017) implemented a regression layer in a CNN. Miao et al. (2016) used regression techniques for real-time 2D and 3D registration of X-ray images. With a CNN regressor, Viereck et al. (2017) improved the accuracy of robotic grasping and object recognition with respect to simulated depth images. | [
"24835663",
"19969093",
"6103567",
"29932424",
"26696875",
"28000254",
"26829785",
"25719670",
"20582257",
"10576479",
"28186883",
"15188875",
"28782865",
"25859204",
"29471099",
"19665554"
] | [
{
"pmid": "24835663",
"title": "Frontal theta as a mechanism for cognitive control.",
"abstract": "Recent advancements in cognitive neuroscience have afforded a description of neural responses in terms of latent algorithmic operations. However, the adoption of this approach to human scalp electroencephalography (EEG) has been more limited, despite the ability of this methodology to quantify canonical neuronal processes. Here, we provide evidence that theta band activities over the midfrontal cortex appear to reflect a common computation used for realizing the need for cognitive control. Moreover, by virtue of inherent properties of field oscillations, these theta band processes may be used to communicate this need and subsequently implement such control across disparate brain regions. Thus, frontal theta is a compelling candidate mechanism by which emergent processes, such as 'cognitive control', may be biophysically realized."
},
{
"pmid": "19969093",
"title": "Frontal theta links prediction errors to behavioral adaptation in reinforcement learning.",
"abstract": "Investigations into action monitoring have consistently detailed a frontocentral voltage deflection in the event-related potential (ERP) following the presentation of negatively valenced feedback, sometimes termed the feedback-related negativity (FRN). The FRN has been proposed to reflect a neural response to prediction errors during reinforcement learning, yet the single-trial relationship between neural activity and the quanta of expectation violation remains untested. Although ERP methods are not well suited to single-trial analyses, the FRN has been associated with theta band oscillatory perturbations in the medial prefrontal cortex. Mediofrontal theta oscillations have been previously associated with expectation violation and behavioral adaptation and are well suited to single-trial analysis. Here, we recorded EEG activity during a probabilistic reinforcement learning task and fit the performance data to an abstract computational model (Q-learning) for calculation of single-trial reward prediction errors. Single-trial theta oscillatory activities following feedback were investigated within the context of expectation (prediction error) and adaptation (subsequent reaction time change). Results indicate that interactive medial and lateral frontal theta activities reflect the degree of negative and positive reward prediction error in the service of behavioral adaptation. These different brain areas use prediction error calculations for different behavioral adaptations, with medial frontal theta reflecting the utilization of prediction errors for reaction time slowing (specifically following errors), but lateral frontal theta reflecting prediction errors leading to working memory-related reaction time speeding for the correct choice."
},
{
"pmid": "29932424",
"title": "EEGNet: a compact convolutional neural network for EEG-based brain-computer interfaces.",
"abstract": "OBJECTIVE\nBrain-computer interfaces (BCI) enable direct communication with a computer, using neural activity as the control signal. This neural signal is generally chosen from a variety of well-studied electroencephalogram (EEG) signals. For a given BCI paradigm, feature extractors and classifiers are tailored to the distinct characteristics of its expected EEG control signal, limiting its application to that specific signal. Convolutional neural networks (CNNs), which have been used in computer vision and speech recognition to perform automatic feature extraction and classification, have successfully been applied to EEG-based BCIs; however, they have mainly been applied to single BCI paradigms and thus it remains unclear how these architectures generalize to other paradigms. Here, we ask if we can design a single CNN architecture to accurately classify EEG signals from different BCI paradigms, while simultaneously being as compact as possible.\n\n\nAPPROACH\nIn this work we introduce EEGNet, a compact convolutional neural network for EEG-based BCIs. We introduce the use of depthwise and separable convolutions to construct an EEG-specific model which encapsulates well-known EEG feature extraction concepts for BCI. We compare EEGNet, both for within-subject and cross-subject classification, to current state-of-the-art approaches across four BCI paradigms: P300 visual-evoked potentials, error-related negativity responses (ERN), movement-related cortical potentials (MRCP), and sensory motor rhythms (SMR).\n\n\nMAIN RESULTS\nWe show that EEGNet generalizes across paradigms better than, and achieves comparably high performance to, the reference algorithms when only limited training data is available across all tested paradigms. In addition, we demonstrate three different approaches to visualize the contents of a trained EEGNet model to enable interpretation of the learned features.\n\n\nSIGNIFICANCE\nOur results suggest that EEGNet is robust enough to learn a wide variety of interpretable features over a range of BCI tasks. Our models can be found at: https://github.com/vlawhern/arl-eegmodels."
},
{
"pmid": "26696875",
"title": "Convolutional Neural Network for Multi-Category Rapid Serial Visual Presentation BCI.",
"abstract": "Brain computer interfaces rely on machine learning (ML) algorithms to decode the brain's electrical activity into decisions. For example, in rapid serial visual presentation (RSVP) tasks, the subject is presented with a continuous stream of images containing rare target images among standard images, while the algorithm has to detect brain activity associated with target images. Here, we continue our previous work, presenting a deep neural network model for the use of single trial EEG classification in RSVP tasks. Deep neural networks have shown state of the art performance in computer vision and speech recognition and thus have great promise for other learning tasks, like classification of EEG samples. In our model, we introduce a novel spatio-temporal regularization for EEG data to reduce overfitting. We show improved classification performance compared to our earlier work on a five categories RSVP experiment. In addition, we compare performance on data from different sessions and validate the model on a public benchmark data set of a P300 speller task. Finally, we discuss the advantages of using neural network models compared to manually designing feature extraction algorithms."
},
{
"pmid": "28000254",
"title": "High and dry? Comparing active dry EEG electrodes to active and passive wet electrodes.",
"abstract": "Dry electrodes are becoming popular for both lab-based and consumer-level electrophysiological-recording technologies because they better afford the ability to move traditional lab-based research into the real world. It is unclear, however, how dry electrodes compare in data quality to traditional electrodes. The current study compared three EEG electrode types: (a) passive-wet electrodes with no onboard amplification, (b) actively amplified, wet electrodes with moderate impedance levels, and low impedance levels, and (c) active-dry electrodes with very high impedance. Participants completed a classic P3 auditory oddball task to elicit characteristic EEG signatures and event-related potentials (ERPs). Across the three electrode types, we compared single-trial noise, average ERPs, scalp topographies, ERP noise, and ERP statistical power as a function of number of trials. We extended past work showing active electrodes' insensitivity to moderate levels of interelectrode impedance when compared to passive electrodes in the same amplifier. Importantly, the new dry electrode system could reliably measure EEG spectra and ERP components comparable to traditional electrode types. As expected, however, dry active electrodes with very high interelectrode impedance exhibited marked increases in single-trial and average noise levels, which decreased statistical power, requiring more trials to detect significant effects. This power decrease must be considered as a trade-off with the ease of application and long-term use. The current results help set constraints on experimental design with novel dry electrodes, and provide important evidence needed to measure brain activity in novel settings and situations."
},
{
"pmid": "26829785",
"title": "A CNN Regression Approach for Real-Time 2D/3D Registration.",
"abstract": "In this paper, we present a Convolutional Neural Network (CNN) regression approach to address the two major limitations of existing intensity-based 2-D/3-D registration technology: 1) slow computation and 2) small capture range. Different from optimization-based methods, which iteratively optimize the transformation parameters over a scalar-valued metric function representing the quality of the registration, the proposed method exploits the information embedded in the appearances of the digitally reconstructed radiograph and X-ray images, and employs CNN regressors to directly estimate the transformation parameters. An automatic feature extraction step is introduced to calculate 3-D pose-indexed features that are sensitive to the variables to be regressed while robust to other factors. The CNN regressors are then trained for local zones and applied in a hierarchical manner to break down the complex regression task into multiple simpler sub-tasks that can be learned separately. Weight sharing is furthermore employed in the CNN regression model to reduce the memory footprint. The proposed approach has been quantitatively evaluated on 3 potential clinical applications, demonstrating its significant advantage in providing highly accurate real-time 2-D/3-D registration with a significantly enlarged capture range when compared to intensity-based methods."
},
{
"pmid": "25719670",
"title": "Human-level control through deep reinforcement learning.",
"abstract": "The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks."
},
{
"pmid": "10576479",
"title": "Event-related EEG/MEG synchronization and desynchronization: basic principles.",
"abstract": "An internally or externally paced event results not only in the generation of an event-related potential (ERP) but also in a change in the ongoing EEG/MEG in form of an event-related desynchronization (ERD) or event-related synchronization (ERS). The ERP on the one side and the ERD/ERS on the other side are different responses of neuronal structures in the brain. While the former is phase-locked, the latter is not phase-locked to the event. The most important difference between both phenomena is that the ERD/ERS is highly frequency band-specific, whereby either the same or different locations on the scalp can display ERD and ERS simultaneously. Quantification of ERD/ERS in time and space is demonstrated on data from a number of movement experiments."
},
{
"pmid": "28186883",
"title": "Detection and Localization of Robotic Tools in Robot-Assisted Surgery Videos Using Deep Neural Networks for Region Proposal and Detection.",
"abstract": "Video understanding of robot-assisted surgery (RAS) videos is an active research area. Modeling the gestures and skill level of surgeons presents an interesting problem. The insights drawn may be applied in effective skill acquisition, objective skill assessment, real-time feedback, and human-robot collaborative surgeries. We propose a solution to the tool detection and localization open problem in RAS video understanding, using a strictly computer vision approach and the recent advances of deep learning. We propose an architecture using multimodal convolutional neural networks for fast detection and localization of tools in RAS videos. To the best of our knowledge, this approach will be the first to incorporate deep neural networks for tool detection and localization in RAS videos. Our architecture applies a region proposal network (RPN) and a multimodal two stream convolutional network for object detection to jointly predict objectness and localization on a fusion of image and temporal motion cues. Our results with an average precision of 91% and a mean computation time of 0.1 s per test frame detection indicate that our study is superior to conventionally used methods for medical imaging while also emphasizing the benefits of using RPN for precision and efficiency. We also introduce a new data set, ATLAS Dione, for RAS video understanding. Our data set provides video data of ten surgeons from Roswell Park Cancer Institute, Buffalo, NY, USA, performing six different surgical tasks on the daVinci Surgical System (dVSS) with annotations of robotic tools per frame."
},
{
"pmid": "15188875",
"title": "BCI2000: a general-purpose brain-computer interface (BCI) system.",
"abstract": "Many laboratories have begun to develop brain-computer interface (BCI) systems that provide communication and control capabilities to people with severe motor disabilities. Further progress and realization of practical applications depends on systematic evaluations and comparisons of different brain signals, recording methods, processing algorithms, output formats, and operating protocols. However, the typical BCI system is designed specifically for one particular BCI method and is, therefore, not suited to the systematic studies that are essential for continued progress. In response to this problem, we have developed a documented general-purpose BCI research and development platform called BCI2000. BCI2000 can incorporate alone or in combination any brain signals, signal processing methods, output devices, and operating protocols. This report is intended to describe to investigators, biomedical engineers, and computer scientists the concepts that the BC12000 system is based upon and gives examples of successful BCI implementations using this system. To date, we have used BCI2000 to create BCI systems for a variety of brain signals, processing methods, and applications. The data show that these systems function well in online operation and that BCI2000 satisfies the stringent real-time requirements of BCI systems. By substantially reducing labor and cost, BCI2000 facilitates the implementation of different BCI systems and other psychophysiological experiments. It is available with full documentation and free of charge for research or educational purposes and is currently being used in a variety of studies by many research groups."
},
{
"pmid": "28782865",
"title": "Deep learning with convolutional neural networks for EEG decoding and visualization.",
"abstract": "Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end-to-end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end-to-end EEG analysis, but a better understanding of how to design and train ConvNets for end-to-end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task-related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG-based brain mapping. Hum Brain Mapp 38:5391-5420, 2017. © 2017 Wiley Periodicals, Inc."
},
{
"pmid": "25859204",
"title": "Error-related potentials during continuous feedback: using EEG to detect errors of different type and severity.",
"abstract": "When a person recognizes an error during a task, an error-related potential (ErrP) can be measured as response. It has been shown that ErrPs can be automatically detected in tasks with time-discrete feedback, which is widely applied in the field of Brain-Computer Interfaces (BCIs) for error correction or adaptation. However, there are only a few studies that concentrate on ErrPs during continuous feedback. With this study, we wanted to answer three different questions: (i) Can ErrPs be measured in electroencephalography (EEG) recordings during a task with continuous cursor control? (ii) Can ErrPs be classified using machine learning methods and is it possible to discriminate errors of different origins? (iii) Can we use EEG to detect the severity of an error? To answer these questions, we recorded EEG data from 10 subjects during a video game task and investigated two different types of error (execution error, due to inaccurate feedback; outcome error, due to not achieving the goal of an action). We analyzed the recorded data to show that during the same task, different kinds of error produce different ErrP waveforms and have a different spectral response. This allows us to detect and discriminate errors of different origin in an event-locked manner. By utilizing the error-related spectral response, we show that also a continuous, asynchronous detection of errors is possible. Although the detection of error severity based on EEG was one goal of this study, we did not find any significant influence of the severity on the EEG."
},
{
"pmid": "29471099",
"title": "The dynamics of error processing in the human brain as reflected by high-gamma activity in noninvasive and intracranial EEG.",
"abstract": "Error detection in motor behavior is a fundamental cognitive function heavily relying on local cortical information processing. Neural activity in the high-gamma frequency band (HGB) closely reflects such local cortical processing, but little is known about its role in error processing, particularly in the healthy human brain. Here we characterize the error-related response of the human brain based on data obtained with noninvasive EEG optimized for HGB mapping in 31 healthy subjects (15 females, 16 males), and additional intracranial EEG data from 9 epilepsy patients (4 females, 5 males). Our findings reveal a multiscale picture of the global and local dynamics of error-related HGB activity in the human brain. On the global level as reflected in the noninvasive EEG, the error-related response started with an early component dominated by anterior brain regions, followed by a shift to parietal regions, and a subsequent phase characterized by sustained parietal HGB activity. This phase lasted for more than 1 s after the error onset. On the local level reflected in the intracranial EEG, a cascade of both transient and sustained error-related responses involved an even more extended network, spanning beyond frontal and parietal regions to the insula and the hippocampus. HGB mapping appeared especially well suited to investigate late, sustained components of the error response, possibly linked to downstream functional stages such as error-related learning and behavioral adaptation. Our findings establish the basic spatio-temporal properties of HGB activity as a neural correlate of error processing, complementing traditional error-related potential studies."
},
{
"pmid": "19665554",
"title": "A review on directional information in neural signals for brain-machine interfaces.",
"abstract": "Brain-machine interfaces (BMIs) can be characterized by the technique used to measure brain activity and by the way different brain signals are translated into commands that control an effector. We give an overview of different approaches and focus on a particular BMI approach: the movement of an artificial effector (e.g. arm prosthesis to the right) by those motor cortical signals that control the equivalent movement of a corresponding body part (e.g. arm movement to the right). This approach has been successfully applied in monkeys and humans by accurately extracting parameters of movements from the spiking activity of multiple single-units. Here, we review recent findings showing that analog neuronal population signals, ranging from intracortical local field potentials over epicortical ECoG to non-invasive EEG and MEG, can also be used to decode movement direction and continuous movement trajectories. Therefore, these signals might provide additional or alternative control for this BMI approach, with possible advantages due to reduced invasiveness."
}
] |
Scientific Reports | 31619717 | PMC6795807 | 10.1038/s41598-019-51284-9 | Scalable Genome Assembly through Parallel de Bruijn Graph Construction for Multiple k-mers | Remarkable advancements in high-throughput gene sequencing technologies have led to an exponential growth in the number of sequenced genomes. However, unavailability of highly parallel and scalable de novo assembly algorithms have hindered biologists attempting to swiftly assemble high-quality complex genomes. Popular de Bruijn graph assemblers, such as IDBA-UD, generate high-quality assemblies by iterating over a set of k-values used in the construction of de Bruijn graphs (DBG). However, this process of sequentially iterating from small to large k-values slows down the process of assembly. In this paper, we propose ScalaDBG, which metamorphoses this sequential process, building DBGs for each distinct k-value in parallel. We develop an innovative mechanism to “patch” a higher k-valued graph with contigs generated from a lower k-valued graph. Moreover, ScalaDBG leverages multi-level parallelism, by both scaling up on all cores of a node, and scaling out to multiple nodes simultaneously. We demonstrate that ScalaDBG completes assembling the genome faster than IDBA-UD, but with similar accuracy on a variety of datasets (6.8X faster for one of the most complex genome in our dataset). | Related Work
Several effective de-novo assembly applications have been put forward to deal with the deluge in genomic sequences2–4,6–9,16–21. However, these applications are restricted to scaling up on a multi-core machine, or do not use several k-values during assembly. To the best of our knowledge, there has been no prior work on distributed and parallelized DBG construction for multiple k-values. In previous work, Ray16, ABySS3, PASHA17, and HipMer18 can distribute the task of DBG construction to different nodes in a cluster. Metagenomics assemblers, such as Meta-velvet19, also do not apply multiple k-values. However, this approach performs poorly for datasets with uneven sequencing depths, such as in metagenomics and single-cell datasets. ScalaDBG employs multiple k-values to deal with such datasets. On the other hand, SGA21, Velvet2, SOAPdenovo8, and ALLPATHS-LG4 are limited to scaling up on a multi-core node. Additionally, while IDBA, IDBA-UD, and SPAdes can operate on several k-values, their scaling is restricted to multiple cores on a single node. In contrast, ScalaDBG is a distributed and parallelized assembler operating on multiple k-values.
IDBA-UD as our algorithm for benchmarking
IDBA-UD is an iterative k-value DBG-based assembler that runs through a range of k values from $k = k_{min}$ to $k = k_{max}$, with a step-wise increment of s. It maintains an accumulated DBG $H_{k}$ at each step. In the first step, a DBG $G_{k_{min}}$ is generated from the input reads. For $k = k_{min}$, $H_{k}$ is equivalent to $G_{k_{min}}$. After DBG construction, contigs for graph $H_{k}$ are generated by considering all maximal paths in graph $H_{k}$. All vertices in any maximal path have an in-degree and out-degree equal to 1, except the vertices at the start and end of the path. Subsequently, reads from the input set that are substrings of these contigs are removed. This generally reduces the size of the input read set. Note that a read of length r generates $r - k + 1$ vertices. Thus, as k is increased, each read introduces fewer vertices. This reduction in size of the input read set, coupled with the fact that there are fewer vertices for larger k-values, makes subsequent graph-construction steps less time consuming. For the next iteration, where $k = k_{min} + s$, the graph $H_{k}$, the remaining reads, and the contigs from $H_{k}$ are fed as inputs. Every s-length path in $H_{k}$ is upgraded to a vertex. A $(k+s+1)$-mer in either the remaining reads or the contigs of $H_{k}$ is used to connect vertices in $H_{k}$. The next set of iterations continue this process until $k = k_{max}$ is reached. Observe that in this algorithm, at iteration i, graph $H_{k_{min}+i\ast s}$ depends on graph $H_{k_{min}+(i-1)\ast s}$, the reduced read set, and the contigs obtained at the previous iteration (i − 1). This dependency compels IDBA-UD to work sequentially on the chain of k-values, irrespective of the length of the chain. This is the essence of the problem that we tackle in ScalaDBG. | [
"26151137",
"18349386",
"19251739",
"22495754",
"23587118",
"22506599",
"20958248",
"21867511",
"21685107",
"22156294",
"23422339"
] | [
{
"pmid": "26151137",
"title": "Big Data: Astronomical or Genomical?",
"abstract": "Genomics is a Big Data science and is going to get much bigger, very soon, but it is not known whether the needs of genomics will exceed other Big Data domains. Projecting to the year 2025, we compared genomics with three other major generators of Big Data: astronomy, YouTube, and Twitter. Our estimates show that genomics is a \"four-headed beast\"--it is either on par with or the most demanding of the domains analyzed here in terms of data acquisition, storage, distribution, and analysis. We discuss aspects of new technologies that will need to be developed to rise up and meet the computational challenges that genomics poses for the near future. Now is the time for concerted, community-wide planning for the \"genomical\" challenges of the next decade."
},
{
"pmid": "18349386",
"title": "Velvet: algorithms for de novo short read assembly using de Bruijn graphs.",
"abstract": "We have developed a new set of algorithms, collectively called \"Velvet,\" to manipulate de Bruijn graphs for genomic sequence assembly. A de Bruijn graph is a compact representation based on short words (k-mers) that is ideal for high coverage, very short read (25-50 bp) data sets. Applying Velvet to very short reads and paired-ends information only, one can produce contigs of significant length, up to 50-kb N50 length in simulations of prokaryotic data and 3-kb N50 on simulated mammalian BACs. When applied to real Solexa data sets without read pairs, Velvet generated contigs of approximately 8 kb in a prokaryote and 2 kb in a mammalian BAC, in close agreement with our simulated results without read-pair information. Velvet represents a new approach to assembly that can leverage very short reads in combination with read pairs to produce useful assemblies."
},
{
"pmid": "19251739",
"title": "ABySS: a parallel assembler for short read sequence data.",
"abstract": "Widespread adoption of massively parallel deoxyribonucleic acid (DNA) sequencing instruments has prompted the recent development of de novo short read assembly algorithms. A common shortcoming of the available tools is their inability to efficiently assemble vast amounts of data generated from large-scale sequencing projects, such as the sequencing of individual human genomes to catalog natural genetic variation. To address this limitation, we developed ABySS (Assembly By Short Sequences), a parallelized sequence assembler. As a demonstration of the capability of our software, we assembled 3.5 billion paired-end reads from the genome of an African male publicly released by Illumina, Inc. Approximately 2.76 million contigs > or =100 base pairs (bp) in length were created with an N50 size of 1499 bp, representing 68% of the reference human genome. Analysis of these contigs identified polymorphic and novel sequences not present in the human reference assembly, which were validated by alignment to alternate human assemblies and to other primate genomes."
},
{
"pmid": "22495754",
"title": "IDBA-UD: a de novo assembler for single-cell and metagenomic sequencing data with highly uneven depth.",
"abstract": "MOTIVATION\nNext-generation sequencing allows us to sequence reads from a microbial environment using single-cell sequencing or metagenomic sequencing technologies. However, both technologies suffer from the problem that sequencing depth of different regions of a genome or genomes from different species are highly uneven. Most existing genome assemblers usually have an assumption that sequencing depths are even. These assemblers fail to construct correct long contigs.\n\n\nRESULTS\nWe introduce the IDBA-UD algorithm that is based on the de Bruijn graph approach for assembling reads from single-cell sequencing or metagenomic sequencing technologies with uneven sequencing depths. Several non-trivial techniques have been employed to tackle the problems. Instead of using a simple threshold, we use multiple depthrelative thresholds to remove erroneous k-mers in both low-depth and high-depth regions. The technique of local assembly with paired-end information is used to solve the branch problem of low-depth short repeat regions. To speed up the process, an error correction step is conducted to correct reads of high-depth regions that can be aligned to highconfident contigs. Comparison of the performances of IDBA-UD and existing assemblers (Velvet, Velvet-SC, SOAPdenovo and Meta-IDBA) for different datasets, shows that IDBA-UD can reconstruct longer contigs with higher accuracy.\n\n\nAVAILABILITY\nThe IDBA-UD toolkit is available at our website http://www.cs.hku.hk/~alse/idba_ud"
},
{
"pmid": "23587118",
"title": "SOAPdenovo2: an empirically improved memory-efficient short-read de novo assembler.",
"abstract": "BACKGROUND\nThere is a rapidly increasing amount of de novo genome assembly using next-generation sequencing (NGS) short reads; however, several big challenges remain to be overcome in order for this to be efficient and accurate. SOAPdenovo has been successfully applied to assemble many published genomes, but it still needs improvement in continuity, accuracy and coverage, especially in repeat regions.\n\n\nFINDINGS\nTo overcome these challenges, we have developed its successor, SOAPdenovo2, which has the advantage of a new algorithm design that reduces memory consumption in graph construction, resolves more repeat regions in contig assembly, increases coverage and length in scaffold construction, improves gap closing, and optimizes for large genome.\n\n\nCONCLUSIONS\nBenchmark using the Assemblathon1 and GAGE datasets showed that SOAPdenovo2 greatly surpasses its predecessor SOAPdenovo and is competitive to other assemblers on both assembly length and accuracy. We also provide an updated assembly version of the 2008 Asian (YH) genome using SOAPdenovo2. Here, the contig and scaffold N50 of the YH genome were ~20.9 kbp and ~22 Mbp, respectively, which is 3-fold and 50-fold longer than the first published version. The genome coverage increased from 81.16% to 93.91%, and memory consumption was ~2/3 lower during the point of largest memory consumption."
},
{
"pmid": "22506599",
"title": "SPAdes: a new genome assembly algorithm and its applications to single-cell sequencing.",
"abstract": "The lion's share of bacteria in various environments cannot be cloned in the laboratory and thus cannot be sequenced using existing technologies. A major goal of single-cell genomics is to complement gene-centric metagenomic data with whole-genome assemblies of uncultivated organisms. Assembly of single-cell data is challenging because of highly non-uniform read coverage as well as elevated levels of sequencing errors and chimeric reads. We describe SPAdes, a new assembler for both single-cell and standard (multicell) assembly, and demonstrate that it improves on the recently released E+V-SC assembler (specialized for single-cell data) and on popular assemblers Velvet and SoapDeNovo (for multicell data). SPAdes generates single-cell assemblies, providing information about genomes of uncultivatable bacteria that vastly exceeds what may be obtained via traditional metagenomics studies. SPAdes is available online ( http://bioinf.spbau.ru/spades ). It is distributed as open source software."
},
{
"pmid": "20958248",
"title": "Ray: simultaneous assembly of reads from a mix of high-throughput sequencing technologies.",
"abstract": "An accurate genome sequence of a desired species is now a pre-requisite for genome research. An important step in obtaining a high-quality genome sequence is to correctly assemble short reads into longer sequences accurately representing contiguous genomic regions. Current sequencing technologies continue to offer increases in throughput, and corresponding reductions in cost and time. Unfortunately, the benefit of obtaining a large number of reads is complicated by sequencing errors, with different biases being observed with each platform. Although software are available to assemble reads for each individual system, no procedure has been proposed for high-quality simultaneous assembly based on reads from a mix of different technologies. In this paper, we describe a parallel short-read assembler, called Ray, which has been developed to assemble reads obtained from a combination of sequencing platforms. We compared its performance to other assemblers on simulated and real datasets. We used a combination of Roche/454 and Illumina reads to assemble three different genomes. We showed that mixing sequencing technologies systematically reduces the number of contigs and the number of errors. Because of its open nature, this new tool will hopefully serve as a basis to develop an assembler that can be of universal utilization (availability: http://deNovoAssembler.sf.Net/). For online Supplementary Material , see www.liebertonline.com."
},
{
"pmid": "21867511",
"title": "Parallelized short read assembly of large genomes using de Bruijn graphs.",
"abstract": "BACKGROUND\nNext-generation sequencing technologies have given rise to the explosive increase in DNA sequencing throughput, and have promoted the recent development of de novo short read assemblers. However, existing assemblers require high execution times and a large amount of compute resources to assemble large genomes from quantities of short reads.\n\n\nRESULTS\nWe present PASHA, a parallelized short read assembler using de Bruijn graphs, which takes advantage of hybrid computing architectures consisting of both shared-memory multi-core CPUs and distributed-memory compute clusters to gain efficiency and scalability. Evaluation using three small-scale real paired-end datasets shows that PASHA is able to produce more contiguous high-quality assemblies in shorter time compared to three leading assemblers: Velvet, ABySS and SOAPdenovo. PASHA's scalability for large genome datasets is demonstrated with human genome assembly. Compared to ABySS, PASHA achieves competitive assembly quality with faster execution speed on the same compute resources, yielding an NG50 contig size of 503 with the longest correct contig size of 18,252, and an NG50 scaffold size of 2,294. Moreover, the human assembly is completed in about 21 hours with only modest compute resources.\n\n\nCONCLUSIONS\nDeveloping parallel assemblers for large genomes has been garnering significant research efforts due to the explosive size growth of high-throughput short read datasets. By employing hybrid parallelism consisting of multi-threading on multi-core CPUs and message passing on compute clusters, PASHA is able to assemble the human genome with high quality and in reasonable time using modest compute resources."
},
{
"pmid": "21685107",
"title": "Meta-IDBA: a de Novo assembler for metagenomic data.",
"abstract": "MOTIVATION\nNext-generation sequencing techniques allow us to generate reads from a microbial environment in order to analyze the microbial community. However, assembling of a set of mixed reads from different species to form contigs is a bottleneck of metagenomic research. Although there are many assemblers for assembling reads from a single genome, there are no assemblers for assembling reads in metagenomic data without reference genome sequences. Moreover, the performances of these assemblers on metagenomic data are far from satisfactory, because of the existence of common regions in the genomes of subspecies and species, which make the assembly problem much more complicated.\n\n\nRESULTS\nWe introduce the Meta-IDBA algorithm for assembling reads in metagenomic data, which contain multiple genomes from different species. There are two core steps in Meta-IDBA. It first tries to partition the de Bruijn graph into isolated components of different species based on an important observation. Then, for each component, it captures the slight variants of the genomes of subspecies from the same species by multiple alignments and represents the genome of one species, using a consensus sequence. Comparison of the performances of Meta-IDBA and existing assemblers, such as Velvet and Abyss for different metagenomic datasets shows that Meta-IDBA can reconstruct longer contigs with similar accuracy.\n\n\nAVAILABILITY\nMeta-IDBA toolkit is available at our website http://www.cs.hku.hk/~alse/metaidba.\n\n\nCONTACT\[email protected]."
},
{
"pmid": "22156294",
"title": "Efficient de novo assembly of large genomes using compressed data structures.",
"abstract": "De novo genome sequence assembly is important both to generate new sequence assemblies for previously uncharacterized genomes and to identify the genome sequence of individuals in a reference-unbiased way. We present memory efficient data structures and algorithms for assembly using the FM-index derived from the compressed Burrows-Wheeler transform, and a new assembler based on these called SGA (String Graph Assembler). We describe algorithms to error-correct, assemble, and scaffold large sets of sequence data. SGA uses the overlap-based string graph model of assembly, unlike most de novo assemblers that rely on de Bruijn graphs, and is simply parallelizable. We demonstrate the error correction and assembly performance of SGA on 1.2 billion sequence reads from a human genome, which we are able to assemble using 54 GB of memory. The resulting contigs are highly accurate and contiguous, while covering 95% of the reference genome (excluding contigs <200 bp in length). Because of the low memory requirements and parallelization without requiring inter-process communication, SGA provides the first practical assembler to our knowledge for a mammalian-sized genome on a low-end computing cluster."
},
{
"pmid": "23422339",
"title": "QUAST: quality assessment tool for genome assemblies.",
"abstract": "SUMMARY\nLimitations of genome sequencing techniques have led to dozens of assembly algorithms, none of which is perfect. A number of methods for comparing assemblers have been developed, but none is yet a recognized benchmark. Further, most existing methods for comparing assemblies are only applicable to new assemblies of finished genomes; the problem of evaluating assemblies of previously unsequenced species has not been adequately considered. Here, we present QUAST-a quality assessment tool for evaluating and comparing genome assemblies. This tool improves on leading assembly comparison software with new ideas and quality metrics. QUAST can evaluate assemblies both with a reference genome, as well as without a reference. QUAST produces many reports, summary tables and plots to help scientists in their research and in their publications. In this study, we used QUAST to compare several genome assemblers on three datasets. QUAST tables and plots for all of them are available in the Supplementary Material, and interactive versions of these reports are on the QUAST website.\n\n\nAVAILABILITY\nhttp://bioinf.spbau.ru/quast .\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."
}
] |
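The ScalaDBG record above describes IDBA-UD's small-to-large k iteration in prose; the Python sketch below restates that loop so the data dependency between consecutive k-values is explicit. It is a minimal illustration under simplifying assumptions (string k-mers, naive contig extraction, no error correction), not the IDBA-UD or ScalaDBG implementation, and the names build_dbg, maximal_path_contigs and iterative_assembly are invented for the example.

```python
from collections import defaultdict


def build_dbg(sequences, k):
    """de Bruijn graph: nodes are (k-1)-mers, each k-mer adds an edge prefix -> suffix."""
    graph = defaultdict(set)
    for seq in sequences:
        for i in range(len(seq) - k + 1):
            kmer = seq[i:i + k]
            graph[kmer[:-1]].add(kmer[1:])
    return graph


def maximal_path_contigs(graph):
    """Spell out contigs along maximal paths whose internal vertices have
    in-degree and out-degree exactly 1."""
    indeg = defaultdict(int)
    for src in list(graph):
        for dst in graph[src]:
            indeg[dst] += 1
    contigs = []
    for node in list(graph):
        if indeg[node] == 1 and len(graph[node]) == 1:
            continue  # interior vertex: covered when its path's start is reached
        for nxt in list(graph[node]):
            contig, cur, seen = node + nxt[-1], nxt, {node, nxt}
            while len(graph[cur]) == 1 and indeg[cur] == 1:
                step = next(iter(graph[cur]))
                if step in seen:  # guard against walking around a simple cycle
                    break
                contig += step[-1]
                cur = step
                seen.add(cur)
            contigs.append(contig)
    return contigs


def iterative_assembly(reads, k_min, k_max, step):
    """Sequential small-to-large k loop in the spirit of the description above."""
    reads, contigs = list(reads), []
    for k in range(k_min, k_max + 1, step):
        graph = build_dbg(reads + contigs, k)  # contigs of round i-1 are reused here
        contigs = maximal_path_contigs(graph)
        # reads fully contained in a contig carry no extra information at larger k
        reads = [r for r in reads if not any(r in c for c in contigs)]
    return contigs


if __name__ == "__main__":
    demo_reads = ["ACGTACGTGG", "GTACGTGGAT", "CGTGGATCCA"]
    print(iterative_assembly(demo_reads, k_min=4, k_max=6, step=2))
```

The reuse of the previous round's contigs when building the graph for the next k is exactly the dependency that forces sequential execution in IDBA-UD and that ScalaDBG's graph-patching mechanism is designed to remove.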
Journal of Big Data | 31700766 | PMC6803594 | 10.1186/s40537-019-0256-6 | HaRD: a heterogeneity-aware replica deletion for HDFS | The Hadoop distributed file system (HDFS) is responsible for storing very large data-sets reliably on clusters of commodity machines. The HDFS takes advantage of replication to serve data requested by clients with high throughput. Data replication is a trade-off between better data availability and higher disk usage. Recent studies propose different data replication management frameworks that alter the replication factor of files dynamically in response to the popularity of the data, keeping more replicas for in-demand data to enhance the overall performance of the system. When data gets less popular, these schemes reduce the replication factor, which changes the data distribution and leads to unbalanced data distribution. Such an unbalanced data distribution causes hot spots, low data locality and excessive network usage in the cluster. In this work, we first confirm that reducing the replication factor causes unbalanced data distribution when using Hadoop’s default replica deletion scheme. Then, we show that even keeping a balanced data distribution using WBRD (data-distribution-aware replica deletion scheme) that we proposed in previous work performs sub-optimally on heterogeneous clusters. In order to overcome this issue, we propose a heterogeneity-aware replica deletion scheme (HaRD). HaRD considers the nodes’ processing capabilities when deleting replicas; hence it stores more replicas on the more powerful nodes. We implemented HaRD on top of HDFS and conducted a performance evaluation on a 23-node dedicated heterogeneous cluster. Our results show that HaRD reduced execution time by up to 60%, and 17% when compared to Hadoop and WBRD, respectively. | Related workEven though large-scale Hadoop clusters can store a tremendous amount of data, the demand for each stored data-set is not the same. Moreover, the data-set demand changes over time. Hence, several studies have been conducted to understand the workload of Hadoop clusters [11, 21]. Ananthanarayanan et al. [11] underlined that 12% of the most popular files are more in demand and received ten times more requests than the bottom third of the data (based on the analysis they have accomplished from logs of Bing production clusters). Another study [21] was conducted by analysing three different workload traces (i.e., OpenCloud, M45, WebMining) with various cluster sizes (from 9 nodes to 400 nodes). The authors [21] draw attention to load balancing problems in the Hadoop cluster. Furthermore, the same study showed that despite the data distribution being well-balanced, the task distribution remains unbalanced. Consequently, an unbalanced cluster leads to poor data locality and performance degradation for the cluster.Data replication is a prominent method to improve fault-tolerance and load-balancing [9, 15, 16]. However, increasing the number of copies stored in the cluster comes with the price of extra storage. Considering the fact that not all data-sets have the same demand, there is no one-size-fits-all solution for the replication factor. Therefore, various approaches have been proposed in the literature for adapting the replication factor according to the access pattern of data-sets [11–14, 22]. All of these strategies alter the replication factor either proactively [11] or dynamically [12–14, 22] based on the ‘hotness’ of the data. Wei et al. 
[12] propose a cost-effective dynamic replication management scheme for the large-scale cloud storage system (CDRM). With the intention of developing such a system, the authors built a model between data availability and replication factor. Ananthanarayanan et al. [11] present Scarlett for adapting the replication factor by calculating a storage budget. Abad et al. [13] propose an adaptive data replication for efficient cluster scheduling (DARE). DARE aims to identify the replication factor dynamically based on probabilistic sampling techniques. Cheng et al. [14] introduce an active/standby storage model and propose an elastic replication management system (ERMS) based on the model. ERMS places new replicas of in-demand data to active nodes in order to increase data availability. Lin et al. [22] approach the problem of adapting the replication factor from an energy-efficiency perspective and propose an energy-efficient adaptive file replication system (EAFR). EAFR places ‘cold’ files into ‘cold’ servers to reach energy efficiency.In addition to adapting the replication factor, the placement of blocks is another factor to achieving good load-balancing. Eltabakh et al. [15] propose CoHadoop to co-locate related files based on the information gathered from the application level. CoHadoop leverages data pre-partitioning against expensive shuffles. Xie et al. [23] and Lea et al. [24] propose placing blocks based on the computing ratio of each node. Liao et al. [25] describe a new approach to the block placement problem based on block access frequency. The authors investigated the history of block access sequences and used the k-partition algorithm to separate blocks into different groups according to their access load. Moreover, the placement in hybrid storage systems [26, 27] and smart caching approaches for remote data accesses [28] is also proposed in the literature. There is a considerable amount of research about the block placement because the block placement is decisive for the system performance. However, the connection between replica management systems and the block placement is missing. For instance, which replica should be deleted when the framework decides to reduce the replication factor? One simple approach would be to use HDFS’s deletion algorithm.But altering the replication factor changes the block density on each node. The framework that adapts the replication factor should also be aware of how the replicas are distributed. Otherwise, the cluster ends up with unbalanced data distribution and consequently unbalanced load distribution. In our previous work [17], we identified that decreasing the replication factor leads to data unbalancing in HDFS and we proposed Workload-aware Balanced Replica Deletion (WBRD) to balance the data-set distribution among the nodes. As a result, WBRD achieves up to 48% improvement in execution time on average. But, WBRD does not fully exploit different nodes’ processing capability as it is designed for homogeneous clusters. One approach to determine nodes’ processing capability is to measure computing ratios for each different application on each node [23, 24]. However, as the workload of the cluster is highly dynamic and contains multiple ad-hoc queries, we prefer to use a more flexible and cost-effective approach. Therefore, instead of following previous approaches, the present work employs a novel cost-effective container-based approach. | [] | [] |
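The HaRD record above argues that, on a heterogeneous cluster, more replicas should end up on nodes with higher processing capability. As a rough, hypothetical illustration of that weighting idea only (not HaRD's actual deletion algorithm, whose capability estimate is container-based), the snippet below splits a file's replica count across nodes in proportion to a per-node computing ratio; the function name and the largest-remainder rounding are assumptions made for the example.

```python
def allocate_replicas(total_replicas, computing_ratio):
    """Distribute replicas across nodes in proportion to each node's computing ratio.

    computing_ratio maps node -> relative processing capability. Largest-remainder
    rounding keeps the per-node counts summing to total_replicas.
    """
    total = sum(computing_ratio.values())
    exact = {n: total_replicas * r / total for n, r in computing_ratio.items()}
    counts = {n: int(x) for n, x in exact.items()}
    leftover = total_replicas - sum(counts.values())
    # hand the remaining replicas to the nodes with the largest fractional parts
    for n in sorted(exact, key=lambda n: exact[n] - counts[n], reverse=True)[:leftover]:
        counts[n] += 1
    return counts


# heterogeneous 3-node cluster: the more capable node holds more replicas
print(allocate_replicas(5, {"fast": 2, "slow1": 1, "slow2": 1}))
# -> {'fast': 3, 'slow1': 1, 'slow2': 1}
```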
Scientific Reports | 31636338 | PMC6803717 | 10.1038/s41598-019-51545-7 | New Method for Evaluating Surface Roughness Parameters Acquired by Laser Scanning | Quality evaluation of a material’s surface is performed through roughness analysis of surface samples. Several techniques have been presented to achieve this goal, including geometrical analysis and surface roughness analysis. Geometric analysis allows a visual and subjective evaluation of roughness (a qualitative assessment), whereas computation of the roughness parameters is a quantitative assessment and allows a standardized analysis of the surfaces. In civil engineering, the process is performed with mechanical profilometer equipment (2D) without adequate accuracy and laser profilometer (3D) with no consensus on how to interpret the result quantitatively. This work proposes a new method to evaluate surface roughness, starting from the generation of a visual surface roughness signature, which is calculated through the roughness parameters computed in hierarchically organized regions. The evaluation tools presented in this new method provide a local and more accurate evaluation of the computed coefficients. In the tests performed it was possible to quantitatively analyze roughness differences between ceramic blocks and to find that a quantitative microscale analysis allows to identify the largest variation of roughness parameters Raavg, Rasdv, Ramin and Ramax between samples, which benefit the evaluation and comparison of the sampled surfaces. | Related WorksTo compute roughness coefficients or parameters to evaluate a surface, it is necessary to obtain the data that form the sampled surface. Computer hardware and systems are used for the computation of surface roughness. An efficient way of obtaining surface information is through laser scanning equipment15,19. In this technique, a ray emitted by the equipment hits the target, and its reflection is read by the equipment to measure the position and depth of the point where the ray collided with the target. In some equipment, the color associated with the hit point is returned. The result of this sampling is a point cloud. From the point cloud, the geometry and measurements are computed that relate the points with the fitting plane of the surface by a mean least squares method.The analysis of the roughness and salience of the sampled surfaces are used to evaluate their quality. Some works16–18,20–23 have performed the visual evaluation of surfaces based on geometric analysis to determine the roughness and salience measurements. In this type of evaluation, in general, the surface is reconstructed from a point cloud, generating a polygonal mesh. The surface reconstruction from sampled points is a well-studied problem in computer graphics15. These approaches16–18,22,23 can obtain good results for geometric surface reconstruction and for the qualitative evaluation of surfaces.The approaches used are triangularization and volumetric methods. In triangularization as presented in24–26, the algorithms search for neighboring points in a certain direction to form triangles and, from the set of triangles, obtain a polygonal mesh. In24 authors define Delaunay based mesh triangulation as the geometric dual of the Voronoi diagram, so from the Voronoi diagram, sites are defined as vertices of triangles and neighboring cells are connected to form triangles. 
In25, the authors use a triangulation approach that inserts an energy term into the Delaunay tetrahedra problem, ensuring greater robustness of the method against mesh noise. Wang et al.26 work on an unoriented point cloud using Delaunay tetrahedra and obtain better results in smooth surface reconstruction. After obtaining triangles via 3D Delaunay triangulation, a good initial triangle is considered to be the seed of the mesh, and other appropriate triangles are connected to its front edges, i.e., those edges that are not yet connected to any other triangle. The initial triangle is the one that forms as flat a surface as possible with its adjacent triangles. The mesh then grows iteratively over all front edges until there are no more suitable candidate triangles. Suitable triangles are those whose edges close with the current triangle and its neighbors on the front edges, with an angle smaller than a threshold parameter. These methods generally reconstruct smooth surfaces and either incorporate roughness as mesh relief (rather than treating it as non-mesh points) or remove it as noise from the points.
In addition, other problems are related to the evaluation of geometric surfaces, mainly because they are polygonal approximations. These methods are suitable for viewing and not for a proper roughness measurement. Recent works6,11,30 focus mainly on the quantitative assessment of surface roughness (the so-called roughness parameters). From the computation of these parameters, it is possible to standardize the evaluation of the sampled material surfaces. These measurements are described in the literature6,30–33 and are used to measure the level of adherence and quality of the material surfaces according to their roughness. The main roughness parameters reported in6,11,30 are the average roughness (Ra) and the root-mean-square roughness (Rq). These measures evaluate the average standard deviation of the heights (valleys and peaks) in a surface profile to compute the degree of roughness. However, for the computation of these parameters, it is first necessary to compute the fitting plane for the points acquired from the surface. From the plane coefficients, it is possible to determine the height of a peak or valley by evaluating the height coordinate of each point of the cloud. The calculation of the plane is described in more detail in Section 4.1. The average roughness Ra, described in6,30, is given by:
$$R_{a} \approx \frac{1}{n}\sum_{i=1}^{n}|z_{i}|$$
where $z_i$ is the height coordinate of the current point. The root-mean-square roughness (Rq), also described in6,30, is defined as:
$$R_{q} \approx \sqrt{\frac{1}{n}\sum_{i=1}^{n}z_{i}^{2}}$$
Figure 1, presented in6, illustrates the behavior of the parameter in relation to a profile of a sampled surface.
Figure 1: Surface profile described in6,30, with peaks and valleys. (a) The value of the parameter Ra, and (b) dividing the surface into parts to compute Ra. Based on the images presented in6.
However, Santos et al.6 also point out that the Ra and Rq parameters do not provide any type of local surface evaluation. For a local measurement, other roughness parameters are used, based on the division of the surface profile into smaller parts and considering information on peaks and valleys separately. In this way, it is possible to analyze the roughness evaluation at a greater level of detail. These parameters are the mean peak height (Rpm), mean valley depth (Rvm), mean peak-to-valley height (Rz(DIN)), ten-point height or average of five peaks (Rz(ISO)), maximum peak height (Rp), maximum valley depth (Rv), maximum peak-to-valley height (Rmax), and total roughness height (Ry), which is the sum of the heights of the highest peak and the deepest valley. The main feature involved in the computation of these parameters is that they are obtained from samples/patches of a surface, providing a level of local control, because the maxima and minima of each part are considered. Figure 1(b), presented in6, shows the relation of the calculated parameters on peaks and valleys with samples (or patches) of the surface profile. Finally, despite the local control provided by dividing the profile into patches/samples, the authors of11 also indicate that multiresolution surface analysis yields the best results for roughness computation from geometry. Other works such as23,34,35 do not aim at quantitative roughness measurement, but use a roughness measurement as a subjective evaluation criterion of mesh reconstruction quality. In this work, a spatial division control is proposed that allows analyzing the sampled surface at hierarchical levels (described in Section 4.1.2). | [
"20018362"
] | [
{
"pmid": "20018362",
"title": "A review of adhesion science.",
"abstract": "OBJECTIVE\nAdhesion or cohesion includes an adherend, adhesive, and intervening interface. Adhesive joints may include one or more interfaces. Adhesion science focuses on understanding the materials properties associated with formation of the interfaces, changes in the interfaces with time, and events associated with failure of the interfaces.\n\n\nMETHODS\nThe key principles for good interface formation are creation of a clean surface, generation of a rough surface for interfacial interlocking, good wetting of the substratum by the adhesive/cohesive materials, adequate flow and adaptation for intimate interaction, and acceptable curing when phase changes are required for final joint formation.\n\n\nRESULTS\nMuch more effort is needed in the future to carefully assess each of these using available testing methods that attempt to characterize the energetics of the interfaces. Bonding involves potential contributions from physical, chemical, and mechanical sources but primarily relies on micro-mechanical interaction for success. Characterization of the interface before adhesion, during service, and after failure would be much more useful for future investigations and remains as a great challenge.\n\n\nSIGNIFICANCE\nScientists should more rigorously apply techniques such as comprehensive contact angle analysis (rather than simple water wettability) for surface energy determination, and AFM in addition to SEM for surface texture analysis."
}
] |
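The Ra and Rq definitions reconstructed in the record above are plain statistics over the height deviations z_i measured relative to the least-squares fitting plane. The snippet below is a minimal numerical check of those two formulas only (NumPy assumed); it is not the laser-scanning pipeline or the hierarchical multi-scale evaluation proposed in the paper.

```python
import numpy as np


def roughness_parameters(z):
    """Average roughness Ra and root-mean-square roughness Rq from height
    deviations z (distances of sampled points from the fitting plane)."""
    z = np.asarray(z, dtype=float)
    ra = np.mean(np.abs(z))        # Ra ≈ (1/n) Σ |z_i|
    rq = np.sqrt(np.mean(z ** 2))  # Rq ≈ sqrt((1/n) Σ z_i^2)
    return ra, rq


# example: height deviations (e.g. in µm) of five sampled points
heights = [0.4, -0.2, 0.1, -0.5, 0.3]
print(roughness_parameters(heights))  # Ra = 0.30, Rq = sqrt(0.11) ≈ 0.332
```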
BMC Medical Informatics and Decision Making | 31638963 | PMC6805472 | 10.1186/s12911-019-0917-6 | Methods for a similarity measure for clinical attributes based on survival data analysis | BackgroundCase-based reasoning is a proven method that relies on learned cases from the past for decision support of a new case. The accuracy of such a system depends on the applied similarity measure, which quantifies the similarity between two cases. This work proposes a collection of methods for similarity measures especially for comparison of clinical cases based on survival data, as they are available for example from clinical trials.MethodsOur approach is intended to be used in scenarios, where it is of interest to use longitudinal data, such as survival data, for a case-based reasoning approach. This might be especially important, where uncertainty about the ideal therapy decision exists. The collection of methods consists of definitions of the local similarity of nominal as well as numeric attributes, a calculation of attribute weights, a feature selection method and finally a global similarity measure. All of them use survival time (consisting of survival status and overall survival) as a reference of similarity. As a baseline, we calculate a survival function for each value of any given clinical attribute.ResultsWe define the similarity between values of the same attribute by putting the estimated survival functions in relation to each other. Finally, we quantify the similarity by determining the area between corresponding curves of survival functions. The proposed global similarity measure is designed especially for cases from randomized clinical trials or other collections of clinical data with survival information. Overall survival can be considered as an eligible and alternative solution for similarity calculations. It is especially useful, when similarity measures that depend on the classic solution-describing attribute “applied therapy” are not applicable. This is often the case for data from clinical trials containing randomized arms.ConclusionsIn silico evaluation scenarios showed that the mean accuracy of biomarker detection in k = 10 most similar cases is higher (0.909–0.998) than for competing similarity measures, such as Heterogeneous Euclidian-Overlap Metric (0.657–0.831) and Discretized Value Difference Metric (0.535–0.671). The weight calculation method showed a more than six times (6.59–6.95) higher weight for biomarker attributes over non-biomarker attributes. These results suggest that the similarity measure described here is suitable for applications based on survival data. | Related workIn the last years, many new approaches have been developed in the field of CBR and related topics such as similarity measures and information retrieval. For example, Goel and Diaz-Agudo provide a comprehensive overview on the development in the field [22]. Especially interesting examples are works on textual CBR and spatial CBR. Textual CBR is a subdomain of CBR where the knowledge source is available in textual form. In the clinical domain, this could be medical reports, like discharge or referral letters. In order to retrieve knowledge from unstructured text data, further techniques must be applied initially to transformation information into structured case representations [23]. A common way to achieve this is the textual analysis with methods from natural language processing [24]. 
An example for spatial CBR is Q-CBR (Qualitative Case-Based Reasoning) that has shown promising results using Qualitative Spatial Reasoning (QSR) theory for retrieval in the technical domain of robotics artificial intelligence [25]. Here, qualitative spatial relations between objects are assumed, aiming to model the human common sense understanding of space.Closely related to similarity measures, distance functions are often used to determine differences in an absolute vector space. So, instead of a similarity that usually has a value range of [0.0, 1.0], a distance function between two attributes may result in any decimal number. However, a conversion from a distance function to a similarity function is feasible in many cases. The by far most commonly used methods are the Euclidian Distance function and the Manhattan (city-block) function. Both are equivalent to the Minkowskian r-distance function [26] with r = 1 and 2, respectively, however, they do not handle non-numeric (nominal) attributes appropriately.The Heterogeneous Euclidian-Overlap Metric (HEOM) [27, 28] tackles this issue by a dedicated handling of nominal and continuous attributes. The overlap metric applies for nominal attributes and results in a distance of 1.0 for matching and 0.0 for not matching attributes, respectively. On the contrary, for linear attributes the numeric value difference of the attributes is normalized by dividing by the range of all possible values for that specific attribute a (rangea = maxa-mina). The normalization fails, however, if the value range is defined too tight. Also, the nominal value handling is not able to compute distances other than the extreme ones. Expert domain knowledge must be added to further differentiate such cases.The Value Difference Metric (VDM) [29] was initially introduced by Stanfill and Walz. In this approach the difference between two nominal values (of the same attribute) depends on the conditional probability that the output class is c, given that attribute a has the value x: P(c|xa). Wilson and Martinez [30] published an improved version of VDM that adds the ability to handle continuous attributes. This is done by transforming them into a fixed number of equally sized intervals that enables them to be treated in the same way as a nominal attribute (DVDM, short for Discretized VDM). The overall distance of two cases is then determined by the Euclidian Distance. The Interpolated and Windowed VDM (IVDM/WVDM) are furthermore smoothing the steps between probability input classes. The VDM‘s strength is the assignment of case bases with verified knowledge about the solution that is known to be the best available. However, it cannot learn local similarities when the solution attribute is numeric, like the overall survival. | [
"25769682",
"8004146",
"27297679",
"10206110",
"20947318",
"12349930",
"18252376",
"21683563",
"15193344",
"8057948",
"8790451",
"15694636",
"20971621",
"28214658",
"26457759",
"26500200",
"26121653",
"20336314",
"2778478"
] | [
{
"pmid": "25769682",
"title": "Case-based reasoning using electronic health records efficiently identifies eligible patients for clinical trials.",
"abstract": "OBJECTIVE\nTo develop a cost-effective, case-based reasoning framework for clinical research eligibility screening by only reusing the electronic health records (EHRs) of minimal enrolled participants to represent the target patient for each trial under consideration.\n\n\nMATERIALS AND METHODS\nThe EHR data--specifically diagnosis, medications, laboratory results, and clinical notes--of known clinical trial participants were aggregated to profile the \"target patient\" for a trial, which was used to discover new eligible patients for that trial. The EHR data of unseen patients were matched to this \"target patient\" to determine their relevance to the trial; the higher the relevance, the more likely the patient was eligible. Relevance scores were a weighted linear combination of cosine similarities computed over individual EHR data types. For evaluation, we identified 262 participants of 13 diversified clinical trials conducted at Columbia University as our gold standard. We ran a 2-fold cross validation with half of the participants used for training and the other half used for testing along with other 30 000 patients selected at random from our clinical database. We performed binary classification and ranking experiments.\n\n\nRESULTS\nThe overall area under the ROC curve for classification was 0.95, enabling the highlight of eligible patients with good precision. Ranking showed satisfactory results especially at the top of the recommended list, with each trial having at least one eligible patient in the top five positions.\n\n\nCONCLUSIONS\nThis relevance-based method can potentially be used to identify eligible patients for clinical trials by processing patient EHR data alone without parsing free-text eligibility criteria, and shows promise of efficient \"case-based reasoning\" modeled only on minimal trial participants."
},
{
"pmid": "8004146",
"title": "Integrating consultation and semi-automatic knowledge acquisition in a prototype-based architecture: experiences with dysmorphic syndromes.",
"abstract": "The paper describes an application of cognitive theories of Tversky and Rosch to prototype similarity of dysmorphic syndromes cases. The knowledge-based system supports diagnostic consultation and research in dysmorphic syndromes. It has been used routinely for many years. The knowledge base is semi-automatically generated from known cases of an outpatient clinic. Some results of the evaluation process of the system's achievements are shown. General conclusions based on the experience with this successful system are discussed."
},
{
"pmid": "27297679",
"title": "An ontology-driven, case-based clinical decision support model for removable partial denture design.",
"abstract": "We present the initial work toward developing a clinical decision support model for specific design of removable partial dentures (RPDs) in dentistry. We developed an ontological paradigm to represent knowledge of a patient's oral conditions and denture component parts. During the case-based reasoning process, a cosine similarity algorithm was applied to calculate similarity values between input patients and standard ontology cases. A group of designs from the most similar cases were output as the final results. To evaluate this model, the output designs of RPDs for 104 randomly selected patients were compared with those selected by professionals. An area under the curve of the receiver operating characteristic (AUC-ROC) was created by plotting true-positive rates against the false-positive rate at various threshold settings. The precision at position 5 of the retrieved cases was 0.67 and at the top of the curve it was 0.96, both of which are very high. The mean average of precision (MAP) was 0.61 and the normalized discounted cumulative gain (NDCG) was 0.74 both of which confirmed the efficient performance of our model. All the metrics demonstrated the efficiency of our model. This methodology merits further research development to match clinical applications for designing RPDs. This paper is organized as follows. After the introduction and description of the basis for the paper, the evaluation and results are presented in Section 2. Section 3 provides a discussion of the methodology and results. Section 4 describes the details of the ontology, similarity algorithm, and application."
},
{
"pmid": "10206110",
"title": "Case-based prediction in experimental medical studies.",
"abstract": "Case-based approaches predict the behaviour of dynamic systems by analysing a given experimental setting in the context of others. To select similar cases and to control adaptation of cases, they employ general knowledge. If that is neither available nor inductively derivable, the knowledge implicit in cases can be utilized for a case-based ranking and adaptation of similar cases. We introduce the system OASES and its application to medical experimental studies to demonstrate this approach."
},
{
"pmid": "20947318",
"title": "A multi-module case-based biofeedback system for stress treatment.",
"abstract": "OBJECTIVE\nBiofeedback is today a recognized treatment method for a number of physical and psychological problems. Experienced clinicians often achieve good results in these areas and their success largely builds on many years of experience and often thousands of treated patients. Unfortunately many of the areas where biofeedback is used are very complex, e.g. diagnosis and treatment of stress. Less experienced clinicians may even have difficulties to initially classify the patient correctly. Often there are only a few experts available to assist less experienced clinicians. To reduce this problem we propose a computer-assisted biofeedback system helping in classification, parameter setting and biofeedback training.\n\n\nMETHODS\nThe decision support system (DSS) analysis finger temperature in time series signal where the derivative of temperature in time is calculated to extract the features. The case-based reasoning (CBR) is used in three modules to classify a patient, estimate parameters and biofeedback. In each and every module the CBR approach retrieves most similar cases by comparing a new finger temperature measurement with previously solved measurements. Three different methods are used to calculate similarity between features, they are: modified distance function, similarity matrix and fuzzy similarity.\n\n\nRESULTS AND CONCLUSION\nWe explore how such a DSS can be designed and validated the approach in the area of stress where the system assists in the classification, parameter setting and finally in the training. In this case study we show that the case based biofeedback system outperforms trainee clinicians based on a case library of cases authorized by an expert."
},
{
"pmid": "12349930",
"title": "Development and evaluation of a case-based reasoning classifier for prediction of breast biopsy outcome with BI-RADS lexicon.",
"abstract": "Approximately 70-85% of breast biopsies are performed on benign lesions. To reduce this high number of biopsies performed on benign lesions, a case-based reasoning (CBR) classifier was developed to predict biopsy results from BI-RADS findings. We used 1433 (931 benign) biopsy-proven mammographic cases. CBR similarity was defined using either the Hamming or Euclidean distance measure over case features. Ten features represented each case: calcification distribution, calcification morphology, calcification number, mass margin, mass shape, mass density, mass size, associated findings, special cases, and age. Performance was evaluated using Round Robin sampling, Receiver Operating Characteristic (ROC) analysis, and bootstrap. To determine the most influential features for the CBR, an exhaustive feature search was performed over all possible feature combinations (1022) and similarity thresholds. Influential features were defined as the most frequently occurring features in the feature subsets with the highest partial ROC areas (0.90AUC). For CBR with Hamming distance, the most influential features were found to be mass margin, calcification morphology, age, calcification distribution, calcification number, and mass shape, resulting in an 0.90AUC of 0.33. At 95% sensitivity, the Hamming CBR would spare from biopsy 34% of the benign lesions. At 98% sensitivity, the Hamming CBR would spare 27% benign lesions. For the CBR with Euclidean distance, the most influential feature subset consisted of mass margin, calcification morphology, age, mass density, and associated findings, resulting in 0.90AUC of 0.37. At 95% sensitivity, the Euclidean CBR would spare from biopsy 41% benign lesions. At 98% sensitivity, the Euclidean CBR would spare 27% benign lesions. The profile of cases spared by both distance measures at 98% sensitivity indicates that the CBR is a potentially useful diagnostic tool for the classification of mammographic lesions, by recommending short-term follow-up for likely benign lesions that is in agreement with final biopsy results and mammographer's intuition."
},
{
"pmid": "18252376",
"title": "Discovering relevance knowledge in data: a growing cell structures approach.",
"abstract": "Both information retrieval and case-based reasoning systems rely on effective and efficient selection of relevant data. Typically, relevance in such systems is approximated by similarity or indexing models. However, the definition of what makes data items similar or how they should be indexed is often nontrivial and time-consuming. Based on growing cell structure artificial neural networks, this paper presents a method that automatically constructs a case retrieval model from existing data. Within the case-based reasoning (CBR) framework, the method is evaluated for two medical prognosis tasks, namely, colorectal cancer survival and coronary heart disease risk prognosis. The results of the experiments suggest that the proposed method is effective and robust. To gain a deeper insight and understanding of the underlying mechanisms of the proposed model, a detailed empirical analysis of the models structural and behavioral properties is also provided."
},
{
"pmid": "21683563",
"title": "Feasibility of case-based beam generation for robotic radiosurgery.",
"abstract": "OBJECTIVE\nRobotic radiosurgery uses the kinematic flexibility of a robotic arm to target tumors and lesions from many different directions. This approach allows to focus the dose to the target region while sparing healthy surrounding tissue. However, the flexibility in the placement of treatment beams is also a challenge during treatment planning. We study an approach to make the search for treatment beams more efficient by considering previous treatment plans.\n\n\nMETHODS AND MATERIAL\nConventionally, a beam generation heuristic based on randomly selected candidate beams has been proven to be most robust in clinical practice. However, for prevalent types of cancer similarities in patient anatomy and dose prescription exist. We present a case-based approach that introduces a problem specific measure of similarity and allows to generate candidate beams from a database of previous treatment plans. Similarity between treatments is established based on projections of the organs and structures considered during planning, and the desired dose distribution. Solving the inverse planning problem a subset of treatment beams is determined and adapted to the new clinical case.\n\n\nRESULTS\nPreliminary experimental results indicate that the new approach leads to comparable plan quality for substantially fewer candidate beams. For two prostate cases, the dose homogeneity in the target region and the sparing of critical structures is similar for plans based on 400 and 600 candidate beams generated with the novel and the conventional method, respectively. However, the runtime for solving the inverse planning problem for could be reduced by up to 47%, i.e., from approximately 19 min to less than 11 min.\n\n\nCONCLUSION\nWe have shown the feasibility of case-based beam generation for robotic radiosurgery. For prevalent clinical cases with similar anatomy the cased-based approach could substantially reduce planning time while maintaining high plan quality."
},
{
"pmid": "15193344",
"title": "A similarity function to evaluate the orthodontic condition in patients with cleft lip and palate.",
"abstract": "The objective of this work is the modeling of a similarity function adapted to the medical environment using the logical-combinatorial approach of pattern recognition theory, and its application to compare the orthodontic conditions of patients with cleft-primary palate and/or cleft-secondary palate congenital malformations. The variables in domains with no a priori algebraic or topological structure are objects whose similarity or difference is evaluated by comparison criteria functions. The range of these functions is an ordered set normalized into the unit interval, and they are designed to allow differentiation and non-uniform treatment of the object-variables. The analogy between objects is formalized as a similarity function that stresses the relations among the comparison criteria and evaluates the partial descriptions (partial similarity/difference) or total descriptions (total similarity/difference) of the objects. For the orthodontic problem we defined a set of 12 variables featuring the unilateral/bilateral fissures, the conditions of maxilla, premaxilla, mandible and patient's bite. The comparison criteria (logical for malocclusion, fuzzy for maxillary collapse unilateral/anteroposterior and for overbite, and Boolean for protrusive/retrusive premaxilla conditions) were assigned a relevance factor based on the orthodontist accumulated knowledge and experience. The modeling of the similarity function and its effectiveness in comparing orthodontic conditions in patients are illustrated by the study of four clinical cases with different clefts. The results through similarity are close to the expected ones. Moreover evaluated at different moments it allows to assess the effect of treatment in a single patient, hence providing valuable auxiliary criteria for medical decision making as to the patient's rehabilitation. We include the potential extension of the methodology to other medical disciplines such as speech therapy and reconstructive surgery."
},
{
"pmid": "8057948",
"title": "Case-based explanation for medical diagnostic programs, with an example from gynaecology.",
"abstract": "One of the most accountable methods of providing machine assistance in medical diagnosis is to retrieve and display similar previously diagnosed cases from a database. In practice, however, classifying cases according to the diagnoses of their nearest neighbours is often significantly less accurate than other statistical classifiers. In this paper the transparency of the nearest neighbours method is combined with the accuracy of another statistical method. This is achieved by using the other statistical method to define a measure of similarity between the presentations of two cases. The diagnosis of abdominal pain of suspected gynaecological origin is used as a case study to evaluate this method. Bayes' theorem, with the usual assumption of conditional independence, is used to define a metric on cases. This new metric was found to correspond as well as Hamming distance to the clinical notion of \"similarity\" between cases, while significantly increasing accuracy to that of the Bayes' method itself."
},
{
"pmid": "8790451",
"title": "Protein secondary structure prediction using two-level case-based reasoning.",
"abstract": "We have developed a two-level case-based reasoning architecture for predicting protein secondary structure. The central idea is to break the problem into two levels: (i) reasoning at the object (protein) level and using the global information from this level to focus on a more restricted problem space; (ii) decomposing objects into pieces (segments) and reasoning at the level of internal structures. As a last step to the procedure, inferences from the parts of the internal structure are synthesized into predictions about global structure. The architecture has been developed and tested on a commonly used data set with 69.5% predictive accuracy. It was then tested on a new data set with 68.2% accuracy. With additional tuning, over 70% accuracy was achieved. In addition, a series of experiments were conducted to test various aspects of the method and the results are informative."
},
{
"pmid": "15694636",
"title": "Modelling a decision-support system for oncology using rule-based and case-based reasoning methodologies.",
"abstract": "In most hospital medical units, multidisciplinary committees meet weekly to discuss their patients' cases. The medical experts base their decisions on three sources of information. First, they check if their patient complies with existing guidelines. Failing these, the medical experts will base their therapeutic decisions on the cases of similar patients that they have treated in the past. We propose a multi-modal reasoning decision-support system based on both guideline and case series, which will automatically compare the patient's case to the corresponding guideline, then to other cases, and retrieve similar cases. The general structure of the system is presented here, the domain of application being oncology. As the patients' records are not currently stored in a database in a format which is directly accessible, an object-oriented model is proposed, which includes prognosis factors currently tested in clinical trials, well-established ones, and a description of the illness episodes. The system is designed to be a data warehouse. Such a system does not exist in the literature. Future work will be needed to define the similarity measures, and to connect the system to the current database."
},
{
"pmid": "20971621",
"title": "eXiT*CBR: A framework for case-based medical diagnosis development and experimentation.",
"abstract": "OBJECTIVE\nMedical applications have special features (interpretation of results in medical metrics, experiment reproducibility and dealing with complex data) that require the development of particular tools. The eXiT*CBR framework is proposed to support the development of and experimentation with new case-based reasoning (CBR) systems for medical diagnosis.\n\n\nMETHOD\nOur framework offers a modular, heterogeneous environment that combines different CBR techniques for different application requirements. The graphical user interface allows easy navigation through a set of experiments that are pre-visualized as plots (receiver operator characteristics (ROC) and accuracy curves). This user-friendly navigation allows easy analysis and replication of experiments. Used as a plug-in on the same interface, eXiT*CBR can work with any data mining technique such as determining feature relevance.\n\n\nRESULTS\nThe results show that eXiT*CBR is a user-friendly tool that facilitates medical users to utilize CBR methods to determine diagnoses in the field of breast cancer, dealing with different patterns implicit in the data.\n\n\nCONCLUSIONS\nAlthough several tools have been developed to facilitate the rapid construction of prototypes, none of them has taken into account the particularities of medical applications as an appropriate interface to medical users. eXiT*CBR aims to fill this gap. It uses CBR methods and common medical visualization tools, such as ROC plots, that facilitate the interpretation of the results. The navigation capabilities of this tool allow the tuning of different CBR parameters using experimental results. In addition, the tool allows experiment reproducibility."
},
{
"pmid": "28214658",
"title": "An association study of established breast cancer reproductive and lifestyle risk factors with tumour subtype defined by the prognostic 70-gene expression signature (MammaPrint®).",
"abstract": "BACKGROUND\nReproductive and lifestyle factors influence both breast cancer risk and prognosis; this might be through breast cancer subtype. Subtypes defined by immunohistochemical hormone receptor markers and gene expression signatures are used to predict prognosis of breast cancer patients based on their tumour biology. We investigated the association between established breast cancer risk factors and the 70-gene prognostication signature in breast cancer patients.\n\n\nPATIENTS AND METHODS\nStandardised questionnaires were used to obtain information on established risk factors of breast cancer from the Dutch patients of the MINDACT trial. Clinical-pathological and genomic information were obtained from the trial database. Logistic regression analyses were used to estimate the associations between lifestyle risk factors and tumour prognostic subtypes, measured by the 70-gene MammaPrint® signature (i.e. low-risk or high-risk tumours).\n\n\nRESULTS\nOf the 1555 breast cancer patients included, 910 had low-risk and 645 had high-risk tumours. Current body mass index (BMI), age at menarche, age at first birth, age at menopause, hormonal contraceptive use and hormone replacement therapy use were not associated with MammaPrint®. In parous women, higher parity was associated with a lower risk (OR: 0.75, [95% confidence interval {CI}: 0.59-0.95] P = 0.018) and longer breastfeeding duration with a higher risk (OR: 1.03, [95% CI: 1.01-1.05] P = 0.005) of developing high-risk tumours; risk estimates were similar within oestrogen receptor-positive disease. After stratifying by menopausal status, the associations remained present in post-menopausal women.\n\n\nCONCLUSION\nUsing prognostic gene expression profiles, we have indications that specific reproductive factors may be associated with prognostic tumour subtypes beyond hormone receptor status."
},
{
"pmid": "26457759",
"title": "The consensus molecular subtypes of colorectal cancer.",
"abstract": "Colorectal cancer (CRC) is a frequently lethal disease with heterogeneous outcomes and drug responses. To resolve inconsistencies among the reported gene expression-based CRC classifications and facilitate clinical translation, we formed an international consortium dedicated to large-scale data sharing and analytics across expert groups. We show marked interconnectivity between six independent classification systems coalescing into four consensus molecular subtypes (CMSs) with distinguishing features: CMS1 (microsatellite instability immune, 14%), hypermutated, microsatellite unstable and strong immune activation; CMS2 (canonical, 37%), epithelial, marked WNT and MYC signaling activation; CMS3 (metabolic, 13%), epithelial and evident metabolic dysregulation; and CMS4 (mesenchymal, 23%), prominent transforming growth factor-β activation, stromal invasion and angiogenesis. Samples with mixed features (13%) possibly represent a transition phenotype or intratumoral heterogeneity. We consider the CMS groups the most robust classification system currently available for CRC-with clear biological interpretability-and the basis for future clinical stratification and subtype-based targeted interventions."
},
{
"pmid": "26500200",
"title": "Safety and efficacy of palliative systemic chemotherapy combined with colorectal self-expandable metallic stents in advanced colorectal cancer: A multicenter study.",
"abstract": "PURPOSE\nSelf-expandable metallic stent (SEMS) placement is an accepted palliative therapy for management of acute malignant bowel obstruction in advanced colorectal cancer. Nevertheless, data are lacking on the effects of systemic chemotherapy combined with colorectal SEMS. The aim of this study was to investigate the safety and efficacy of palliative chemotherapy for advanced colorectal cancer combined with colorectal SEMS placement.\n\n\nPATIENTS AND METHODS\nThis multicentre retrospective study included all consecutive advanced colorectal cancer patients who received first-line palliative chemotherapy combined with endoscopic stenting for colorectal cancer with obstruction. We analyzed the number of cycles and the type of combination used. The primary endpoint was overall survival. Secondary endpoints included progression-free survival, response rate, grade 3-4 toxicity and the outcomes of SEMS for malignant colorectal obstruction.\n\n\nRESULTS\nA total of 38 patients were included. Among them, 25 patients received oxaliplatin and 5-fluorouracil combination chemotherapy. Objective response and stabilization occurred in 38 and 24% of patients, respectively. The median overall survival and progression-free survival from the start of chemotherapy were 18 and 5months, respectively. The objective response rate and overall disease control rate were 38 and 62%, respectively. Toxicity was generally acceptable. Major complications related to stenting included perforation (8%), stent migration (5%), and reobstruction secondary to tumor ingrowths (13%).\n\n\nCONCLUSIONS\nChemotherapy combined with colonic stenting as a first-line treatment seems to be a valid option in advanced colorectal cancer patients with malignant colorectal obstruction."
},
{
"pmid": "26121653",
"title": "Impaired Neonatal Outcome after Emergency Cerclage Adds Controversy to Prolongation of Pregnancy.",
"abstract": "OBJECTIVE\nEmergency cervical cerclage is one of the treatment options for the reduction of preterm birth. The aim of this study is to assess neonatal outcome after cerclage with special focus on adverse effects in very low birth weight infants.\n\n\nSTUDY DESIGN\nRetrospective cohort study. Classification of cerclages in history-indicated (HIC, n = 38), ultrasound-indicated (UIC, n = 29) and emergency/ physical examination-indicated (PEIC, n = 33) cerclage. Descriptive analysis of pregnancy and neonatal outcome (admission to NICU, duration of hospitalization, respiratory outcome (intubation, CPAP, FiO2max), neonatal complications (ROP, IVH)). Statistical comparison of perinatal parameters and outcome of neonates <1500 g after cerclage with a birth weight matched control group.\n\n\nRESULTS\nNeonates <1500 g after PEIC show significantly impaired outcome, i.e. prolonged respiratory support (total ventilation in days, CPAP, FiO2max) and higher rates of neonatal complications (IVH ≥ II, ROP ≥ 2). Placental pathologic evaluation revealed a significantly higher rate of chorioamnionitis (CAM) after PEIC. Neonates <1500 g after UIC or HIC show no significant difference in neonatal complications or CAM.\n\n\nCONCLUSIONS\nIn our study PEIC is associated with adverse neonatal outcome in infants <1500 g. The high incidence of CAM indicates a potential inflammatory factor in the pathogenesis. Large well-designed RCTs are required to give conclusive answers to the question whether to prolong or to deliver."
},
{
"pmid": "20336314",
"title": "Palliative radiotherapy for bleeding from advanced gastric cancer: is a schedule of 30 Gy in 10 fractions adequate?",
"abstract": "PURPOSE\nTo evaluate the effectiveness of short-course radiotherapy (RT) with 30 Gy in 10 fractions for bleeding from advanced gastric cancer.\n\n\nMETHODS\nWe reviewed the data for all patients with gastric cancer requiring blood transfusions due to gastric bleeding who were treated with RT at the Shizuoka Cancer Center Hospital between September 2002 and March 2007. Patients with curative-intent chemoradiotherapy or previous irradiation were excluded. RT was planned to deliver a total of 30 Gy at 3 Gy per fraction. We defined RT as effective if the patients did not require blood transfusions for 1 or more months after RT.\n\n\nRESULTS\nTwenty-two out of 30 patients (73%) responded to RT, and rebleeding occurred in 11 (50%) of 22 patients responding to RT. The median actuarial time to rebleeding was 3.3 months. Twelve patients received concurrent chemoradiotherapy and had a significantly lower rebleeding rate than patients undergoing RT alone (P = 0.001). Among patients receiving CRT, 1 with grade 3 non-hematological toxicity and 5 with grade 3-4 hematological toxicity were observed. No Grade 3 or higher adverse events were observed in patients treated with RT alone.\n\n\nCONCLUSIONS\nRT with 30 Gy in 10 fractions is an adequate treatment for bleeding from advanced gastric cancer, especially in patients with poor prognosis."
},
{
"pmid": "2778478",
"title": "Surgical adjuvant therapy of large-bowel carcinoma: an evaluation of levamisole and the combination of levamisole and fluorouracil. The North Central Cancer Treatment Group and the Mayo Clinic.",
"abstract": "A total of 401 eligible patients with resected stages B and C colorectal carcinoma were randomly assigned to no-further therapy or to adjuvant treatment with either levamisole alone, 150 mg/d for 3 days every 2 weeks for 1 year, or levamisole plus fluorouracil (5-FU), 450 mg/m2/d intravenously (IV) for 5 days and beginning at 28 days, 450 mg/m2 weekly for 1 year. Levamisole plus 5-FU, and to a lesser extent levamisole alone, reduced cancer recurrence in comparison with no adjuvant therapy. These differences, after correction for imbalances in prognostic variables, were only suggestive for levamisole alone (P = .05) but quite significant for levamisole plus 5-FU (P = .003). Whereas both treatment regimens were associated with overall improvements in survival, these improvements reached borderline significance only for stage C patients treated with levamisole plus 5-FU (P = .03). Therapy was clinically tolerable with either regimen and severe toxicity was uncommon. These promising results have led to a large national intergroup confirmatory trial currently in progress."
}
] |
Scientific Reports | 31672998 | PMC6823352 | 10.1038/s41598-019-50290-1 | MC-SleepNet: Large-scale Sleep Stage Scoring in Mice by Deep Neural Networks | Automated sleep stage scoring for mice is in high demand for sleep research, since manual scoring requires considerable human expertise and effort. The existing automated scoring methods do not provide the scoring accuracy required for practical use. In addition, the performance of such methods has generally been evaluated using rather small-scale datasets, and their robustness against individual differences and noise has not been adequately verified. This research proposes a novel automated scoring method named "MC-SleepNet", which combines two types of deep neural networks. Then, we evaluate its performance using a large-scale dataset that contains 4,200 biological signal records of mice. The experimental results show that MC-SleepNet can automatically score sleep stages with an accuracy of 96.6% and a kappa statistic of 0.94. In addition, we confirm that the scoring accuracy does not significantly decrease even if the target biological signals are noisy. These results suggest that MC-SleepNet is very robust against individual differences and noise. To the best of our knowledge, evaluations using such a large-scale dataset (containing 4,200 records) and high scoring accuracy (96.6%) have not been reported in previous related studies. | Related Work: Automated sleep stage scoring methods for mice: Several existing sleep stage scoring methods for mice have been proposed1–8. As mentioned above, performance evaluation using the large-scale dataset containing 4,200 mouse records is a contribution of this study. Here, we provide a brief introduction to the existing methods and describe the technical originality of this study. Although the existing sleep stage scoring methods all consist of a feature extraction phase and a scoring phase, the employed techniques/models are different. Most conventional sleep stage scoring methods employ FFT for extracting frequency-domain features2,4–7. For example, FASTER2 and MASC4 use FFT to extract several frequency components of EEG and EMG signals, which are effective in manual sleep stage scoring. Moreover, a scoring method employing a CNN has also been proposed8. Please see the SPINDLE section for more details. To model the relationship between the features and sleep stages, the existing methods employ various classification models, such as nonparametric density estimation clustering2, support vector machines4,6, an LSTM model5, and hidden Markov models7,8. Generally, models that can handle time-series data and consider sleep transition rules tend to achieve high scoring accuracy. For example, the LSTM model5 achieves scoring accuracy at almost the same level as the existing state-of-the-art method MASC. In contrast to these methods, we have adopted a CNN and a bi-LSTM for the feature extraction and scoring phases, respectively. The CNN can locate the effective features automatically, and the bi-LSTM can capture the sleep stage transition rules and the relationship between the target epoch and its neighboring epochs. By combining these deep learning models, MC-SleepNet achieves high accuracy and high robustness against individual differences and noise. In addition, we have also developed a rescoring model to improve the recall of REM. The other existing methods cannot adjust their accuracy according to the purpose of the research.
Thus, the rescoring model is another feature of MC-SleepNet. MASC: MASC4 is one of the state-of-the-art methods for mouse sleep stage scoring, proposed by Suzuki et al. in 20174. By using the sleep stages of consecutive neighboring epochs as features and employing a rescoring phase for uncertain epochs, MASC achieves a high scoring accuracy of 94.9%. However, the authors have reported that MASC is weak against noise in EEG and EMG signals4. In addition, MASC is not practical for large-scale scoring tasks due to the high computational complexity of the support vector machine, which is employed as its scoring model. SPINDLE: SPINDLE8 is a scoring method that employs a CNN for feature extraction and achieves a high accuracy of 96.8%. However, its authors adopted "Artifact" as a new sleep stage and ignored such epochs in the accuracy calculation (when "Artifact" epochs are included, its accuracy decreases to 88.6%). Moreover, the number of training samples is too small to train a CNN: they used sleep records obtained from only 4–8 mice/rats. Due to the shortage of training samples, the CNN could not capture the features associated with individual differences or noise. Thus, the robustness of SPINDLE against them is quite limited. Employing a CNN for feature extraction and training it with sufficient training samples are essential to make MC-SleepNet robust against individual differences and noise. This is likely the main reason why MC-SleepNet can score sleep stages more accurately than other existing methods. Automated sleep stage scoring methods for humans: When we designed MC-SleepNet, we referred to several automated methods for scoring human sleep stages18,25–27. In particular, some of our ideas were inspired by "DeepSleepNet" by A. Supratak et al.18. For example, their model also employs multiple CNN blocks with different filter sizes. However, their purpose and the modeled input/output relationship are quite different from those of MC-SleepNet. In addition, it uses only a single-channel EEG signal, while MC-SleepNet uses both EEG and EMG signals.
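As a rough illustration of the design philosophy described above (a CNN that extracts epoch-wise features from raw EEG/EMG, followed by a bidirectional LSTM that exploits stage-transition context across neighboring epochs), here is a minimal PyTorch-style sketch. The layer sizes, kernel widths, epoch length and three-stage output (Wake/NREM/REM) are illustrative assumptions and do not reproduce the actual MC-SleepNet or DeepSleepNet configurations.

```python
import torch
import torch.nn as nn

class CnnBiLstmScorer(nn.Module):
    """Toy sketch: per-epoch CNN feature extractor + bi-LSTM over an epoch sequence.

    Assumed input shape: (batch, seq_len, channels=2, samples_per_epoch) for EEG+EMG epochs.
    """
    def __init__(self, n_stages=3):
        super().__init__()
        self.cnn = nn.Sequential(                              # epoch-wise feature extraction
            nn.Conv1d(2, 16, kernel_size=50, stride=6), nn.ReLU(),
            nn.MaxPool1d(8),
            nn.Conv1d(16, 32, kernel_size=8), nn.ReLU(),
            nn.AdaptiveAvgPool1d(4),                           # -> (batch*seq, 32, 4)
        )
        self.bilstm = nn.LSTM(input_size=32 * 4, hidden_size=64,
                              batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * 64, n_stages)          # Wake / NREM / REM logits

    def forward(self, x):
        b, s, c, t = x.shape
        feats = self.cnn(x.view(b * s, c, t)).view(b, s, -1)   # features per epoch
        context, _ = self.bilstm(feats)                        # stage-transition context
        return self.classifier(context)                        # (batch, seq_len, n_stages)

# Toy forward pass: 4 records, 16 consecutive epochs of 2,000 samples each (assumed sizes)
logits = CnnBiLstmScorer()(torch.randn(4, 16, 2, 2000))
print(logits.shape)  # torch.Size([4, 16, 3])
```

In this spirit, such a model would be trained on labeled fixed-length epochs, and a separate rescoring model, as mentioned above, could then be applied to improve REM recall.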
"20123089",
"23621645",
"26366107",
"26928255",
"21884727",
"9377276",
"11032042",
"30106699",
"20567055",
"16112549",
"27806374",
"843571",
"30254177",
"10481909"
] | [
{
"pmid": "20123089",
"title": "EEG gamma frequency and sleep-wake scoring in mice: comparing two types of supervised classifiers.",
"abstract": "There is growing interest in sleep research and increasing demand for screening of circadian rhythms in genetically modified animals. This requires reliable sleep stage scoring programs. Present solutions suffer, however, from the lack of flexible adaptation to experimental conditions and unreliable selection of stage-discriminating variables. EEG was recorded in freely moving C57BL/6 mice and different sets of frequency variables were used for analysis. Parameters included conventional power spectral density functions as well as period-amplitude analysis. Manual staging was compared with the performance of two different supervised classifiers, linear discriminant analysis (LDA) and Classification Tree. Gamma activity was particularly high during REM (rapid eye movements) sleep and waking. Four out of 73 variables were most effective for sleep-wake stage separation: amplitudes of upper gamma-, delta- and upper theta-frequency bands and neck muscle EMG. Using small sets of training data, LDA produced better results than Classification Tree or a conventional threshold formula. Changing epoch duration (4 to 10s) had only minor effects on performance with 8 to 10s yielding the best results. Gamma and upper theta activity during REM sleep is particularly useful for sleep-wake stage separation. Linear discriminant analysis performs best in supervised automatic staging procedures. Reliable semi-automatic sleep scoring with LDA substantially reduces analysis time."
},
{
"pmid": "23621645",
"title": "FASTER: an unsupervised fully automated sleep staging method for mice.",
"abstract": "Identifying the stages of sleep, or sleep staging, is an unavoidable step in sleep research and typically requires visual inspection of electroencephalography (EEG) and electromyography (EMG) data. Currently, scoring is slow, biased and prone to error by humans and thus is the most important bottleneck for large-scale sleep research in animals. We have developed an unsupervised, fully automated sleep staging method for mice that allows less subjective and high-throughput evaluation of sleep. Fully Automated Sleep sTaging method via EEG/EMG Recordings (FASTER) is based on nonparametric density estimation clustering of comprehensive EEG/EMG power spectra. FASTER can accurately identify sleep patterns in mice that have been perturbed by drugs or by genetic modification of a clock gene. The overall accuracy is over 90% in every group. 24-h data are staged by a laptop computer in 10 min, which is faster than an experienced human rater. Dramatically improving the sleep staging process in both quality and throughput FASTER will open the door to quantitative and comprehensive animal sleep research."
},
{
"pmid": "26366107",
"title": "An automated sleep-state classification algorithm for quantifying sleep timing and sleep-dependent dynamics of electroencephalographic and cerebral metabolic parameters.",
"abstract": "INTRODUCTION\nRodent sleep research uses electroencephalography (EEG) and electromyography (EMG) to determine the sleep state of an animal at any given time. EEG and EMG signals, typically sampled at >100 Hz, are segmented arbitrarily into epochs of equal duration (usually 2-10 seconds), and each epoch is scored as wake, slow-wave sleep (SWS), or rapid-eye-movement sleep (REMS), on the basis of visual inspection. Automated state scoring can minimize the burden associated with state and thereby facilitate the use of shorter epoch durations.\n\n\nMETHODS\nWe developed a semiautomated state-scoring procedure that uses a combination of principal component analysis and naïve Bayes classification, with the EEG and EMG as inputs. We validated this algorithm against human-scored sleep-state scoring of data from C57BL/6J and BALB/CJ mice. We then applied a general homeostatic model to characterize the state-dependent dynamics of sleep slow-wave activity and cerebral glycolytic flux, measured as lactate concentration.\n\n\nRESULTS\nMore than 89% of epochs scored as wake or SWS by the human were scored as the same state by the machine, whether scoring in 2-second or 10-second epochs. The majority of epochs scored as REMS by the human were also scored as REMS by the machine. However, of epochs scored as REMS by the human, more than 10% were scored as SWS by the machine and 18 (10-second epochs) to 28% (2-second epochs) were scored as wake. These biases were not strain-specific, as strain differences in sleep-state timing relative to the light/dark cycle, EEG power spectral profiles, and the homeostatic dynamics of both slow waves and lactate were detected equally effectively with the automated method or the manual scoring method. Error associated with mathematical modeling of temporal dynamics of both EEG slow-wave activity and cerebral lactate either did not differ significantly when state scoring was done with automated versus visual scoring, or was reduced with automated state scoring relative to manual classification.\n\n\nCONCLUSIONS\nMachine scoring is as effective as human scoring in detecting experimental effects in rodent sleep studies. Automated scoring is an efficient alternative to visual inspection in studies of strain differences in sleep and the temporal dynamics of sleep-related physiological parameters."
},
{
"pmid": "26928255",
"title": "Multiple classifier systems for automatic sleep scoring in mice.",
"abstract": "BACKGROUND\nElectroencephalogram (EEG) and electromyogram (EMG) recordings are often used in rodents to study sleep architecture and sleep-associated neural activity. These recordings must be scored to designate what sleep/wake state the animal is in at each time point. Manual sleep-scoring is very time-consuming, so machine-learning classifier algorithms have been used to automate scoring.\n\n\nNEW METHOD\nInstead of using single classifiers, we implement a multiple classifier system. The multiple classifier is built from six base classifiers: decision tree, k-nearest neighbors, naïve Bayes, support vector machine, neural net, and linear discriminant analysis. Decision tree and k-nearest neighbors were improved into ensemble classifiers by using bagging and random subspace. Confidence scores from each classifier were combined to determine the final classification. Ambiguous epochs can be rejected and left for a human to classify.\n\n\nRESULTS\nSupport vector machine was the most accurate base classifier, and had error rate of 0.054. The multiple classifier system reduced the error rate to 0.049, which was not significantly different from a second human scorer. When 10% of epochs were rejected, the remaining epochs' error rate dropped to 0.018.\n\n\nCOMPARISON WITH EXISTING METHOD(S)\nCompared with the most accurate single classifier (support vector machine), the multiple classifier reduced errors by 9.4%. The multiple classifier surpassed the accuracy of a second human scorer after rejecting only 2% of epochs.\n\n\nCONCLUSIONS\nMultiple classifier systems are an effective way to increase automated sleep scoring accuracy. Improvements in autoscoring will allow sleep researchers to increase sample sizes and recording lengths, opening new experimental possibilities."
},
{
"pmid": "21884727",
"title": "Automated sleep scoring in rats and mice using the naive Bayes classifier.",
"abstract": "We describe a new simple MATLAB-based method for automated scoring of rat and mouse sleep using the naive Bayes classifier. This method is highly sensitive resulting in overall auto-rater agreement of 93%, comparable to an inter-rater agreement between two human scorers (92%), with high sensitivity and specificity values for wake (94% and 96%), NREM sleep (94% and 97%) and REM sleep (89% and 97%) states. In addition to baseline sleep-wake conditions, the performance of the naive Bayes classifier was assessed in sleep deprivation and drug infusion experiments, as well as in aged and transgenic animals using multiple EEG derivations. 24-h recordings from 30 different animals were used, with approximately 5% of the data manually scored as training data for the classification algorithm."
},
{
"pmid": "9377276",
"title": "Long short-term memory.",
"abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."
},
{
"pmid": "11032042",
"title": "Learning to forget: continual prediction with LSTM.",
"abstract": "Long short-term memory (LSTM; Hochreiter & Schmidhuber, 1997) can solve numerous tasks not solvable by previous learning algorithms for recurrent neural networks (RNNs). We identify a weakness of LSTM networks processing continual input streams that are not a priori segmented into subsequences with explicitly marked ends at which the network's internal state could be reset. Without resets, the state may grow indefinitely and eventually cause the network to break down. Our remedy is a novel, adaptive \"forget gate\" that enables an LSTM cell to learn to reset itself at appropriate times, thus releasing internal resources. We review illustrative benchmark problems on which standard LSTM outperforms other RNN algorithms. All algorithms (including LSTM) fail to solve continual versions of these problems. LSTM with forget gates, however, easily solves them, and in an elegant way."
},
{
"pmid": "30106699",
"title": "Multiscaled Fusion of Deep Convolutional Neural Networks for Screening Atrial Fibrillation From Single Lead Short ECG Recordings.",
"abstract": "Atrial fibrillation (AF) is one of the most common sustained chronic cardiac arrhythmia in elderly population, associated with a high mortality and morbidity in stroke, heart failure, coronary artery disease, systemic thromboembolism, etc. The early detection of AF is necessary for averting the possibility of disability or mortality. However, AF detection remains problematic due to its episodic pattern. In this paper, a multiscaled fusion of deep convolutional neural network (MS-CNN) is proposed to screen out AF recordings from single lead short electrocardiogram (ECG) recordings. The MS-CNN employs the architecture of two-stream convolutional networks with different filter sizes to capture features of different scales. The experimental results show that the proposed MS-CNN achieves 96.99% of classification accuracy on ECG recordings cropped/padded to 5 s. Especially, the best classification accuracy, 98.13%, is obtained on ECG recordings of 20 s. Compared with artificial neural network, shallow single-stream CNN, and VisualGeometry group network, the MS-CNN can achieve the better classification performance. Meanwhile, visualization of the learned features from the MS-CNN demonstrates its superiority in extracting linear separable ECG features without hand-craft feature engineering. The excellent AF screening performance of the MS-CNN can satisfy the most elders for daily monitoring with wearable devices."
},
{
"pmid": "20567055",
"title": "Convolutional neural networks for P300 detection with application to brain-computer interfaces.",
"abstract": "A Brain-Computer Interface (BCI) is a specific type of human-computer interface that enables the direct communication between human and computers by analyzing brain measurements. Oddball paradigms are used in BCI to generate event-related potentials (ERPs), like the P300 wave, on targets selected by the user. A P300 speller is based on this principle, where the detection of P300 waves allows the user to write characters. The P300 speller is composed of two classification problems. The first classification is to detect the presence of a P300 in the electroencephalogram (EEG). The second one corresponds to the combination of different P300 responses for determining the right character to spell. A new method for the detection of P300 waves is presented. This model is based on a convolutional neural network (CNN). The topology of the network is adapted to the detection of P300 waves in the time domain. Seven classifiers based on the CNN are proposed: four single classifiers with different features set and three multiclassifiers. These models are tested and compared on the Data set II of the third BCI competition. The best result is obtained with a multiclassifier solution with a recognition rate of 95.5 percent, without channel selection before the classification. The proposed approach provides also a new way for analyzing brain activities due to the receptive field of the CNN models."
},
{
"pmid": "16112549",
"title": "Framewise phoneme classification with bidirectional LSTM and other neural network architectures.",
"abstract": "In this paper, we present bidirectional Long Short Term Memory (LSTM) networks, and a modified, full gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short Term Memory (LSTM) is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it."
},
{
"pmid": "27806374",
"title": "Forward-genetics analysis of sleep in randomly mutagenized mice.",
"abstract": "Sleep is conserved from invertebrates to vertebrates, and is tightly regulated in a homeostatic manner. The molecular and cellular mechanisms that determine the amount of rapid eye movement sleep (REMS) and non-REMS (NREMS) remain unknown. Here we identify two dominant mutations that affect sleep and wakefulness by using an electroencephalogram/electromyogram-based screen of randomly mutagenized mice. A splicing mutation in the Sik3 protein kinase gene causes a profound decrease in total wake time, owing to an increase in inherent sleep need. Sleep deprivation affects phosphorylation of regulatory sites on the kinase, suggesting a role for SIK3 in the homeostatic regulation of sleep amount. Sik3 orthologues also regulate sleep in fruitflies and roundworms. A missense, gain-of-function mutation in the sodium leak channel NALCN reduces the total amount and episode duration of REMS, apparently by increasing the excitability of REMS-inhibiting neurons. Our results substantiate the use of a forward-genetics approach for studying sleep behaviours in mice, and demonstrate the role of SIK3 and NALCN in regulating the amount of NREMS and REMS, respectively."
},
{
"pmid": "843571",
"title": "The measurement of observer agreement for categorical data.",
"abstract": "This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. The procedure essentially involves the construction of functions of the observed proportions which are directed at the extent to which the observers agree among themselves and the construction of test statistics for hypotheses involving these functions. Tests for interobserver bias are presented in terms of first-order marginal homogeneity and measures of interobserver agreement are developed as generalized kappa-type statistics. These procedures are illustrated with a clinical diagnosis example from the epidemiological literature."
},
{
"pmid": "30254177",
"title": "A single phosphorylation site of SIK3 regulates daily sleep amounts and sleep need in mice.",
"abstract": "Sleep is an evolutionally conserved behavior from vertebrates to invertebrates. The molecular mechanisms that determine daily sleep amounts and the neuronal substrates for homeostatic sleep need remain unknown. Through a large-scale forward genetic screen of sleep behaviors in mice, we previously demonstrated that the Sleepy mutant allele of the Sik3 protein kinase gene markedly increases daily nonrapid-eye movement sleep (NREMS) amounts and sleep need. The Sleepy mutation deletes the in-frame exon 13 encoding a peptide stretch encompassing S551, a known PKA recognition site in SIK3. Here, we demonstrate that single amino acid changes at SIK3 S551 (S551A and S551D) reproduce the hypersomnia phenotype of the Sleepy mutant mice. These mice exhibit increased NREMS amounts and inherently increased sleep need, the latter demonstrated by increased duration of individual NREMS episodes and higher EEG slow-wave activity during NREMS. At the molecular level, deletion or mutation at SIK3 S551 reduces PKA recognition and abolishes 14-3-3 binding. Our results suggest that the evolutionally conserved S551 of SIK3 mediates, together with PKA and 14-3-3, the intracellular signaling crucial for the regulation of daily sleep amounts and sleep need at the organismal level."
},
{
"pmid": "10481909",
"title": "Narcolepsy in orexin knockout mice: molecular genetics of sleep regulation.",
"abstract": "Neurons containing the neuropeptide orexin (hypocretin) are located exclusively in the lateral hypothalamus and send axons to numerous regions throughout the central nervous system, including the major nuclei implicated in sleep regulation. Here, we report that, by behavioral and electroencephalographic criteria, orexin knockout mice exhibit a phenotype strikingly similar to human narcolepsy patients, as well as canarc-1 mutant dogs, the only known monogenic model of narcolepsy. Moreover, modafinil, an anti-narcoleptic drug with ill-defined mechanisms of action, activates orexin-containing neurons. We propose that orexin regulates sleep/wakefulness states, and that orexin knockout mice are a model of human narcolepsy, a disorder characterized primarily by rapid eye movement (REM) sleep dysregulation."
}
] |
Scientific Reports | 31673000 | PMC6823361 | 10.1038/s41598-019-51269-8 | High-speed and Large-scale Privacy Amplification Scheme for Quantum Key Distribution | State-of-the-art quantum key distribution (QKD) systems are operated at pulse rates of several GHz; meanwhile, privacy amplification (PA) with large-scale inputs has to be performed to generate the final secure keys with quantified security. In this paper, we propose a fast Fourier transform (FFT) enhanced high-speed and large-scale (HiLS) PA scheme on a commercial CPU platform without requiring additional dedicated computational devices. The long input weak secure key is divided into many blocks and the random seed for constructing the Toeplitz matrix is shuffled into multiple sub-sequences; PA procedures are then implemented in parallel for all sub-key blocks with the correlated sub-sequences, and afterwards the outcomes are merged as the final secure key. When the input scale is 128 Mb, our proposed HiLS PA scheme reaches 71.16 Mbps, 54.08 Mbps and 39.15 Mbps at compression ratios of 0.125, 0.25 and 0.375, respectively, resulting in achievable secure key generation rates close to the asymptotic limit. The HiLS PA scheme can be applied to 10 GHz QKD systems with even larger input scales; the evaluated throughput is around 32.49 Mbps at a compression ratio of 0.125 and an input scale of 1 Gb, which is ten times larger than in previous works for QKD systems. Furthermore, with limited computational resources, the achieved throughput of the HiLS PA scheme is 0.44 Mbps at a compression ratio of 0.125 when the input scale reaches 128 Gb. In theory, the PA used for randomness extraction in quantum random number generation (QRNG) is the same as the PA procedure in QKD, and our work can also be efficiently applied to high-speed QRNG. | Related Work: Privacy amplification was first proposed in the context of quantum key distribution by Bennett et al.6, where the channel with perfect authenticity but no privacy (the public classical channel) can be used to repair the defects of a channel with imperfect privacy but no authenticity (the quantum channel). The schematic diagram of PA in QKD is shown in Fig. 1: Alice and Bob first distribute quantum signals via a noisy and lossy quantum channel (fiber or free space), then share the correlated weak secure key W after basis/key sifting and error correction procedures via a public channel. The min-entropy of the shared weak secure key W is n. Let the random variable E summarize Eve's entire learned knowledge about W; here, H(W|E) ≤ t, t < n. In PA, Alice and Bob publicly choose an extractor function G: {0,1}^n → {0,1}^r that reduces Eve's learned information about the final secure key K_f from t to at most ε6,7,35,36. Nowadays, most practical extractors are universal hash functions, especially the (modified) Toeplitz matrix defined as13
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$$G(A)\,:=({I}_{r}|T(A))=[\begin{array}{ccccccccc}1 & & & & & {a}_{r-1} & {a}_{r} & \ldots & {a}_{n-2}\\ & 1 & & & & {a}_{r-2} & {a}_{r-1} & \ldots & {a}_{n-3}\\ & & \ddots & & & \vdots & \vdots & \ddots & \vdots \\ & & & & 1 & {a}_{0} & {a}_{1} & \cdots & {a}_{n-r-1}\end{array}],$$\end{document}G(A):=(Ir|T(A))=[1ar−1ar…an−21ar−2ar−1…an−3⋱⋮⋮⋱⋮1a0a1⋯an−r−1],where T(A) is a r × (n − r) Toeplitz matrix, A is a random seed, A = (a0, a1, …, an−1) ∈ {0,1}n−1, T(A)i,j = aj−i+r−1. Also, we define WI = (w0, w1, …, wr−1) and WTA = (wr, wr+1, …, wn−1). Therefore, the final secure key can be calculated as2\documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
\usepackage{amsfonts}
\usepackage{amssymb}
\usepackage{amsbsy}
\usepackage{mathrsfs}
\usepackage{upgreek}
\setlength{\oddsidemargin}{-69pt}
\begin{document}$${K}_{f}=G(A)W={I}_{r}\times ({w}_{0},{w}_{1},\ldots ,{w}_{r-1})\oplus T(A)\times ({w}_{r},{w}_{r+1},\ldots ,{w}_{n-1})={W}_{{\rm{I}}}\oplus T(A){W}_{{\rm{TA}}}.$$\end{document}Kf=G(A)W=Ir×(w0,w1,…,wr−1)⊕T(A)×(wr,wr+1,…,wn−1)=WI⊕T(A)WTA.Figure 1Schematic diagram of privacy amplification in quantum key distribution.In order to efficiently implement the calculation of T(A)WTA using fast Fourier transform (FFT), we have to extend T(A) to a special circulant Toeplitz matrix with scale of (n − 1) × (n − 1) and extend WTA to a vector with length of n − 1 by padding zeros. The optimized multiplication of a circulant matrix and a vector is shown as3\documentclass[12pt]{minimal}
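To make the hashing step of Eqs. (1)–(2) concrete, the following is a minimal, illustrative Python/NumPy sketch (not the authors' HiLS implementation): it builds the Toeplitz block densely from arbitrary toy sizes, a toy seed and a toy key, and ignores the FFT acceleration and block parallelism discussed next.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 16, 6                      # toy sizes: weak-key length n, output length r
W = rng.integers(0, 2, n)         # weak secure key shared by Alice and Bob
A = rng.integers(0, 2, n - 1)     # public random seed a_0 ... a_{n-2}

# Dense Toeplitz block T(A): shape r x (n - r), with T[i, j] = a_{j - i + r - 1}.
T = np.empty((r, n - r), dtype=int)
for i in range(r):
    for j in range(n - r):
        T[i, j] = A[j - i + r - 1]

W_I, W_TA = W[:r], W[r:]          # split W as in the text
K_f = (W_I + T @ W_TA) % 2        # Eq. (2): K_f = W_I XOR T(A) W_TA over GF(2)
print(K_f)                        # r-bit final secure key
```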
In order to efficiently implement the calculation of T(A)W_TA using the fast Fourier transform (FFT), we have to extend T(A) to a special circulant Toeplitz matrix of size (n − 1) × (n − 1) and extend W_TA to a vector of length n − 1 by zero padding. The optimized multiplication of a circulant matrix and a vector is

$$H \cdot X = F^{-1}[F(h) \ast F(X)], \tag{3}$$

where "∗" denotes the Hadamard (element-wise) product, F denotes the Fourier transform, F^{−1} is the inverse Fourier transform, X is a vector, and H is a circulant Toeplitz matrix with first row h. Since the complexity of the F and F^{−1} operations is O(n log n) and the complexity of the Hadamard product is O(n), the computational complexity of the optimized PA algorithm is O(n log n) [8,12].
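As a rough illustration of Eq. (3) (again a sketch, not the HiLS code), the snippet below multiplies a small binary circulant matrix by a vector via NumPy's FFT and checks the result against the direct dense product. It uses the first-column convention for the circulant matrix so that the FFT identity holds exactly as written (first-row versus first-column is a bookkeeping choice), and reduces the result modulo 2 as required for PA.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 8
h = rng.integers(0, 2, m).astype(float)   # defining sequence of the circulant matrix
X = rng.integers(0, 2, m).astype(float)   # zero-padded input vector

# Circulant matrix with first column h: H[i, j] = h[(i - j) mod m].
H = np.array([[h[(i - j) % m] for j in range(m)] for i in range(m)])

direct  = H @ X                                              # O(m^2) dense product
via_fft = np.fft.ifft(np.fft.fft(h) * np.fft.fft(X)).real    # O(m log m), Eq. (3)

assert np.allclose(direct, via_fft)
K_bits = np.rint(via_fft).astype(int) % 2                    # reduce to GF(2) for PA
print(K_bits)
```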
In theory, QKD can generate information-theoretically secure (ITS) keys for the communicating parties even if the quantum channel is under the control of the eavesdropper Eve. Imperfect implementations and active attacks leak some information about W to Eve, and Alice and Bob can quantify the bound on this leaked information accurately in the limit of infinite post-processing block size. In this paper, we take entanglement-based QKD as an example; the secure key rate can be calculated as [37]

$$R \ge q Q_{\mu} \nu_{s}\,[1 - H_{2}(e_{p}^{U}) - f(e_{b}) H_{2}(e_{b})], \tag{4}$$

where q is the basis sifting factor, Q_μ is the gain of detected entangled photon pairs, ν_s is the repetition rate of the entangled source, e_b is the measured quantum bit error rate, e_p^U is the estimated upper bound of the phase error rate, f(x) is the error correction efficiency, and H_2(x) is the binary Shannon entropy.
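For orientation, a small sketch of Eq. (4) in the asymptotic limit; the gain Q_μ and error rates below are illustrative placeholders rather than values derived from the channel model of Table 1, and e_p^U is simply set equal to e_b, which is appropriate only for infinite block sizes.

```python
import math

def h2(x):
    """Binary Shannon entropy H2(x)."""
    if x <= 0.0 or x >= 1.0:
        return 0.0
    return -x * math.log2(x) - (1.0 - x) * math.log2(1.0 - x)

# Illustrative parameters only (placeholders, not the simulated channel model).
q, nu_s, f_ec = 0.5, 10e9, 1.10   # sifting factor, 10 GHz repetition rate, EC efficiency
Q_mu, e_b = 1e-3, 0.015           # assumed gain per pulse and measured QBER
e_p_U = e_b                       # asymptotic-limit assumption

R = q * Q_mu * nu_s * (1.0 - h2(e_p_U) - f_ec * h2(e_b))   # Eq. (4), secure bits per second
print(f"R >= {R / 1e6:.2f} Mbit/s")
```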
In practice, e_p^U cannot be measured directly and cannot be estimated exactly, owing to statistical fluctuations at finite post-processing block sizes. Here, we simulate the required throughput of the PA algorithm in a 10 GHz entanglement-based QKD system with the parameters shown in Table 1. The entangled photon source is placed midway between the communicating parties, the finite-size effect on the final secure key K_f is considered for post-processing block sizes ranging from the order of 10^4 to infinity, and the failure probability for estimating e_p^U is εph = 10^-10 [4]. The results are shown in Fig. 2: the post-processing block size should be at least of the order of 10^8 to achieve a secure key rate close to the asymptotic limit. Directly implementing PA algorithms with such ultra-large-scale inputs will limit the performance of full QKD systems. Meanwhile, the required throughput of the PA algorithm is around 40 Mbps without any channel loss.
Table 1. Parameters used for the simulation of entanglement-based QKD: Pulse Repetition Rate νs = 10 GHz; Heralding Efficiency = 0.316; Dark Count Rate pd = 10^-7; Detector Efficiency ηd = 0.40; Misalignment Error Rate ed = 0.015; Error Correction Efficiency f = 1.10; Photon Pair Number per Coincidence Window μ = optimal; Basis Reconciliation Factor q = 0.50; Phase Error Estimation Failure Probability εph = 10^-10.
Figure 2. Required throughput of PA algorithms and final secure key rate with different block sizes for 10 GHz entanglement-based QKD systems, under the simulation parameters shown in Table 1. | [
"19495198",
"19529339",
"19475036",
"22446206",
"24005413",
"28737168"
] | [
{
"pmid": "19495198",
"title": "Quantum key distribution system clocked at 2 GHz.",
"abstract": "An improved quantum key distribution test system operating at clock rates of up to 2GHz using a specially adapted commercially-available silicon single-photon counting module is presented. The use of an enhanced detector has improved the fiber-based quantum key distribution test system performance in terms of transmission distance and quantum bit error rate."
},
{
"pmid": "19529339",
"title": "10-GHz clock differential phase shift quantum key distribution experiment.",
"abstract": "This paper reports the first quantum key distribution experiment implemented with a 10-GHz clock frequency. We used a 10-GHz actively mode-locked fiber laser as a source of short coherent pulses and single photon detectors based on frequency up-conversion in periodically poled lithium niobate waveguides. The use of short pulses and low-jitter upconversion detectors significantly reduced the bit errors caused by detector dark counts even after long-distance transmission of a weak coherent state pulse. We employed the differential phase shift quantum key distribution protocol, and generated sifted keys at a rate of 3.7 kbit/s over a 105 km fiber with a bit error rate of 9.7%."
},
{
"pmid": "19475036",
"title": "Quantum key distribution with 1.25 Gbps clock synchronization.",
"abstract": "We have demonstrated the exchange of sifted quantum cryptographic key over a 730 meter free-space link at rates of up to 1.0 Mbps, two orders of magnitude faster than previously reported results. A classical channel at 1550 nm operates in parallel with a quantum channel at 845 nm. Clock recovery techniques on the classical channel at 1.25 Gbps enable quantum transmission at up to the clock rate. System performance is currently limited by the timing resolution of our silicon avalanche photodiode detectors. With improved detector resolution, our technique will yield another order of magnitude increase in performance, with existing technology."
},
{
"pmid": "22446206",
"title": "2 GHz clock quantum key distribution over 260 km of standard telecom fiber.",
"abstract": "We report a demonstration of quantum key distribution (QKD) over a standard telecom fiber exceeding 50 dB in loss and 250 km in length. The differential phase shift QKD protocol was chosen and implemented with a 2 GHz system clock rate. By careful optimization of the 1 bit delayed Faraday-Michelson interferometer and the use of the superconducting single photon detector (SSPD), we achieved a quantum bit error rate below 2% when the fiber length was no more than 205 km, and of 3.45% for a 260 km fiber with 52.9 dB loss. We also improved the quantum efficiency of SSPD to obtain a high key rate for 50 km length."
},
{
"pmid": "24005413",
"title": "A quantum access network.",
"abstract": "The theoretically proven security of quantum key distribution (QKD) could revolutionize the way in which information exchange is protected in the future. Several field tests of QKD have proven it to be a reliable technology for cryptographic key exchange and have demonstrated nodal networks of point-to-point links. However, until now no convincing answer has been given to the question of how to extend the scope of QKD beyond niche applications in dedicated high security networks. Here we introduce and experimentally demonstrate the concept of a 'quantum access network': based on simple and cost-effective telecommunication technologies, the scheme can greatly expand the number of users in quantum networks and therefore vastly broaden their appeal. We show that a high-speed single-photon detector positioned at a network node can be shared between up to 64 users for exchanging secret keys with the node, thereby significantly reducing the hardware requirements for each user added to the network. This point-to-multipoint architecture removes one of the main obstacles restricting the widespread application of QKD. It presents a viable method for realizing multi-user QKD networks with efficient use of resources, and brings QKD closer to becoming a widespread technology."
},
{
"pmid": "28737168",
"title": "Distribution of high-dimensional entanglement via an intra-city free-space link.",
"abstract": "Quantum entanglement is a fundamental resource in quantum information processing and its distribution between distant parties is a key challenge in quantum communications. Increasing the dimensionality of entanglement has been shown to improve robustness and channel capacities in secure quantum communications. Here we report on the distribution of genuine high-dimensional entanglement via a 1.2-km-long free-space link across Vienna. We exploit hyperentanglement, that is, simultaneous entanglement in polarization and energy-time bases, to encode quantum information, and observe high-visibility interference for successive correlation measurements in each degree of freedom. These visibilities impose lower bounds on entanglement in each subspace individually and certify four-dimensional entanglement for the hyperentangled system. The high-fidelity transmission of high-dimensional entanglement under real-world atmospheric link conditions represents an important step towards long-distance quantum communications with more complex quantum systems and the implementation of advanced quantum experiments with satellite links."
}
] |
GigaScience | 31675414 | PMC6824458 | 10.1093/gigascience/giz095 | Sharing interoperable workflow provenance: A review of best practices and their practical application in CWLProv | AbstractBackgroundThe automation of data analysis in the form of scientific workflows has become a widely adopted practice in many fields of research. Computationally driven data-intensive experiments using workflows enable automation, scaling, adaptation, and provenance support. However, there are still several challenges associated with the effective sharing, publication, and reproducibility of such workflows due to the incomplete capture of provenance and lack of interoperability between different technical (software) platforms.ResultsBased on best-practice recommendations identified from the literature on workflow design, sharing, and publishing, we define a hierarchical provenance framework to achieve uniformity in provenance and support comprehensive and fully re-executable workflows equipped with domain-specific information. To realize this framework, we present CWLProv, a standard-based format to represent any workflow-based computational analysis to produce workflow output artefacts that satisfy the various levels of provenance. We use open source community-driven standards, interoperable workflow definitions in Common Workflow Language (CWL), structured provenance representation using the W3C PROV model, and resource aggregation and sharing as workflow-centric research objects generated along with the final outputs of a given workflow enactment. We demonstrate the utility of this approach through a practical implementation of CWLProv and evaluation using real-life genomic workflows developed by independent groups.ConclusionsThe underlying principles of the standards utilized by CWLProv enable semantically rich and executable research objects that capture computational workflows with retrospective provenance such that any platform supporting CWL will be able to understand the analysis, reuse the methods for partial reruns, or reproduce the analysis to validate the published findings. | Related workWe focus on relevant studies and efforts trying to resolve the issue of availability of required resources used in a given computational analysis. In addition, we cover efforts directed towards provenance capture of workflow enactments. We restrict our attention to scientific workflows and studies related to the bioinformatics domain.Workflow software environment capture
Freezing and packaging the runtime environment to encompass all the software components and their dependencies used in an analysis is a recommended and widely adopted practice [20] especially after use of cloud computing resources where images and snapshots of the cloud instances are created and shared with fellow researchers [21]. Nowadays, preservation and sharing of the software environment, e.g., in open access repositories, is becoming a regular practice in the workflow domain as well. Leading platforms managing infrastructure and providing cloud computing services and configuration on demand include DigitalOcean [22], Amazon Elastic Compute Cloud [23], Google Cloud Platform [24], and Microsoft Azure [25]. The instances launched on these platforms can be saved as snapshots and published with an analysis study to later recreate an instance representing the computing state at analysis time.Using “system-wide packaging" for data-driven analyses, although simplest on the part of the workflow developers and researchers, has its own caveats. One notable issue is the size of the snapshot as it captures everything in an instance at a given time; hence, the size can range from a few gigabytes to many terabytes. To distribute research software and share execution environments, various lightweight and container-based virtualization and package managers are emerging, including Docker, Singularity, Debian Med, and Bioconda.
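For concreteness, a minimal sketch of what pinning an analysis step to a containerized environment can look like from Python; it assumes a local Docker installation, and the image tag is a hypothetical placeholder rather than one taken from the cited tools or from CWLProv.

```python
import os
import subprocess

# Hypothetical version-pinned image; recording this tag alongside the workflow
# is what allows the exact software environment to be re-created later.
IMAGE = "quay.io/biocontainers/samtools:1.9--h8571acd_12"  # placeholder tag

def run_in_container(args, workdir=None):
    """Run a command inside the pinned container, mounting the working directory."""
    workdir = workdir or os.getcwd()
    cmd = ["docker", "run", "--rm",
           "-v", f"{workdir}:/data", "-w", "/data",
           IMAGE] + list(args)
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

if __name__ == "__main__":
    print(run_in_container(["samtools", "--version"]))
```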
Docker [26] is a lightweight container-based virtualization technology that facilitates the automation of application development by archiving software systems and environment to improve portability of the applications on many common platforms including Linux, Microsoft Windows, Mac OS X, and cloud instances. Singularity [27] is also a cross-platform open source container engine specifically supporting high-performance computing resources. An existing Docker format software image can be imported and used by the Singularity container engine. Debian Med [28] contributes packages of medical practice and biomedical research to the Debian Linux distribution, lately also including workflows. Bioconda [29] packages, based on the open source package manager Conda [30], are available for Mac OS X and Linux environments, directing towards availability and portability of software used in the life sciences domain.Data/method preservation, aggregation, and sharingPreserving and sharing only the software environment is not enough to verify results of any computational analysis or reuse the methods (e.g., workflows) with a different dataset. It is also necessary to share other details including data (example or the original), scripts, workflow files, input configuration settings, the hypothesis of the experiment, and any/all trace/logging information related to “what happened," i.e., the retrospective provenance of the actual workflow enactment. The publishing of resources to improve the state of scholarly publications is now supported by various online repositories, including Zenodo [31], GitHub [32], myExperiment [33], and Figshare [34]. These repositories facilitate collaborative research, in addition to public sharing of source code and the results of a given analysis. There is however no standard format that must be followed when someone shares artefacts associated with an analysis. As a result, the quality of the shared resources can range from a highly annotated, properly documented and complete set of artefacts to raw data with undocumented code and incomplete information about the analysis as a whole. Individual organizations or groups might provide a set of “recommended practices," e.g., in readme files, to attempt to maintain the quality of shared resources. The initiative Code as a Research Object [35] is a joint project between Figshare, GitHub, and Mozilla Science Lab [36] and aims to archive any GitHub code repository to Figshare and produce a DOI to improve the discovery of resources (for the source code that supports this work we have used a similar publishing feature with Zenodo).ReproZip [37] aims to resolve portability issues by identifying and packaging all dependencies in a self-contained package that when unpacked and executed on another system (with ReproZip installed) should reproduce the methods and results of the analysis. Each package also contains a human-readable configuration file containing provenance information obtained by tracing system calls during system execution. The corresponding provenance trace is however not formatted using existing open standards established by the community. Several platform-dependent studies have been targeted towards extensions to existing standards by implementing the RO model and improving aggregation of resources. Belhajjame et al. [8] proposed the application of ROs to develop workflow-centric ROs containing data and metadata to support the understandability of the utilized methods (in this case workflow specifications). 
They explored 5 essential requirements to workflow preservation and identified data and metadata that could be stored to satisfy the said requirements. These requirements include providing example data, preserving workflows with provenance traces, annotating workflows, tracking the evolution in workflows, and packaging the auxiliary data and information with workflows. They proposed extensions to existing ontologies such as Object Reuse and Exchange (ORE), the Annotation Ontology (AO), and PROV-O, with 4 additional ontologies to represent workflow-specific information. However, as they state, the scope of the proposed model at that time was not focused on interoperability of heterogeneous workflows because it was demonstrated for a workflow specific to Taverna WMS using myExperiment, which makes it quite platform-dependent.A domain-specific solution was proposed by Gomez-Perez et al. [38] by extending the RO model to equip workflow-centric ROs with information catering to the specific needs of the earth science community, resulting in enhanced discovery and reusability by experts. They demonstrated that the principles of ROs can support extensions to generate aggregated resources leveraging domain-specific knowledge. Hettne et al. [11] used 3 genomic workflow case studies to demonstrate the use of ROs to capture methods and data supporting querying and useful extraction of information about the scientific investigation under observation. The solution was tightly coupled with the Taverna WMS and hence, if shared, would not be reproducible outside of the Taverna environment. Other notable efforts to use ROs for workflow preservation and method aggregation have been undertaken in systems biology [39], in clinical settings [40], and in precision medicine [41].Provenance capture and standardizationA range of standards for provenance representation have been proposed. Many studies have emphasized the need for provenance focusing on aspects such as scalability, granularity, security, authenticity, modelling, and annotation [14]. They identify the need to support standardized dialogues to make provenance interoperable. Many of these were used as inputs to initial attempts at creating a standard Provenance Model to tackle the often inconsistent and disjointed terminology related to provenance concepts. This ultimately resulted in the specification of the Open Provenance Model (OPM) [42] together with an open source model for the governance of OPM [43]. Working towards similar goals of interoperability and standardization of provenance for web technologies, the W3C Provenance Incubator Group [44] and the authors of OPM together set the fourth provenance challenge at the International Provenance and Annotation Workshop, 2010 (IPAW’10), that later resulted in PROV, a family of documents serving as the conceptual model for provenance capture and its representation, sharing, and exchange over the Web [45] regardless of the domain or platform. Since then, a number of studies have proposed extensions to this domain-neutral standard. The model is general enough to be adapted to any field and flexible enough to allow extensions for specialized cases.Michaelides et al. [46] presented a domain-specific PROV-based solution for retrospective provenance to support portability and reproducibility of a statistical software suite. They captured the essential elements from the log of a workflow enactment and represented them using an intermediate notation. 
This representation was later translated to PROV-N and used as the basis for the PROV Template System. A Linux-specific system provenance approach was proposed by Pasquier et al. [47], who demonstrated retrospective provenance capture at the system level. Another project, UniProv, is working to extract information from Unicore middleware and transform it into a PROV-O representation to facilitate the backtracking of workflow enactments [48]. Other notable domain-specific efforts leveraging the established standards to record provenance and context information are PROV-man [49], PoeM [50], and micropublications [51]. Platforms such as VisTrails and Taverna have built in retrospective provenance support. Taverna [39] implements an extensive provenance capture system, TavernaProv [52], using both PROV ontologies as well as ROs aggregating the resources used in an analysis. VisTrails [53] is an open source project supporting platform-dependent provenance capture, visualization, and querying for extraction of required information about a workflow enactment. Chirigati et al. [37] provide an overview of PROV terms and how they can be translated from the VisTrails schema and serialized to PROV-XML. WINGS [54] can report fine-grained workflow execution provenance as Linked Data using the Open Provenance Model for Workflows ontology [55], which builds on both PROV-O and OPM.All these efforts are fairly recent and use a standardized approach to provenance capture and hence are relevant to our work on the capture of retrospective provenance. However, our aim is a domain-neutral and platform-independent solution that can be easily adapted for any domain and shared across different platforms and operating systems.As evident from the literature, there are efforts in progress to resolve the issues associated with effective and complete sharing of computational analysis including both the results and provenance information. These studies range from highly domain-specific solutions and platform-dependent objects to open source flexible interoperable standards. CWL has widespread adoption as a workflow definition standard and hence is an ideal candidate for portable workflow definitions. The next section investigates existing studies focused on workflow-centric science and summarizes best-practice recommendations put forward in these studies. From this we define a hierarchical provenance and resource-sharing framework. | [
"26151137",
"25276335",
"28494014",
"29967506",
"20501605",
"23640334",
"25805205",
"26261718",
"26978244",
"22898652",
"24312207",
"24204232",
"28701218",
"27940837",
"22028928",
"23479348",
"29069476",
"27896971",
"28398314",
"28398311",
"26334920",
"23104886",
"19505943",
"22539670",
"21816040",
"22975805",
"29552334",
"23203883",
"24812344",
"22581179"
] | [
{
"pmid": "26151137",
"title": "Big Data: Astronomical or Genomical?",
"abstract": "Genomics is a Big Data science and is going to get much bigger, very soon, but it is not known whether the needs of genomics will exceed other Big Data domains. Projecting to the year 2025, we compared genomics with three other major generators of Big Data: astronomy, YouTube, and Twitter. Our estimates show that genomics is a \"four-headed beast\"--it is either on par with or the most demanding of the domains analyzed here in terms of data acquisition, storage, distribution, and analysis. We discuss aspects of new technologies that will need to be developed to rise up and meet the computational challenges that genomics poses for the near future. Now is the time for concerted, community-wide planning for the \"genomical\" challenges of the next decade."
},
{
"pmid": "25276335",
"title": "Structuring research methods and data with the research object model: genomics workflows as a case study.",
"abstract": "BACKGROUND\nOne of the main challenges for biomedical research lies in the computer-assisted integrative study of large and increasingly complex combinations of data in order to understand molecular mechanisms. The preservation of the materials and methods of such computational experiments with clear annotations is essential for understanding an experiment, and this is increasingly recognized in the bioinformatics community. Our assumption is that offering means of digital, structured aggregation and annotation of the objects of an experiment will provide necessary meta-data for a scientist to understand and recreate the results of an experiment. To support this we explored a model for the semantic description of a workflow-centric Research Object (RO), where an RO is defined as a resource that aggregates other resources, e.g., datasets, software, spreadsheets, text, etc. We applied this model to a case study where we analysed human metabolite variation by workflows.\n\n\nRESULTS\nWe present the application of the workflow-centric RO model for our bioinformatics case study. Three workflows were produced following recently defined Best Practices for workflow design. By modelling the experiment as an RO, we were able to automatically query the experiment and answer questions such as \"which particular data was input to a particular workflow to test a particular hypothesis?\", and \"which particular conclusions were drawn from a particular workflow?\".\n\n\nCONCLUSIONS\nApplying a workflow-centric RO model to aggregate and annotate the resources used in a bioinformatics experiment, allowed us to retrieve the conclusions of the experiment in the context of the driving hypothesis, the executed workflows and their input data. The RO model is an extendable reference model that can be used by other systems as well.\n\n\nAVAILABILITY\nThe Research Object is available at http://www.myexperiment.org/packs/428 The Wf4Ever Research Object Model is available at http://wf4ever.github.io/ro."
},
{
"pmid": "28494014",
"title": "Singularity: Scientific containers for mobility of compute.",
"abstract": "Here we present Singularity, software developed to bring containers and reproducibility to scientific computing. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms. Singularity is an open source initiative that harnesses the expertise of system and software engineers and researchers alike, and integrates seamlessly into common workflows for both of these groups. As its primary use case, Singularity brings mobility of computing to both users and HPC centers, providing a secure means to capture and distribute software and compute environments. This ability to create and deploy reproducible environments across these centers, a previously unmet need, makes Singularity a game changing development for computational science."
},
{
"pmid": "20501605",
"title": "myExperiment: a repository and social network for the sharing of bioinformatics workflows.",
"abstract": "myExperiment (http://www.myexperiment.org) is an online research environment that supports the social sharing of bioinformatics workflows. These workflows are procedures consisting of a series of computational tasks using web services, which may be performed on data from its retrieval, integration and analysis, to the visualization of the results. As a public repository of workflows, myExperiment allows anybody to discover those that are relevant to their research, which can then be reused and repurposed to their specific requirements. Conversely, developers can submit their workflows to myExperiment and enable them to be shared in a secure manner. Since its release in 2007, myExperiment currently has over 3500 registered users and contains more than 1000 workflows. The social aspect to the sharing of these workflows is facilitated by registered users forming virtual communities bound together by a common interest or research project. Contributors of workflows can build their reputation within these communities by receiving feedback and credit from individuals who reuse their work. Further documentation about myExperiment including its REST web service is available from http://wiki.myexperiment.org. Feedback and requests for support can be sent to [email protected]."
},
{
"pmid": "23640334",
"title": "The Taverna workflow suite: designing and executing workflows of Web Services on the desktop, web or in the cloud.",
"abstract": "The Taverna workflow tool suite (http://www.taverna.org.uk) is designed to combine distributed Web Services and/or local tools into complex analysis pipelines. These pipelines can be executed on local desktop machines or through larger infrastructure (such as supercomputers, Grids or cloud environments), using the Taverna Server. In bioinformatics, Taverna workflows are typically used in the areas of high-throughput omics analyses (for example, proteomics or transcriptomics), or for evidence gathering methods involving text mining or data mining. Through Taverna, scientists have access to several thousand different tools and resources that are freely available from a large range of life science institutions. Once constructed, the workflows are reusable, executable bioinformatics protocols that can be shared, reused and repurposed. A repository of public workflows is available at http://www.myexperiment.org. This article provides an update to the Taverna tool suite, highlighting new features and developments in the workbench and the Taverna Server."
},
{
"pmid": "25805205",
"title": "The Study Team for Early Life Asthma Research (STELAR) consortium 'Asthma e-lab': team science bringing data, methods and investigators together.",
"abstract": "We created Asthma e-Lab, a secure web-based research environment to support consistent recording, description and sharing of data, computational/statistical methods and emerging findings across the five UK birth cohorts. The e-Lab serves as a data repository for our unified dataset and provides the computational resources and a scientific social network to support collaborative research. All activities are transparent, and emerging findings are shared via the e-Lab, linked to explanations of analytical methods, thus enabling knowledge transfer. eLab facilitates the iterative interdisciplinary dialogue between clinicians, statisticians, computer scientists, mathematicians, geneticists and basic scientists, capturing collective thought behind the interpretations of findings."
},
{
"pmid": "26261718",
"title": "Micropublications: a semantic model for claims, evidence, arguments and annotations in biomedical communications.",
"abstract": "BACKGROUND\nScientific publications are documentary representations of defeasible arguments, supported by data and repeatable methods. They are the essential mediating artifacts in the ecosystem of scientific communications. The institutional \"goal\" of science is publishing results. The linear document publication format, dating from 1665, has survived transition to the Web. Intractable publication volumes; the difficulty of verifying evidence; and observed problems in evidence and citation chains suggest a need for a web-friendly and machine-tractable model of scientific publications. This model should support: digital summarization, evidence examination, challenge, verification and remix, and incremental adoption. Such a model must be capable of expressing a broad spectrum of representational complexity, ranging from minimal to maximal forms.\n\n\nRESULTS\nThe micropublications semantic model of scientific argument and evidence provides these features. Micropublications support natural language statements; data; methods and materials specifications; discussion and commentary; challenge and disagreement; as well as allowing many kinds of statement formalization. The minimal form of a micropublication is a statement with its attribution. The maximal form is a statement with its complete supporting argument, consisting of all relevant evidence, interpretations, discussion and challenges brought forward in support of or opposition to it. Micropublications may be formalized and serialized in multiple ways, including in RDF. They may be added to publications as stand-off metadata. An OWL 2 vocabulary for micropublications is available at http://purl.org/mp. A discussion of this vocabulary along with RDF examples from the case studies, appears as OWL Vocabulary and RDF Examples in Additional file 1.\n\n\nCONCLUSION\nMicropublications, because they model evidence and allow qualified, nuanced assertions, can play essential roles in the scientific communications ecosystem in places where simpler, formalized and purely statement-based models, such as the nanopublications model, will not be sufficient. At the same time they will add significant value to, and are intentionally compatible with, statement-based formalizations. We suggest that micropublications, generated by useful software tools supporting such activities as writing, editing, reviewing, and discussion, will be of great value in improving the quality and tractability of biomedical communications."
},
{
"pmid": "26978244",
"title": "The FAIR Guiding Principles for scientific data management and stewardship.",
"abstract": "There is an urgent need to improve the infrastructure supporting the reuse of scholarly data. A diverse set of stakeholders-representing academia, industry, funding agencies, and scholarly publishers-have come together to design and jointly endorse a concise and measureable set of principles that we refer to as the FAIR Data Principles. The intent is that these may act as a guideline for those wishing to enhance the reusability of their data holdings. Distinct from peer initiatives that focus on the human scholar, the FAIR Principles put specific emphasis on enhancing the ability of machines to automatically find and use the data, in addition to supporting its reuse by individuals. This Comment is the first formal publication of the FAIR Principles, and includes the rationale behind them, and some exemplar implementations in the community."
},
{
"pmid": "22898652",
"title": "Next-generation sequencing data interpretation: enhancing reproducibility and accessibility.",
"abstract": "Areas of life sciences research that were previously distant from each other in ideology, analysis practices and toolkits, such as microbial ecology and personalized medicine, have all embraced techniques that rely on next-generation sequencing instruments. Yet the capacity to generate the data greatly outpaces our ability to analyse it. Existing sequencing technologies are more mature and accessible than the methodologies that are available for individual researchers to move, store, analyse and present data in a fashion that is transparent and reproducible. Here we discuss currently pressing issues with analysis, interpretation, reproducibility and accessibility of these data, and we present promising solutions and venture into potential future developments."
},
{
"pmid": "24312207",
"title": "Quantifying reproducibility in computational biology: the case of the tuberculosis drugome.",
"abstract": "How easy is it to reproduce the results found in a typical computational biology paper? Either through experience or intuition the reader will already know that the answer is with difficulty or not at all. In this paper we attempt to quantify this difficulty by reproducing a previously published paper for different classes of users (ranging from users with little expertise to domain experts) and suggest ways in which the situation might be improved. Quantification is achieved by estimating the time required to reproduce each of the steps in the method described in the original paper and make them part of an explicit workflow that reproduces the original results. Reproducing the method took several months of effort, and required using new versions and new software that posed challenges to reconstructing and validating the results. The quantification leads to \"reproducibility maps\" that reveal that novice researchers would only be able to reproduce a few of the steps in the method, and that only expert researchers with advance knowledge of the domain would be able to reproduce the method in its entirety. The workflow itself is published as an online resource together with supporting software and data. The paper concludes with a brief discussion of the complexities of requiring reproducibility in terms of cost versus benefit, and a desiderata with our observations and guidelines for improving reproducibility. This has implications not only in reproducing the work of others from published papers, but reproducing work from one's own laboratory."
},
{
"pmid": "28701218",
"title": "Investigating reproducibility and tracking provenance - A genomic workflow case study.",
"abstract": "BACKGROUND\nComputational bioinformatics workflows are extensively used to analyse genomics data, with different approaches available to support implementation and execution of these workflows. Reproducibility is one of the core principles for any scientific workflow and remains a challenge, which is not fully addressed. This is due to incomplete understanding of reproducibility requirements and assumptions of workflow definition approaches. Provenance information should be tracked and used to capture all these requirements supporting reusability of existing workflows.\n\n\nRESULTS\nWe have implemented a complex but widely deployed bioinformatics workflow using three representative approaches to workflow definition and execution. Through implementation, we identified assumptions implicit in these approaches that ultimately produce insufficient documentation of workflow requirements resulting in failed execution of the workflow. This study proposes a set of recommendations that aims to mitigate these assumptions and guides the scientific community to accomplish reproducible science, hence addressing reproducibility crisis.\n\n\nCONCLUSIONS\nReproducing, adapting or even repeating a bioinformatics workflow in any environment requires substantial technical knowledge of the workflow execution environment, resolving analysis assumptions and rigorous compliance with reproducibility requirements. Towards these goals, we propose conclusive recommendations that along with an explicit declaration of workflow specification would result in enhanced reproducibility of computational genomic analyses."
},
{
"pmid": "22028928",
"title": "Resources and costs for microbial sequence analysis evaluated using virtual machines and cloud computing.",
"abstract": "BACKGROUND\nThe widespread popularity of genomic applications is threatened by the \"bioinformatics bottleneck\" resulting from uncertainty about the cost and infrastructure needed to meet increasing demands for next-generation sequence analysis. Cloud computing services have been discussed as potential new bioinformatics support systems but have not been evaluated thoroughly.\n\n\nRESULTS\nWe present benchmark costs and runtimes for common microbial genomics applications, including 16S rRNA analysis, microbial whole-genome shotgun (WGS) sequence assembly and annotation, WGS metagenomics and large-scale BLAST. Sequence dataset types and sizes were selected to correspond to outputs typically generated by small- to midsize facilities equipped with 454 and Illumina platforms, except for WGS metagenomics where sampling of Illumina data was used. Automated analysis pipelines, as implemented in the CloVR virtual machine, were used in order to guarantee transparency, reproducibility and portability across different operating systems, including the commercial Amazon Elastic Compute Cloud (EC2), which was used to attach real dollar costs to each analysis type. We found considerable differences in computational requirements, runtimes and costs associated with different microbial genomics applications. While all 16S analyses completed on a single-CPU desktop in under three hours, microbial genome and metagenome analyses utilized multi-CPU support of up to 120 CPUs on Amazon EC2, where each analysis completed in under 24 hours for less than $60. Representative datasets were used to estimate maximum data throughput on different cluster sizes and to compare costs between EC2 and comparable local grid servers.\n\n\nCONCLUSIONS\nAlthough bioinformatics requirements for microbial genomics depend on dataset characteristics and the analysis protocols applied, our results suggests that smaller sequencing facilities (up to three Roche/454 or one Illumina GAIIx sequencer) invested in 16S rRNA amplicon sequencing, microbial single-genome and metagenomics WGS projects can achieve cost-efficient bioinformatics support using CloVR in combination with Amazon EC2 as an alternative to local computing centers."
},
{
"pmid": "23479348",
"title": "EDAM: an ontology of bioinformatics operations, types of data and identifiers, topics and formats.",
"abstract": "MOTIVATION\nAdvancing the search, publication and integration of bioinformatics tools and resources demands consistent machine-understandable descriptions. A comprehensive ontology allowing such descriptions is therefore required.\n\n\nRESULTS\nEDAM is an ontology of bioinformatics operations (tool or workflow functions), types of data and identifiers, application domains and data formats. EDAM supports semantic annotation of diverse entities such as Web services, databases, programmatic libraries, standalone tools, interactive applications, data schemas, datasets and publications within bioinformatics. EDAM applies to organizing and finding suitable tools and data and to automating their integration into complex applications or workflows. It includes over 2200 defined concepts and has successfully been used for annotations and implementations.\n\n\nAVAILABILITY\nThe latest stable version of EDAM is available in OWL format from http://edamontology.org/EDAM.owl and in OBO format from http://edamontology.org/EDAM.obo. It can be viewed online at the NCBO BioPortal and the EBI Ontology Lookup Service. For documentation and license please refer to http://edamontology.org. This article describes version 1.2 available at http://edamontology.org/EDAM_1.2.owl.\n\n\nCONTACT\[email protected]."
},
{
"pmid": "29069476",
"title": "EBI Metagenomics in 2017: enriching the analysis of microbial communities, from sequence reads to assemblies.",
"abstract": "EBI metagenomics (http://www.ebi.ac.uk/metagenomics) provides a free to use platform for the analysis and archiving of sequence data derived from the microbial populations found in a particular environment. Over the past two years, EBI metagenomics has increased the number of datasets analysed 10-fold. In addition to increased throughput, the underlying analysis pipeline has been overhauled to include both new or updated tools and reference databases. Of particular note is a new workflow for taxonomic assignments that has been extended to include assignments based on both the large and small subunit RNA marker genes and to encompass all cellular micro-organisms. We also describe the addition of metagenomic assembly as a new analysis service. Our pilot studies have produced over 2400 assemblies from datasets in the public domain. From these assemblies, we have produced a searchable, non-redundant protein database of over 50 million sequences. To provide improved access to the data stored within the resource, we have developed a programmatic interface that provides access to the analysis results and associated sample metadata. Finally, we have integrated the results of a series of statistical analyses that provide estimations of diversity and sample comparisons."
},
{
"pmid": "27896971",
"title": "RABIX: AN OPEN-SOURCE WORKFLOW EXECUTOR SUPPORTING RECOMPUTABILITY AND INTEROPERABILITY OF WORKFLOW DESCRIPTIONS.",
"abstract": "As biomedical data has become increasingly easy to generate in large quantities, the methods used to analyze it have proliferated rapidly. Reproducible and reusable methods are required to learn from large volumes of data reliably. To address this issue, numerous groups have developed workflow specifications or execution engines, which provide a framework with which to perform a sequence of analyses. One such specification is the Common Workflow Language, an emerging standard which provides a robust and flexible framework for describing data analysis tools and workflows. In addition, reproducibility can be furthered by executors or workflow engines which interpret the specification and enable additional features, such as error logging, file organization, optim1izations to computation and job scheduling, and allow for easy computing on large volumes of data. To this end, we have developed the Rabix Executor, an open-source workflow engine for the purposes of improving reproducibility through reusability and interoperability of workflow descriptions."
},
{
"pmid": "26334920",
"title": "Mapping RNA-seq Reads with STAR.",
"abstract": "Mapping of large sets of high-throughput sequencing reads to a reference genome is one of the foundational steps in RNA-seq data analysis. The STAR software package performs this task with high levels of accuracy and speed. In addition to detecting annotated and novel splice junctions, STAR is capable of discovering more complex RNA sequence arrangements, such as chimeric and circular RNA. STAR can align spliced sequences of any length with moderate error rates, providing scalability for emerging sequencing technologies. STAR generates output files that can be used for many downstream analyses such as transcript/gene expression quantification, differential gene expression, novel isoform reconstruction, and signal visualization. In this unit, we describe computational protocols that produce various output files, use different RNA-seq datatypes, and utilize different mapping strategies. STAR is open source software that can be run on Unix, Linux, or Mac OS X systems."
},
{
"pmid": "23104886",
"title": "STAR: ultrafast universal RNA-seq aligner.",
"abstract": "MOTIVATION\nAccurate alignment of high-throughput RNA-seq data is a challenging and yet unsolved problem because of the non-contiguous transcript structure, relatively short read lengths and constantly increasing throughput of the sequencing technologies. Currently available RNA-seq aligners suffer from high mapping error rates, low mapping speed, read length limitation and mapping biases.\n\n\nRESULTS\nTo align our large (>80 billon reads) ENCODE Transcriptome RNA-seq dataset, we developed the Spliced Transcripts Alignment to a Reference (STAR) software based on a previously undescribed RNA-seq alignment algorithm that uses sequential maximum mappable seed search in uncompressed suffix arrays followed by seed clustering and stitching procedure. STAR outperforms other aligners by a factor of >50 in mapping speed, aligning to the human genome 550 million 2 × 76 bp paired-end reads per hour on a modest 12-core server, while at the same time improving alignment sensitivity and precision. In addition to unbiased de novo detection of canonical junctions, STAR can discover non-canonical splices and chimeric (fusion) transcripts, and is also capable of mapping full-length RNA sequences. Using Roche 454 sequencing of reverse transcription polymerase chain reaction amplicons, we experimentally validated 1960 novel intergenic splice junctions with an 80-90% success rate, corroborating the high precision of the STAR mapping strategy.\n\n\nAVAILABILITY AND IMPLEMENTATION\nSTAR is implemented as a standalone C++ code. STAR is free open source software distributed under GPLv3 license and can be downloaded from http://code.google.com/p/rna-star/."
},
{
"pmid": "19505943",
"title": "The Sequence Alignment/Map format and SAMtools.",
"abstract": "SUMMARY\nThe Sequence Alignment/Map (SAM) format is a generic alignment format for storing read alignments against reference sequences, supporting short and long reads (up to 128 Mbp) produced by different sequencing platforms. It is flexible in style, compact in size, efficient in random access and is the format in which alignments from the 1000 Genomes Project are released. SAMtools implements various utilities for post-processing alignments in the SAM format, such as indexing, variant caller and alignment viewer, and thus provides universal tools for processing read alignments.\n\n\nAVAILABILITY\nhttp://samtools.sourceforge.net."
},
{
"pmid": "22539670",
"title": "RNA-SeQC: RNA-seq metrics for quality control and process optimization.",
"abstract": "UNLABELLED\nRNA-seq, the application of next-generation sequencing to RNA, provides transcriptome-wide characterization of cellular activity. Assessment of sequencing performance and library quality is critical to the interpretation of RNA-seq data, yet few tools exist to address this issue. We introduce RNA-SeQC, a program which provides key measures of data quality. These metrics include yield, alignment and duplication rates; GC bias, rRNA content, regions of alignment (exon, intron and intragenic), continuity of coverage, 3'/5' bias and count of detectable transcripts, among others. The software provides multi-sample evaluation of library construction protocols, input materials and other experimental parameters. The modularity of the software enables pipeline integration and the routine monitoring of key measures of data quality such as the number of alignable reads, duplication rates and rRNA contamination. RNA-SeQC allows investigators to make informed decisions about sample inclusion in downstream analysis. In summary, RNA-SeQC provides quality control measures critical to experiment design, process optimization and downstream computational analysis.\n\n\nAVAILABILITY AND IMPLEMENTATION\nSee www.genepattern.org to run online, or www.broadinstitute.org/rna-seqc/ for a command line tool."
},
{
"pmid": "21816040",
"title": "RSEM: accurate transcript quantification from RNA-Seq data with or without a reference genome.",
"abstract": "BACKGROUND\nRNA-Seq is revolutionizing the way transcript abundances are measured. A key challenge in transcript quantification from RNA-Seq data is the handling of reads that map to multiple genes or isoforms. This issue is particularly important for quantification with de novo transcriptome assemblies in the absence of sequenced genomes, as it is difficult to determine which transcripts are isoforms of the same gene. A second significant issue is the design of RNA-Seq experiments, in terms of the number of reads, read length, and whether reads come from one or both ends of cDNA fragments.\n\n\nRESULTS\nWe present RSEM, an user-friendly software package for quantifying gene and isoform abundances from single-end or paired-end RNA-Seq data. RSEM outputs abundance estimates, 95% credibility intervals, and visualization files and can also simulate RNA-Seq data. In contrast to other existing tools, the software does not require a reference genome. Thus, in combination with a de novo transcriptome assembler, RSEM enables accurate transcript quantification for species without sequenced genomes. On simulated and real data sets, RSEM has superior or comparable performance to quantification methods that rely on a reference genome. Taking advantage of RSEM's ability to effectively use ambiguously-mapping reads, we show that accurate gene-level abundance estimates are best obtained with large numbers of short single-end reads. On the other hand, estimates of the relative frequencies of isoforms within single genes may be improved through the use of paired-end reads, depending on the number of possible splice forms for each gene.\n\n\nCONCLUSIONS\nRSEM is an accurate and user-friendly software tool for quantifying transcript abundances from RNA-Seq data. As it does not rely on the existence of a reference genome, it is particularly useful for quantification with de novo transcriptome assemblies. In addition, RSEM has enabled valuable guidance for cost-efficient design of quantification experiments with RNA-Seq, which is currently relatively expensive."
},
{
"pmid": "22975805",
"title": "The transcriptional landscape and mutational profile of lung adenocarcinoma.",
"abstract": "All cancers harbor molecular alterations in their genomes. The transcriptional consequences of these somatic mutations have not yet been comprehensively explored in lung cancer. Here we present the first large scale RNA sequencing study of lung adenocarcinoma, demonstrating its power to identify somatic point mutations as well as transcriptional variants such as gene fusions, alternative splicing events, and expression outliers. Our results reveal the genetic basis of 200 lung adenocarcinomas in Koreans including deep characterization of 87 surgical specimens by transcriptome sequencing. We identified driver somatic mutations in cancer genes including EGFR, KRAS, NRAS, BRAF, PIK3CA, MET, and CTNNB1. Candidates for novel driver mutations were also identified in genes newly implicated in lung adenocarcinoma such as LMTK2, ARID1A, NOTCH2, and SMARCA4. We found 45 fusion genes, eight of which were chimeric tyrosine kinases involving ALK, RET, ROS1, FGFR2, AXL, and PDGFRA. Among 17 recurrent alternative splicing events, we identified exon 14 skipping in the proto-oncogene MET as highly likely to be a cancer driver. The number of somatic mutations and expression outliers varied markedly between individual cancers and was strongly correlated with smoking history of patients. We identified genomic blocks within which gene expression levels were consistently increased or decreased that could be explained by copy number alterations in samples. We also found an association between lymph node metastasis and somatic mutations in TP53. These findings broaden our understanding of lung adenocarcinoma and may also lead to new diagnostic and therapeutic approaches."
},
{
"pmid": "29552334",
"title": "A review of somatic single nucleotide variant calling algorithms for next-generation sequencing data.",
"abstract": "Detection of somatic mutations holds great potential in cancer treatment and has been a very active research field in the past few years, especially since the breakthrough of the next-generation sequencing technology. A collection of variant calling pipelines have been developed with different underlying models, filters, input data requirements, and targeted applications. This review aims to enumerate these unique features of the state-of-the-art variant callers, in the hope to provide a practical guide for selecting the appropriate pipeline for specific applications. We will focus on the detection of somatic single nucleotide variants, ranging from traditional variant callers based on whole genome or exome sequencing of paired tumor-normal samples to recent low-frequency variant callers designed for targeted sequencing protocols with unique molecular identifiers. The variant callers have been extensively benchmarked with inconsistent performances across these studies. We will review the reference materials, datasets, and performance metrics that have been used in the benchmarking studies. In the end, we will discuss emerging trends and future directions of the variant calling algorithms."
},
{
"pmid": "23203883",
"title": "Facing growth in the European Nucleotide Archive.",
"abstract": "The European Nucleotide Archive (ENA; http://www.ebi.ac.uk/ena/) collects, maintains and presents comprehensive nucleic acid sequence and related information as part of the permanent public scientific record. Here, we provide brief updates on ENA content developments and major service enhancements in 2012 and describe in more detail two important areas of development and policy that are driven by ongoing growth in sequencing technologies. First, we describe the ENA data warehouse, a resource for which we provide a programmatic entry point to integrated content across the breadth of ENA. Second, we detail our plans for the deployment of CRAM data compression technology in ENA."
},
{
"pmid": "24812344",
"title": "SAMBLASTER: fast duplicate marking and structural variant read extraction.",
"abstract": "MOTIVATION\nIllumina DNA sequencing is now the predominant source of raw genomic data, and data volumes are growing rapidly. Bioinformatic analysis pipelines are having trouble keeping pace. A common bottleneck in such pipelines is the requirement to read, write, sort and compress large BAM files multiple times.\n\n\nRESULTS\nWe present SAMBLASTER, a tool that reduces the number of times such costly operations are performed. SAMBLASTER is designed to mark duplicates in read-sorted SAM files as a piped post-pass on DNA aligner output before it is compressed to BAM. In addition, it can simultaneously output into separate files the discordant read-pairs and/or split-read mappings used for structural variant calling. As an alignment post-pass, its own runtime overhead is negligible, while dramatically reducing overall pipeline complexity and runtime. As a stand-alone duplicate marking tool, it performs significantly better than PICARD or SAMBAMBA in terms of both speed and memory usage, while achieving nearly identical results.\n\n\nAVAILABILITY AND IMPLEMENTATION\nSAMBLASTER is open-source C+ + code and freely available for download from https://github.com/GregoryFaust/samblaster."
},
{
"pmid": "22581179",
"title": "Strelka: accurate somatic small-variant calling from sequenced tumor-normal sample pairs.",
"abstract": "MOTIVATION\nWhole genome and exome sequencing of matched tumor-normal sample pairs is becoming routine in cancer research. The consequent increased demand for somatic variant analysis of paired samples requires methods specialized to model this problem so as to sensitively call variants at any practical level of tumor impurity.\n\n\nRESULTS\nWe describe Strelka, a method for somatic SNV and small indel detection from sequencing data of matched tumor-normal samples. The method uses a novel Bayesian approach which represents continuous allele frequencies for both tumor and normal samples, while leveraging the expected genotype structure of the normal. This is achieved by representing the normal sample as a mixture of germline variation with noise, and representing the tumor sample as a mixture of the normal sample with somatic variation. A natural consequence of the model structure is that sensitivity can be maintained at high tumor impurity without requiring purity estimates. We demonstrate that the method has superior accuracy and sensitivity on impure samples compared with approaches based on either diploid genotype likelihoods or general allele-frequency tests.\n\n\nAVAILABILITY\nThe Strelka workflow source code is available at ftp://[email protected]/.\n\n\nCONTACT\[email protected]"
}
] |
Digital Health | 31700652 | PMC6826916 | 10.1177/2055207619878601 | Investigation of persuasive system design predictors of competitive behavior in fitness application: A mixed-method approach | Fitness applications aimed at behavior change are becoming increasingly popular due to the global prevalence of sedentary lifestyles and physical inactivity, causing countless non-communicable diseases. Competition is one of the most common persuasive strategies employed in such applications to motivate users to engage in physical activity in a social context. However, there is limited research on the persuasive system design predictors of users’ susceptibility to competition as a persuasive strategy for motivating behavior change in a social context. To bridge this gap, we designed storyboards illustrating four of the commonly employed persuasive strategies (reward, social learning, social comparison, and competition) in fitness applications and asked potential users to evaluate their perceived persuasiveness. The result of our path analysis showed that, overall, users’ susceptibilities to social comparison (βT = 0.48, p < 0.001), reward (βT = 0.42, p < 0.001), and social learning (βT = 0.29, p < 0.01) predicted their susceptibility to competition, with our model accounting for 41% of its variance. Social comparison partially mediated the relationship between reward and competition, while social learning partially mediated the relationship between social comparison and competition. Comparatively, the relationship between reward and social learning was stronger for females than for males, whereas the relationship between reward and competition was stronger for males than for females. Overall, our findings underscore the compatibility of all four persuasive strategies in a one-size-fits-all fitness application. We discuss our findings, drawing insight from the comments provided by participants. | Summary and gaps in related workOur review shows that most of the empirical studies on social influence (e.g., Oyibo et al.21,22) are based on investigating users’ susceptibility to persuasive strategies at the level of perception, using quantitative measures. However, there are limited mixed-method (quantitative and qualitative) studies on the interrelationships among socially oriented persuasive strategies, which are commonly employed in persuasive technologies aimed at behavior change.15,23 Moreover, Oyibo and Vassileva’s11 model of competitive behavior has not been investigated in a domain-specific context to confirm its validity or verify its replicability. Our study is aimed at bridging these gaps in the extant literature, using the fitness domain as a case study, storyboards to measure users’ susceptibility to persuasive strategies, and path analysis to model their interrelationships. | [
"10620381"
] | [
{
"pmid": "10620381",
"title": "Intrinsic and Extrinsic Motivations: Classic Definitions and New Directions.",
"abstract": "Intrinsic and extrinsic types of motivation have been widely studied, and the distinction between them has shed important light on both developmental and educational practices. In this review we revisit the classic definitions of intrinsic and extrinsic motivation in light of contemporary research and theory. Intrinsic motivation remains an important construct, reflecting the natural human propensity to learn and assimilate. However, extrinsic motivation is argued to vary considerably in its relative autonomy and thus can either reflect external control or true self-regulation. The relations of both classes of motives to basic human needs for autonomy, competence and relatedness are discussed. Copyright 2000 Academic Press."
}
] |
Brain Sciences | 31652635 | PMC6826987 | 10.3390/brainsci9100289 | Computer-Aided Diagnosis System of Alzheimer’s Disease Based on Multimodal Fusion: Tissue Quantification Based on the Hybrid Fuzzy-Genetic-Possibilistic Model and Discriminative Classification Based on the SVDD Model | An improved computer-aided diagnosis (CAD) system is proposed for the early diagnosis of Alzheimer’s disease (AD) based on the fusion of anatomical (magnetic resonance imaging (MRI)) and functional (8F-fluorodeoxyglucose positron emission tomography (FDG-PET)) multimodal images, and which helps to address the strong ambiguity or the uncertainty produced in brain images. The merit of this fusion is that it provides anatomical information for the accurate detection of pathological areas characterized in functional imaging by physiological abnormalities. First, quantification of brain tissue volumes is proposed based on a fusion scheme in three successive steps: modeling, fusion and decision. (1) Modeling which consists of three sub-steps: the initialization of the centroids of the tissue clusters by applying the Bias corrected Fuzzy C-Means (FCM) clustering algorithm. Then, the optimization of the initial partition is performed by running genetic algorithms. Finally, the creation of white matter (WM), gray matter (GM) and cerebrospinal fluid (CSF) tissue maps by applying the Possibilistic FCM clustering algorithm. (2) Fusion using a possibilistic operator to merge the maps of the MRI and PET images highlighting redundancies and managing ambiguities. (3) Decision offering more representative anatomo-functional fusion images. Second, a support vector data description (SVDD) classifier is used that must reliably distinguish AD from normal aging and automatically detects outliers. The “divide and conquer” strategy is then used, which speeds up the SVDD process and reduces the load and cost of the calculating. The robustness of the tissue quantification process is proven against noise (20% level), partial volume effects and when inhomogeneities of spatial intensity are high. Thus, the superiority of the SVDD classifier over competing conventional systems is also demonstrated with the adoption of the 10-fold cross-validation approach for synthetic datasets (Alzheimer disease neuroimaging (ADNI) and Open Access Series of Imaging Studies (OASIS)) and real images. The percentage of classification in terms of accuracy, sensitivity, specificity and area under ROC curve was 93.65%, 90.08%, 92.75% and 97.3%; 91.46%, 92%, 91.78% and 96.7%; 85.09%, 86.41%, 84.92% and 94.6% in the case of the ADNI, OASIS and real images respectively. | 2.2. Related Work to Computer Aided-Diagnosis System of Alzheimer’s Disease Given the clinical accessibility of MRI clinically for neuroimaging, several studies have attempted to use images from synthetic bases such as ADNI and OASIS to exploit this anatomical modality or to associate it with various modalities using multimodal fusion techniques to improve the performance of CAD systems for AD.In [45], Huang and Lee improved the performance of the maximization mutual information (MMI) approach by providing FCM/MMI fusion for MR and SPECT multimodal images to generate the fuzzy map of brain tissue. The error functions of brain slices were performed with three refined invariants: the area, the long axis and the short axis. 
The authors have published results for ADNI images that prove the speed and accuracy of the registration with the proposed fusion approach. In [46], the authors propose an MR-PET multimodal CAD system based on the multiple-kernel learning (MKL) approach. The authors adopted the 10-fold cross-validation approach and published interesting results for the ADNI database based on classification accuracy, sensitivity, and specificity. In [47], the authors propose the hybrid Principal Component Analysis/Linear Discriminant Analysis (PCA/LDA) approach for extracting characteristics. The Fisher discriminant ratio (FDR) is then used to select the relevant characteristics. Two classifiers, the support vector machine (SVM) and the feed-forward neural network (FFNN), were used with PET images from the ADNI database. The results were based on classification accuracy, sensitivity, specificity and AUC (area under the ROC curve). The SVM outperformed the FFNN with better results. In [48], the scale-invariant feature transform (SIFT) approach is used for parameter extraction of MRI images from the OASIS database. A selection strategy was then used based on the application of Fisher's discriminant ratio and GA. SVMs with different kernels were finally applied with "leave-one-out" cross-validation. In [49], Bhavana and Krishnappa improved the discrete wavelet transform (DWT) approach by proposing the intensity hue saturation (IHS) approach to merge PET and MRI ADNI images. The authors validated the results with four performance measures: mean squared error (MSE), peak signal-to-noise ratio (PSNR), average gradient (AG), and spectral discrepancy (SD), as well as with visual observation. In [50], an MKL method that combines the MRI, FDG-PET and CSF modalities is proposed to measure brain atrophy and to quantify hypometabolism and proteins related to AD. The linear SVM classifier is then applied using 10-fold cross-validation. The published results for ADNI images showed good performance. In [51], the sparse composite LDA (SCLDA) model is proposed to identify brain regions affected by early AD. Published results for ADNI multimodal images show good performance. In [52], a random forest-based fusion approach is proposed that combines regional MRI volumes, voxel-based FDG-PET signal intensities, CSF biomarker measurements, and categorical genetic information. Published results for ADNI images are vastly better than those obtained with a single modality. In [53], a fusion method that combines the MRI and PET ADNI images is proposed. A multi-kernel SVM is then applied for classification. The published results using classification accuracy, sensitivity, specificity and AUC demonstrate good results. In [54], multilevel convolutional neural networks (CNNs) are proposed to train and combine the parameters of MRI and PET multimodal images from the ADNI database. Good performance is achieved in terms of classification accuracy, sensitivity, specificity and AUC. In [6], a very deep convolutional network is proposed and applied to MRI images from the OASIS database. The performance of the approach is demonstrated in terms of classification accuracy with five-fold cross-validation. In a previous work [5], a hybrid FCM/PCM segmentation process is proposed to evaluate the tissue volume of MR and PET images with a 20% noise level from the ADNI database. SVMs (with an RBF kernel) were then used for classification. 
The performance, evaluated with a “leave-one-out” cross-validation strategy in terms of classification accuracy, sensitivity and specificity, was good. | [
"26174219",
"29154286",
"26371347",
"27776438",
"23153970",
"11989844",
"8413011",
"25309940",
"29136034",
"24273728",
"27046893",
"28577131",
"9339497",
"18818051",
"15350623",
"16361080",
"19395212",
"21584770",
"21236349",
"23041336",
"24045077",
"12391568",
"10739558",
"17714011"
] | [
{
"pmid": "26174219",
"title": "Risk Factors for Progression of Alzheimer Disease in a Canadian Population: The Canadian Outcomes Study in Dementia (COSID).",
"abstract": "OBJECTIVE\nTo determine risk factors for clinically significant progression during 12 months in patients with mild-to-moderate Alzheimer disease.\n\n\nMETHOD\nCommunity-dwelling patients with mild-to-moderate Alzheimer disease were enrolled in a 3-year prospective study, the Canadian Outcomes Study in Dementia (commonly referred to as COSID), at 32 Canadian sites. Assessments included the Global Deterioration Scale (GDS) for disease severity, the Mini-Mental State Examination (MMSE) for cognition, the Functional Autonomy Measurement System (SMAF) for daily functioning, and the NeuroPsychiatric Inventory (NPI) for behaviour, measured at baseline and at 12 months. Logistic regression identified factors associated with GDS decline, and subsequent stepwise regression identified key independent predictors. Area under the curve (AUC) was then calculated for the model.\n\n\nRESULTS\nAmong 488 patients (mean age 76.5 years [SD 6.4], MMSE 22.1 [SD4.6], 44.1% male), 225 (46%) showed GDS decline. After adjusting for age, baseline risk factors for deterioration included the following: poorer cognition (lower MMSE score, OR 0.55; 95% CI 0.4 to 0.72 per 5 points, P ≤ 0.001), greater dependence (lower SMAF, OR 0.72; 95% CI 0.63 to 0.83 per 5 points, P ≤ 0.001), and more neuropsychiatric symptoms (higher NPI, OR 1.11; 95% CI 1.02 to 1.2 per 5 points, P = 0.02), with a protective effect of male sex (OR 0.59; 95% CI 0.39 to 0.9, P = 0.02), and higher (worse) GDS score (very mild, compared with mild OR 0.25; 95% CI 0.09 to 0.70, P ≤ 0.01; compared with moderate, OR 0.08; 95% CI 0.03 to 0.23, P < 0.001; compared with moderately severe, OR 0.03; 95% CI 0.01 to 0.11, P < 0.001). The AUC was 73% (P < 0.001) (sensitivity 90% and specificity 33%).\n\n\nCONCLUSION\nThe progression of Alzheimer disease in Canada can be predicted using readily available clinical information."
},
{
"pmid": "29154286",
"title": "Clinic-Based Validation of Cerebrospinal Fluid Biomarkers with Florbetapir PET for Diagnosis of Dementia.",
"abstract": "BACKGROUND\nCerebrospinal fluid (CSF) biomarker studies have shown variable accuracy for diagnosis of Alzheimer's disease (AD); therefore, internal validation is recommended.\n\n\nOBJECTIVE\nTo investigate the correlation between CSF biomarkers and cerebral 18-Florbetapir positron emission tomography (Amyloid-PET) and calculate their sensitivity and specificity to obtain the optimal clinical cut-off points to diagnose the etiology of cognitive impairment.\n\n\nMETHODS\nWe performed Amyloid-PET scans and CSF biomarker levels analyses in 68 subjects (50 with mild cognitive impairment, 11 with AD dementia, and 7 with non-AD dementia). Visual examination of Amyloid-PET scans was performed. CSF analyses were performed using standard sandwich ELISA.\n\n\nRESULTS\nAmyloid-PET was positive in 36 subjects, negative in 26, and inconclusive in 6. Optimal clinical cut-off points for CSF markers were the following: amyloid-β 1-42 (Aβ42) = 629 pg/ml, total tau (t-tau) = 532 pg/ml, phosphorylated tau (p-tau) = 88 pg/ml, and t-tau/Aβ42 ratio = 0.58. T-tau/Aβ42 ratio showed the best sensitivity and specificity (92 and 84%, respectively). T-tau and p-tau CSF levels (r2 = 0.867) followed by the t-tau and t-tau/Aβ42 CSF ratio (r2 = 0.666) showed the strongest inter-marker correlation. Interestingly, subjects with inconclusive Amyloid-PET showed intermediate values for all CSF markers between negative and positive Amyloid-PET groups.\n\n\nCONCLUSIONS\nCSF t-tau/Aβ42 ratio appears to be the most accurate AD CSF marker. The presence of intermediate values for CSF markers among the subjects with inconclusive Amyloid-PET suggests the presence of other dementias associated with AD pathology or intermediate phenotypes."
},
{
"pmid": "26371347",
"title": "Computer-Aided Diagnosis System for Alzheimer's Disease Using Different Discrete Transform Techniques.",
"abstract": "The different discrete transform techniques such as discrete cosine transform (DCT), discrete sine transform (DST), discrete wavelet transform (DWT), and mel-scale frequency cepstral coefficients (MFCCs) are powerful feature extraction techniques. This article presents a proposed computer-aided diagnosis (CAD) system for extracting the most effective and significant features of Alzheimer's disease (AD) using these different discrete transform techniques and MFCC techniques. Linear support vector machine has been used as a classifier in this article. Experimental results conclude that the proposed CAD system using MFCC technique for AD recognition has a great improvement for the system performance with small number of significant extracted features, as compared with the CAD system based on DCT, DST, DWT, and the hybrid combination methods of the different transform techniques."
},
{
"pmid": "27776438",
"title": "Independent Component Analysis-Support Vector Machine-Based Computer-Aided Diagnosis System for Alzheimer's with Visual Support.",
"abstract": "Computer-aided diagnosis (CAD) systems constitute a powerful tool for early diagnosis of Alzheimer's disease (AD), but limitations on interpretability and performance exist. In this work, a fully automatic CAD system based on supervised learning methods is proposed to be applied on segmented brain magnetic resonance imaging (MRI) from Alzheimer's disease neuroimaging initiative (ADNI) participants for automatic classification. The proposed CAD system possesses two relevant characteristics: optimal performance and visual support for decision making. The CAD is built in two stages: a first feature extraction based on independent component analysis (ICA) on class mean images and, secondly, a support vector machine (SVM) training and classification. The obtained features for classification offer a full graphical representation of the images, giving an understandable logic in the CAD output, that can increase confidence in the CAD support. The proposed method yields classification results up to 89% of accuracy (with 92% of sensitivity and 86% of specificity) for normal controls (NC) and AD patients, 79% of accuracy (with 82% of sensitivity and 76% of specificity) for NC and mild cognitive impairment (MCI), and 85% of accuracy (with 85% of sensitivity and 86% of specificity) for MCI and AD patients."
},
{
"pmid": "23153970",
"title": "Unbiased tensor-based morphometry: improved robustness and sample size estimates for Alzheimer's disease clinical trials.",
"abstract": "Various neuroimaging measures are being evaluated for tracking Alzheimer's disease (AD) progression in therapeutic trials, including measures of structural brain change based on repeated scanning of patients with magnetic resonance imaging (MRI). Methods to compute brain change must be robust to scan quality. Biases may arise if any scans are thrown out, as this can lead to the true changes being overestimated or underestimated. Here we analyzed the full MRI dataset from the first phase of Alzheimer's Disease Neuroimaging Initiative (ADNI-1) from the first phase of Alzheimer's Disease Neuroimaging Initiative (ADNI-1) and assessed several sources of bias that can arise when tracking brain changes with structural brain imaging methods, as part of a pipeline for tensor-based morphometry (TBM). In all healthy subjects who completed MRI scanning at screening, 6, 12, and 24months, brain atrophy was essentially linear with no detectable bias in longitudinal measures. In power analyses for clinical trials based on these change measures, only 39AD patients and 95 mild cognitive impairment (MCI) subjects were needed for a 24-month trial to detect a 25% reduction in the average rate of change using a two-sided test (α=0.05, power=80%). Further sample size reductions were achieved by stratifying the data into Apolipoprotein E (ApoE) ε4 carriers versus non-carriers. We show how selective data exclusion affects sample size estimates, motivating an objective comparison of different analysis techniques based on statistical power and robustness. TBM is an unbiased, robust, high-throughput imaging surrogate marker for large, multi-site neuroimaging studies and clinical trials of AD and MCI."
},
{
"pmid": "11989844",
"title": "A modified fuzzy C-means algorithm for bias field estimation and segmentation of MRI data.",
"abstract": "In this paper, we present a novel algorithm for fuzzy segmentation of magnetic resonance imaging (MRI) data and estimation of intensity inhomogeneities using fuzzy logic. MRI intensity inhomogeneities can be attributed to imperfections in the radio-frequency coils or to problems associated with the acquisition sequences. The result is a slowly varying shading artifact over the image that can produce errors with conventional intensity-based classification. Our algorithm is formulated by modifying the objective function of the standard fuzzy c-means (FCM) algorithm to compensate for such inhomogeneities and to allow the labeling of a pixel (voxel) to be influenced by the labels in its immediate neighborhood. The neighborhood effect acts as a regularizer and biases the solution toward piecewise-homogeneous labelings. Such a regularization is useful in segmenting scans corrupted by salt and pepper noise. Experimental results on both synthetic images and MR data are given to demonstrate the effectiveness and efficiency of the proposed algorithm."
},
{
"pmid": "8413011",
"title": "Review of MR image segmentation techniques using pattern recognition.",
"abstract": "This paper has reviewed, with somewhat variable coverage, the nine MR image segmentation techniques itemized in Table II. A wide array of approaches have been discussed; each has its merits and drawbacks. We have also given pointers to other approaches not discussed in depth in this review. The methods reviewed fall roughly into four model groups: c-means, maximum likelihood, neural networks, and k-nearest neighbor rules. Both supervised and unsupervised schemes require human intervention to obtain clinically useful results in MR segmentation. Unsupervised techniques require somewhat less interaction on a per patient/image basis. Maximum likelihood techniques have had some success, but are very susceptible to the choice of training region, which may need to be chosen slice by slice for even one patient. Generally, techniques that must assume an underlying statistical distribution of the data (such as LML and UML) do not appear promising, since tissue regions of interest do not usually obey the distributional tendencies of probability density functions. The most promising supervised techniques reviewed seem to be FF/NN methods that allow hidden layers to be configured as examples are presented to the system. An example of a self-configuring network, FF/CC, was also discussed. The relatively simple k-nearest neighbor rule algorithms (hard and fuzzy) have also shown promise in the supervised category. Unsupervised techniques based upon fuzzy c-means clustering algorithms have also shown great promise in MR image segmentation. Several unsupervised connectionist techniques have recently been experimented with on MR images of the brain and have provided promising initial results. A pixel-intensity-based edge detection algorithm has recently been used to provide promising segmentations of the brain. This is also an unsupervised technique, older versions of which have been susceptible to oversegmenting the image because of the lack of clear boundaries between tissue types or finding uninteresting boundaries between slightly different types of the same tissue. To conclude, we offer some remarks about improving MR segmentation techniques. The better unsupervised techniques are too slow. Improving speed via parallelization and optimization will improve their competitiveness with, e.g., the k-nn rule, which is the fastest technique covered in this review. Another area for development is dynamic cluster validity. Unsupervised methods need better ways to specify and adjust c, the number of tissue classes found by the algorithm. Initialization is a third important area of research. Many of the schemes listed in Table II are sensitive to good initialization, both in terms of the parameters of the design, as well as operator selection of training data.(ABSTRACT TRUNCATED AT 400 WORDS)"
},
{
"pmid": "25309940",
"title": "Brain Imaging Analysis.",
"abstract": "The increasing availability of brain imaging technologies has led to intense neuroscientific inquiry into the human brain. Studies often investigate brain function related to emotion, cognition, language, memory, and numerous other externally induced stimuli as well as resting-state brain function. Studies also use brain imaging in an attempt to determine the functional or structural basis for psychiatric or neurological disorders and, with respect to brain function, to further examine the responses of these disorders to treatment. Neuroimaging is a highly interdisciplinary field, and statistics plays a critical role in establishing rigorous methods to extract information and to quantify evidence for formal inferences. Neuroimaging data present numerous challenges for statistical analysis, including the vast amounts of data collected from each individual and the complex temporal and spatial dependence present. We briefly provide background on various types of neuroimaging data and analysis objectives that are commonly targeted in the field. We present a survey of existing methods targeting these objectives and identify particular areas offering opportunities for future statistical contribution."
},
{
"pmid": "29136034",
"title": "A fast stochastic framework for automatic MR brain images segmentation.",
"abstract": "This paper introduces a new framework for the segmentation of different brain structures (white matter, gray matter, and cerebrospinal fluid) from 3D MR brain images at different life stages. The proposed segmentation framework is based on a shape prior built using a subset of co-aligned training images that is adapted during the segmentation process based on first- and second-order visual appearance characteristics of MR images. These characteristics are described using voxel-wise image intensities and their spatial interaction features. To more accurately model the empirical grey level distribution of the brain signals, we use a linear combination of discrete Gaussians (LCDG) model having positive and negative components. To accurately account for the large inhomogeneity in infant MRIs, a higher-order Markov-Gibbs Random Field (MGRF) spatial interaction model that integrates third- and fourth- order families with a traditional second-order model is proposed. The proposed approach was tested and evaluated on 102 3D MR brain scans using three metrics: the Dice coefficient, the 95-percentile modified Hausdorff distance, and the absolute brain volume difference. Experimental results show better segmentation of MR brain images compared to current open source segmentation tools."
},
{
"pmid": "24273728",
"title": "Accurate white matter lesion segmentation by k nearest neighbor classification with tissue type priors (kNN-TTPs).",
"abstract": "INTRODUCTION\nThe segmentation and volumetric quantification of white matter (WM) lesions play an important role in monitoring and studying neurological diseases such as multiple sclerosis (MS) or cerebrovascular disease. This is often interactively done using 2D magnetic resonance images. Recent developments in acquisition techniques allow for 3D imaging with much thinner sections, but the large number of images per subject makes manual lesion outlining infeasible. This warrants the need for a reliable automated approach. Here we aimed to improve k nearest neighbor (kNN) classification of WM lesions by optimizing intensity normalization and using spatial tissue type priors (TTPs).\n\n\nMETHODS\nThe kNN-TTP method used kNN classification with 3.0 T 3DFLAIR and 3DT1 intensities as well as MNI-normalized spatial coordinates as features. Additionally, TTPs were computed by nonlinear registration of data from healthy controls. Intensity features were normalized using variance scaling, robust range normalization or histogram matching. The algorithm was then trained and evaluated using a leave-one-out experiment among 20 patients with MS against a reference segmentation that was created completely manually. The performance of each normalization method was evaluated both with and without TTPs in the feature set. Volumetric agreement was evaluated using intra-class coefficient (ICC), and voxelwise spatial agreement was evaluated using Dice similarity index (SI). Finally, the robustness of the method across different scanners and patient populations was evaluated using an independent sample of elderly subjects with hypertension.\n\n\nRESULTS\nThe intensity normalization method had a large influence on the segmentation performance, with average SI values ranging from 0.66 to 0.72 when no TTPs were used. Independent of the normalization method, the inclusion of TTPs as features increased performance particularly by reducing the lesion detection error. Best performance was achieved using variance scaled intensity features and including TTPs in the feature set: this yielded ICC = 0.93 and average SI = 0.75 ± 0.08. Validation of the method in an independent sample of elderly subjects with hypertension, yielded even higher ICC = 0.96 and SI = 0.84 ± 0.14.\n\n\nCONCLUSION\nAdding TTPs increases the performance of kNN based MS lesion segmentation methods. Best performance was achieved using variance scaling for intensity normalization and including TTPs in the feature set, showing excellent agreement with the reference segmentations across a wide range of lesion severity, irrespective of the scanner used or the pathological substrate of the lesions."
},
{
"pmid": "27046893",
"title": "Automatic Segmentation of MR Brain Images With a Convolutional Neural Network.",
"abstract": "Automatic segmentation in MR brain images is important for quantitative analysis in large-scale studies with images acquired at all ages. This paper presents a method for the automatic segmentation of MR brain images into a number of tissue classes using a convolutional neural network. To ensure that the method obtains accurate segmentation details as well as spatial consistency, the network uses multiple patch sizes and multiple convolution kernel sizes to acquire multi-scale information about each voxel. The method is not dependent on explicit features, but learns to recognise the information that is important for the classification based on training data. The method requires a single anatomical MR image only. The segmentation method is applied to five different data sets: coronal T2-weighted images of preterm infants acquired at 30 weeks postmenstrual age (PMA) and 40 weeks PMA, axial T2-weighted images of preterm infants acquired at 40 weeks PMA, axial T1-weighted images of ageing adults acquired at an average age of 70 years, and T1-weighted images of young adults acquired at an average age of 23 years. The method obtained the following average Dice coefficients over all segmented tissue classes for each data set, respectively: 0.87, 0.82, 0.84, 0.86, and 0.91. The results demonstrate that the method obtains accurate segmentations in all five sets, and hence demonstrates its robustness to differences in age and acquisition protocol."
},
{
"pmid": "28577131",
"title": "Deep Learning for Brain MRI Segmentation: State of the Art and Future Directions.",
"abstract": "Quantitative analysis of brain MRI is routine for many neurological diseases and conditions and relies on accurate segmentation of structures of interest. Deep learning-based segmentation approaches for brain MRI are gaining interest due to their self-learning and generalization ability over large amounts of data. As the deep learning architectures are becoming more mature, they gradually outperform previous state-of-the-art classical machine learning algorithms. This review aims to provide an overview of current deep learning-based segmentation approaches for quantitative brain MRI. First we review the current deep learning architectures used for segmentation of anatomical brain structures and brain lesions. Next, the performance, speed, and properties of deep learning approaches are summarized and discussed. Finally, we provide a critical assessment of the current state and identify likely future developments and trends."
},
{
"pmid": "9339497",
"title": "Medical image analysis with fuzzy models.",
"abstract": "This paper updates several recent surveys on the use of fuzzy models for segmentation and edge detection in medical image data. Our survey is divided into methods based on supervised and unsupervised learning (that is, on whether there are or are not labelled data available for supervising the computations), and is organized first and foremost by groups (that we know of!) that are active in this area. Our review is aimed more towards 'who is doing it' rather than 'how good it is'. This is partially dictated by the fact that direct comparisons of supervised and unsupervised methods is somewhat akin to comparing apples and oranges. There is a further subdivision into methods for two- and three-dimensional data and/or problems. We do not cover methods based on neural-like networks or fuzzy reasoning systems. These topics are covered in a recently published companion survey by keller et al."
},
{
"pmid": "18818051",
"title": "A modified FCM algorithm for MRI brain image segmentation using both local and non-local spatial constraints.",
"abstract": "Image segmentation is often required as a preliminary and indispensable stage in the computer aided medical image process, particularly during the clinical analysis of magnetic resonance (MR) brain images. In this paper, we present a modified fuzzy c-means (FCM) algorithm for MRI brain image segmentation. In order to reduce the noise effect during segmentation, the proposed method incorporates both the local spatial context and the non-local information into the standard FCM cluster algorithm using a novel dissimilarity index in place of the usual distance metric. The efficiency of the proposed algorithm is demonstrated by extensive segmentation experiments using both simulated and real MR images and by comparison with other state of the art algorithms."
},
{
"pmid": "15350623",
"title": "A novel kernelized fuzzy C-means algorithm with application in medical image segmentation.",
"abstract": "Image segmentation plays a crucial role in many medical imaging applications. In this paper, we present a novel algorithm for fuzzy segmentation of magnetic resonance imaging (MRI) data. The algorithm is realized by modifying the objective function in the conventional fuzzy C-means (FCM) algorithm using a kernel-induced distance metric and a spatial penalty on the membership functions. Firstly, the original Euclidean distance in the FCM is replaced by a kernel-induced distance, and thus the corresponding algorithm is derived and called as the kernelized fuzzy C-means (KFCM) algorithm, which is shown to be more robust than FCM. Then a spatial penalty is added to the objective function in KFCM to compensate for the intensity inhomogeneities of MR image and to allow the labeling of a pixel to be influenced by its neighbors in the image. The penalty term acts as a regularizer and has a coefficient ranging from zero to one. Experimental results on both synthetic and real MR images show that the proposed algorithms have better performance when noise and other artifacts are present than the standard algorithms."
},
{
"pmid": "16361080",
"title": "Fuzzy c-means clustering with spatial information for image segmentation.",
"abstract": "A conventional FCM algorithm does not fully utilize the spatial information in the image. In this paper, we present a fuzzy c-means (FCM) algorithm that incorporates spatial information into the membership function for clustering. The spatial function is the summation of the membership function in the neighborhood of each pixel under consideration. The advantages of the new method are the following: (1) it yields regions more homogeneous than those of other methods, (2) it reduces the spurious blobs, (3) it removes noisy spots, and (4) it is less sensitive to noise than other techniques. This technique is a powerful method for noisy image segmentation and works for both single and multiple-feature data with spatial information."
},
{
"pmid": "19395212",
"title": "A fully automated algorithm under modified FCM framework for improved brain MR image segmentation.",
"abstract": "Automated brain magnetic resonance image (MRI) segmentation is a complex problem especially if accompanied by quality depreciating factors such as intensity inhomogeneity and noise. This article presents a new algorithm for automated segmentation of both normal and diseased brain MRI. An entropy driven homomorphic filtering technique has been employed in this work to remove the bias field. The initial cluster centers are estimated using a proposed algorithm called histogram-based local peak merger using adaptive window. Subsequently, a modified fuzzy c-mean (MFCM) technique using the neighborhood pixel considerations is applied. Finally, a new technique called neighborhood-based membership ambiguity correction (NMAC) has been used for smoothing the boundaries between different tissue classes as well as to remove small pixel level noise, which appear as misclassified pixels even after the MFCM approach. NMAC leads to much sharper boundaries between tissues and, hence, has been found to be highly effective in prominently estimating the tissue and tumor areas in a brain MR scan. The algorithm has been validated against MFCM and FMRIB software library using MRI scans from BrainWeb. Superior results to those achieved with MFCM technique have been observed along with the collateral advantages of fully automatic segmentation, faster computation and faster convergence of the objective function."
},
{
"pmid": "21584770",
"title": "Automated diagnosis of Alzheimer disease using the scale-invariant feature transforms in magnetic resonance images.",
"abstract": "In this paper we present an automated method for diagnosing Alzheimer disease (AD) from brain MR images. The approach uses the scale-invariant feature transforms (SIFT) extracted from different slices in MR images for both healthy subjects and subjects with Alzheimer disease. These features are then clustered in a group of features which they can be used to transform a full 3-dimensional image from a subject to a histogram of these features. A feature selection strategy was used to select those bins from these histograms that contribute most in classifying the two groups. This was done by ranking the features using the Fisher's discriminant ratio and a feature subset selection strategy using the genetic algorithm. These selected bins of the histograms are then used for the classification of healthy/patient subjects from MR images. Support vector machines with different kernels were applied to the data for the discrimination of the two groups, namely healthy subjects and patients diagnosed by AD. The results indicate that the proposed method can be used for diagnose of AD from MR images with the accuracy of %86 for the subjects aged from 60 to 80 years old and with mild AD."
},
{
"pmid": "21236349",
"title": "Multimodal classification of Alzheimer's disease and mild cognitive impairment.",
"abstract": "Effective and accurate diagnosis of Alzheimer's disease (AD), as well as its prodromal stage (i.e., mild cognitive impairment (MCI)), has attracted more and more attention recently. So far, multiple biomarkers have been shown to be sensitive to the diagnosis of AD and MCI, i.e., structural MR imaging (MRI) for brain atrophy measurement, functional imaging (e.g., FDG-PET) for hypometabolism quantification, and cerebrospinal fluid (CSF) for quantification of specific proteins. However, most existing research focuses on only a single modality of biomarkers for diagnosis of AD and MCI, although recent studies have shown that different biomarkers may provide complementary information for the diagnosis of AD and MCI. In this paper, we propose to combine three modalities of biomarkers, i.e., MRI, FDG-PET, and CSF biomarkers, to discriminate between AD (or MCI) and healthy controls, using a kernel combination method. Specifically, ADNI baseline MRI, FDG-PET, and CSF data from 51AD patients, 99 MCI patients (including 43 MCI converters who had converted to AD within 18 months and 56 MCI non-converters who had not converted to AD within 18 months), and 52 healthy controls are used for development and validation of our proposed multimodal classification method. In particular, for each MR or FDG-PET image, 93 volumetric features are extracted from the 93 regions of interest (ROIs), automatically labeled by an atlas warping algorithm. For CSF biomarkers, their original values are directly used as features. Then, a linear support vector machine (SVM) is adopted to evaluate the classification accuracy, using a 10-fold cross-validation. As a result, for classifying AD from healthy controls, we achieve a classification accuracy of 93.2% (with a sensitivity of 93% and a specificity of 93.3%) when combining all three modalities of biomarkers, and only 86.5% when using even the best individual modality of biomarkers. Similarly, for classifying MCI from healthy controls, we achieve a classification accuracy of 76.4% (with a sensitivity of 81.8% and a specificity of 66%) for our combined method, and only 72% even using the best individual modality of biomarkers. Further analysis on MCI sensitivity of our combined method indicates that 91.5% of MCI converters and 73.4% of MCI non-converters are correctly classified. Moreover, we also evaluate the classification performance when employing a feature selection method to select the most discriminative MR and FDG-PET features. Again, our combined method shows considerably better performance, compared to the case of using an individual modality of biomarkers."
},
{
"pmid": "23041336",
"title": "Random forest-based similarity measures for multi-modal classification of Alzheimer's disease.",
"abstract": "Neurodegenerative disorders, such as Alzheimer's disease, are associated with changes in multiple neuroimaging and biological measures. These may provide complementary information for diagnosis and prognosis. We present a multi-modality classification framework in which manifolds are constructed based on pairwise similarity measures derived from random forest classifiers. Similarities from multiple modalities are combined to generate an embedding that simultaneously encodes information about all the available features. Multi-modality classification is then performed using coordinates from this joint embedding. We evaluate the proposed framework by application to neuroimaging and biological data from the Alzheimer's Disease Neuroimaging Initiative (ADNI). Features include regional MRI volumes, voxel-based FDG-PET signal intensities, CSF biomarker measures, and categorical genetic information. Classification based on the joint embedding constructed using information from all four modalities out-performs the classification based on any individual modality for comparisons between Alzheimer's disease patients and healthy controls, as well as between mild cognitive impairment patients and healthy controls. Based on the joint embedding, we achieve classification accuracies of 89% between Alzheimer's disease patients and healthy controls, and 75% between mild cognitive impairment patients and healthy controls. These results are comparable with those reported in other recent studies using multi-kernel learning. Random forests provide consistent pairwise similarity measures for multiple modalities, thus facilitating the combination of different types of feature data. We demonstrate this by application to data in which the number of features differs by several orders of magnitude between modalities. Random forest classifiers extend naturally to multi-class problems, and the framework described here could be applied to distinguish between multiple patient groups in the future."
},
{
"pmid": "24045077",
"title": "Inter-modality relationship constrained multi-modality multi-task feature selection for Alzheimer's Disease and mild cognitive impairment identification.",
"abstract": "Previous studies have demonstrated that the use of integrated information from multi-modalities could significantly improve diagnosis of Alzheimer's Disease (AD). However, feature selection, which is one of the most important steps in classification, is typically performed separately for each modality, which ignores the potentially strong inter-modality relationship within each subject. Recent emergence of multi-task learning approach makes the joint feature selection from different modalities possible. However, joint feature selection may unfortunately overlook different yet complementary information conveyed by different modalities. We propose a novel multi-task feature selection method to preserve the complementary inter-modality information. Specifically, we treat feature selection from each modality as a separate task and further impose a constraint for preserving the inter-modality relationship, besides separately enforcing the sparseness of the selected features from each modality. After feature selection, a multi-kernel support vector machine (SVM) is further used to integrate the selected features from each modality for classification. Our method is evaluated using the baseline PET and MRI images of subjects obtained from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Our method achieves a good performance, with an accuracy of 94.37% and an area under the ROC curve (AUC) of 0.9724 for AD identification, and also an accuracy of 78.80% and an AUC of 0.8284 for mild cognitive impairment (MCI) identification. Moreover, the proposed method achieves an accuracy of 67.83% and an AUC of 0.6957 for separating between MCI converters and MCI non-converters (to AD). These performances demonstrate the superiority of the proposed method over the state-of-the-art classification methods."
},
{
"pmid": "12391568",
"title": "Fast robust automated brain extraction.",
"abstract": "An automated method for segmenting magnetic resonance head images into brain and non-brain has been developed. It is very robust and accurate and has been tested on thousands of data sets from a wide variety of scanners and taken with a wide variety of MR sequences. The method, Brain Extraction Tool (BET), uses a deformable model that evolves to fit the brain's surface by the application of a set of locally adaptive model forces. The method is very fast and requires no preregistration or other pre-processing before being applied. We describe the new method and give examples of results and the results of extensive quantitative testing against \"gold-standard\" hand segmentations, and two other popular automated methods."
},
{
"pmid": "10739558",
"title": "Tissue segmentation on MR images of the brain by possibilistic clustering on a 3D wavelet representation.",
"abstract": "An algorithm for the segmentation of a single sequence of three-dimensional magnetic resonance (MR) images into cerebrospinal fluid, gray matter, and white matter classes is proposed. This new method is a possibilistic clustering algorithm using the fuzzy theory as frame and the wavelet coefficients of the voxels as features to be clustered. Fuzzy logic models the uncertainty and imprecision inherent in MR images of the brain, while the wavelet representation allows for both spatial and textural information. The procedure is fast, unsupervised, and totally independent of any statistical assumptions. The method is tested on a phantom image, then applied to normal and Alzheimer's brains, and finally compared with another classic brain tissue segmentation method, affording a relevant classification of voxels into the different tissue classes."
},
{
"pmid": "17714011",
"title": "Open Access Series of Imaging Studies (OASIS): cross-sectional MRI data in young, middle aged, nondemented, and demented older adults.",
"abstract": "The Open Access Series of Imaging Studies is a series of magnetic resonance imaging data sets that is publicly available for study and analysis. The initial data set consists of a cross-sectional collection of 416 subjects aged 18 to 96 years. One hundred of the included subjects older than 60 years have been clinically diagnosed with very mild to moderate Alzheimer's disease. The subjects are all right-handed and include both men and women. For each subject, three or four individual T1-weighted magnetic resonance imaging scans obtained in single imaging sessions are included. Multiple within-session acquisitions provide extremely high contrast-to-noise ratio, making the data amenable to a wide range of analytic approaches including automated computational analysis. Additionally, a reliability data set is included containing 20 subjects without dementia imaged on a subsequent visit within 90 days of their initial session. Automated calculation of whole-brain volume and estimated total intracranial volume are presented to demonstrate use of the data for measuring differences associated with normal aging and Alzheimer's disease."
}
] |
Genes | 31627420 | PMC6827155 | 10.3390/genes10100817 | Detection of Microaneurysms in Fundus Images Based on an Attention Mechanism | Microaneurysms (MAs) are the earliest detectable diabetic retinopathy (DR) lesions. Thus, the ability to automatically detect MAs is critical for the early diagnosis of DR. However, achieving the accurate and reliable detection of MAs remains a significant challenge due to the size and complexity of retinal fundus images. Therefore, this paper presents a novel MA detection method based on a deep neural network with a multilayer attention mechanism for retinal fundus images. First, a series of equalization operations are performed to improve the quality of the fundus images. Then, based on the attention mechanism, multiple feature layers with obvious target features are fused to achieve preliminary MA detection. Finally, the spatial relationships between MAs and blood vessels are utilized to perform a secondary screening of the preliminary test results to obtain the final MA detection results. We evaluated the method on the IDRiD_VOC dataset, which was collected from the open IDRiD dataset. The results show that our method effectively improves the average accuracy and sensitivity of MA detection. | 2.2. Related Work2.2.1. Current Object Detection MethodsAt present, common object detection methods include both traditional machine learning methods and deep learning methods. Traditional object detection methods are mostly based on a sliding window model, and they extract and match manually designed features. Representative traditional methods include the AdaBoost algorithm with Haar features [7,8,9,10], the support vector machine (SVM) algorithm with histogram of oriented gradients (HOG) features [11,12,13], and the data projection method (DPM) [14,15,16] algorithm. Object detection algorithms are based on deep learning extract features autonomously using a deep neural network. Representative deep learning methods include the various versions of region-based convolutional neural networks (R-CNNs) [17,18,19], the various versions of You Only Look Once (YOLO) [20,21,22], and the Single Shot Detector (SSD) [23]. The traditional methods have various shortcomings: they are overly simplistic, require complex calculations, have insufficient applicability, and suffer from poor detection accuracy and speed. By contrast, deep learning methods have shown greater advantages in these respects.Small object detection methods based on deep learning mostly use the image pyramid method for feature fusion, which is widely used in digital signal analysis [24,25]. The authors of [26] proposed a general strategy for selecting the template size to be used in detection. First, the input image is transformed into different scales to construct an image pyramid. These images are later used as input to a convolutional neural network (CNN) for training. Then, the feature size that is best suited to the small target detection task is selected to improve the accuracy of small target detection. The authors of [27] proposed a feature fusion method for SSD whose core idea is to fuse multilevel features to acquire contextual information. This method fuses the features from layers conv4_3 and conv5_3 and then uses a high-level feature map to enhance the semantic information of the low-level feature maps. This approach increases the accuracy of small target detection. In [28], a feature pyramid method was proposed. 
By combining high-level features with low-level features from earlier layers through downsampling, feature layers with different resolutions can be endowed with rich semantic information, and detection can be performed based on each layer separately. On the basis of the SSD and Feature Pyramid Network (FPN) approaches, the authors in [29] added shallow features to the feature fusion queue for small target detection. MA detection methods based on deep learning mostly employ image segmentation or pixel-by-pixel classification. The model proposed in [30] uses a single CNN to segment pathological features such as exudate, hemorrhage and MA features but does not consider the specificity of different lesions. The authors of [31] were the first to perform vascular removal during the extraction of candidate MAs. However, the incomplete removal of blood vessels caused the model to easily confuse the remaining blood vessel regions with real MAs. The authors of [32] proposed a method for using a CNN’s ability to output probabilities to output a class probability for each pixel. This method was able to simultaneously detect exudation, hemorrhage, and MAs based on the output probability map. In [33], the use of geometrical properties based on connected regions was shown to enable the distinction between lesion pixels and non-lesion pixels. However, pixel-by-pixel classification methods are computationally intensive, and they lack consideration of the surrounding environment. All of the deep-learning-based object detection methods described above are unsuitable for use in the MA detection task. In [17,18,19,20,21], image features were extracted by a CNN, followed by a regression analysis and a confidence calculation of the prediction frame based on the features obtained from the last layer. In [23,26], multiscale targets were detected by inputting feature maps or images of different scales into the network. However, none of these methods are able to focus on the feature information corresponding to small targets; they ignore the correlations between feature layers at different levels, which can easily lead to a loss of shallow features and a reduction in the ability of the model to detect small targets. Several studies [22,27,28,29] have attempted to fuse different feature layers to obtain more abundant image information. For the merging of different pyramid feature layers, convergence schemes such as concat (combination of channels) and eltsum (merging of feature maps) are usually considered. However, when the features are combined, all the information is processed uniformly, and the relative importance of different feature layers and different receptive field information is not considered. At the same time, the idea of SSD, which uses deeper feature maps (conv9-11) for prediction, has been widely adopted. However, unless lower-layer feature maps are included, this approach cannot improve the detection of small targets. To make the model pay more attention to image features that are specifically related to small target detection, an attention mechanism is adopted in this paper. 2.2.2. Attention Mechanism Attention mechanisms were originally developed to solve the problem of model distraction during machine translation. Bahdanau et al. [34] were the first to use an attention mechanism to achieve differences in the contributions of different input sequences to the output sequence. 
Subsequently, attention mechanisms have undergone continuous development, and they are widely used in various natural language processing tasks [35,36,37,38,39].Attention mechanisms are also commonly used in image processing, especially in image classification [40,41,42], semantic segmentation [43,44], object detection, and similar tasks. An attention network was proposed in [45] that provides a quantitative weak direction for the object search, ensures that the prediction set will iteratively converge to an accurate object boundary frame, and achieves more accurate object detection. Based on Faster R-CNN, Dai et al. [46] proposed position-sensitive region-of-interest (RoI) pooling, a type of attention mechanism that incorporates spatial information to solve the location sensitivity problem in target detection. Liu et al. [47] applied an attention mechanism for the detection of human heads in images for the estimation of population density.In recent years, attention mechanisms for medical image processing have begun to emerge. Zhang et al. [48] introduced an attention enhancement module (AAS) to assist an attention module in generating a more efficient attention map. The authors of [49] proposed an attention-based CNN for glaucoma detection. Tang et al. [50] proposed an attention mechanism for 3D medical image segmentation, in which a cascaded detection module followed by a segmentation module was applied to produce a set of object region candidates. Nonetheless, very few algorithms have been proposed in which an attention mechanism is applied to MA detection, and this gap in the literature motivates the proposal of our framework. | [
"28968557",
"24290931",
"20949097",
"22609437",
"26425849",
"27564376",
"30400869",
"24529636",
"17495995",
"19822469",
"23956787"
] | [
{
"pmid": "28968557",
"title": "A new unified framework for the early detection of the progression to diabetic retinopathy from fundus images.",
"abstract": "Human retina is a diverse and important tissue, vastly studied for various retinal and other diseases. Diabetic retinopathy (DR), a leading cause of blindness, is one of them. This work proposes a novel and complete framework for the accurate and robust extraction and analysis of a series of retinal vascular geometric features. It focuses on studying the registered bifurcations in successive years of progression from diabetes (no DR) to DR, in order to identify the vascular alterations. Retinal fundus images are utilised, and multiple experimental designs are employed. The framework includes various steps, such as image registration and segmentation, extraction of features, statistical analysis and classification models. Linear mixed models are utilised for making the statistical inferences, alongside the elastic-net logistic regression, boruta algorithm, and regularised random forests for the feature selection and classification phases, in order to evaluate the discriminative potential of the investigated features and also build classification models. A number of geometric features, such as the central retinal artery and vein equivalents, are found to differ significantly across the experiments and also have good discriminative potential. The classification systems yield promising results with the area under the curve values ranging from 0.821 to 0.968, across the four different investigated combinations."
},
{
"pmid": "24290931",
"title": "Computer-aided diagnosis of diabetic retinopathy: a review.",
"abstract": "Diabetes mellitus may cause alterations in the retinal microvasculature leading to diabetic retinopathy. Unchecked, advanced diabetic retinopathy may lead to blindness. It can be tedious and time consuming to decipher subtle morphological changes in optic disk, microaneurysms, hemorrhage, blood vessels, macula, and exudates through manual inspection of fundus images. A computer aided diagnosis system can significantly reduce the burden on the ophthalmologists and may alleviate the inter and intra observer variability. This review discusses the available methods of various retinal feature extractions and automated analysis."
},
{
"pmid": "20949097",
"title": "A computational framework for influenza antigenic cartography.",
"abstract": "Influenza viruses have been responsible for large losses of lives around the world and continue to present a great public health challenge. Antigenic characterization based on hemagglutination inhibition (HI) assay is one of the routine procedures for influenza vaccine strain selection. However, HI assay is only a crude experiment reflecting the antigenic correlations among testing antigens (viruses) and reference antisera (antibodies). Moreover, antigenic characterization is usually based on more than one HI dataset. The combination of multiple datasets results in an incomplete HI matrix with many unobserved entries. This paper proposes a new computational framework for constructing an influenza antigenic cartography from this incomplete matrix, which we refer to as Matrix Completion-Multidimensional Scaling (MC-MDS). In this approach, we first reconstruct the HI matrices with viruses and antibodies using low-rank matrix completion, and then generate the two-dimensional antigenic cartography using multidimensional scaling. Moreover, for influenza HI tables with herd immunity effect (such as those from Human influenza viruses), we propose a temporal model to reduce the inherent temporal bias of HI tables caused by herd immunity. By applying our method in HI datasets containing H3N2 influenza A viruses isolated from 1968 to 2003, we identified eleven clusters of antigenic variants, representing all major antigenic drift events in these 36 years. Our results showed that both the completed HI matrix and the antigenic cartography obtained via MC-MDS are useful in identifying influenza antigenic variants and thus can be used to facilitate influenza vaccine strain selection. The webserver is available at http://sysbio.cvm.msstate.edu/AntigenMap."
},
{
"pmid": "22609437",
"title": "Identifying antigenicity-associated sites in highly pathogenic H5N1 influenza virus hemagglutinin by using sparse learning.",
"abstract": "Since the isolation of A/goose/Guangdong/1/1996 (H5N1) in farmed geese in southern China, highly pathogenic H5N1 avian influenza viruses have posed a continuous threat to both public and animal health. The non-synonymous mutation of the H5 hemagglutinin (HA) gene has resulted in antigenic drift, leading to difficulties in both clinical diagnosis and vaccine strain selection. Characterizing H5N1's antigenic profiles would help resolve these problems. In this study, a novel sparse learning method was developed to identify antigenicity-associated sites in influenza A viruses on the basis of immunologic data sets (i.e., from hemagglutination inhibition and microneutralization assays) and HA protein sequences. Twenty-one potential antigenicity-associated sites were identified. A total of 17 H5N1 mutants were used to validate the effects of 11 of these predicted sites on H5N1's antigenicity, including 7 newly identified sites not located in reported antibody binding sites. The experimental data confirmed that mutations of these tested sites lead to changes in viral antigenicity, validating our method."
},
{
"pmid": "26425849",
"title": "Results of Automated Retinal Image Analysis for Detection of Diabetic Retinopathy from the Nakuru Study, Kenya.",
"abstract": "OBJECTIVE\nDigital retinal imaging is an established method of screening for diabetic retinopathy (DR). It has been established that currently about 1% of the world's blind or visually impaired is due to DR. However, the increasing prevalence of diabetes mellitus and DR is creating an increased workload on those with expertise in grading retinal images. Safe and reliable automated analysis of retinal images may support screening services worldwide. This study aimed to compare the Iowa Detection Program (IDP) ability to detect diabetic eye diseases (DED) to human grading carried out at Moorfields Reading Centre on the population of Nakuru Study from Kenya.\n\n\nPARTICIPANTS\nRetinal images were taken from participants of the Nakuru Eye Disease Study in Kenya in 2007/08 (n = 4,381 participants [NW6 Topcon Digital Retinal Camera]).\n\n\nMETHODS\nFirst, human grading was performed for the presence or absence of DR, and for those with DR this was sub-divided in to referable or non-referable DR. The automated IDP software was deployed to identify those with DR and also to categorize the severity of DR.\n\n\nMAIN OUTCOME MEASURES\nThe primary outcomes were sensitivity, specificity, and positive and negative predictive value of IDP versus the human grader as reference standard.\n\n\nRESULTS\nAltogether 3,460 participants were included. 113 had DED, giving a prevalence of 3.3% (95% CI, 2.7-3.9%). Sensitivity of the IDP to detect DED as by the human grading was 91.0% (95% CI, 88.0-93.4%). The IDP ability to detect DED gave an AUC of 0.878 (95% CI 0.850-0.905). It showed a negative predictive value of 98%. The IDP missed no vision threatening retinopathy in any patients and none of the false negative cases met criteria for treatment.\n\n\nCONCLUSIONS\nIn this epidemiological sample, the IDP's grading was comparable to that of human graders'. It therefore might be feasible to consider inclusion into usual epidemiological grading."
},
{
"pmid": "27564376",
"title": "Retinal Microaneurysms Detection Using Gradient Vector Analysis and Class Imbalance Classification.",
"abstract": "Retinal microaneurysms (MAs) are the earliest clinically observable lesions of diabetic retinopathy. Reliable automated MAs detection is thus critical for early diagnosis of diabetic retinopathy. This paper proposes a novel method for the automated MAs detection in color fundus images based on gradient vector analysis and class imbalance classification, which is composed of two stages, i.e. candidate MAs extraction and classification. In the first stage, a candidate MAs extraction algorithm is devised by analyzing the gradient field of the image, in which a multi-scale log condition number map is computed based on the gradient vectors for vessel removal, and then the candidate MAs are localized according to the second order directional derivatives computed in different directions. Due to the complexity of fundus image, besides a small number of true MAs, there are also a large amount of non-MAs in the extracted candidates. Classifying the true MAs and the non-MAs is an extremely class imbalanced classification problem. Therefore, in the second stage, several types of features including geometry, contrast, intensity, edge, texture, region descriptors and other features are extracted from the candidate MAs and a class imbalance classifier, i.e., RUSBoost, is trained for the MAs classification. With the Retinopathy Online Challenge (ROC) criterion, the proposed method achieves an average sensitivity of 0.433 at 1/8, 1/4, 1/2, 1, 2, 4 and 8 false positives per image on the ROC database, which is comparable with the state-of-the-art approaches, and 0.321 on the DiaRetDB1 V2.1 database, which outperforms the state-of-the-art approaches."
},
{
"pmid": "30400869",
"title": "Fundus images analysis using deep features for detection of exudates, hemorrhages and microaneurysms.",
"abstract": "BACKGROUND\nConvolution neural networks have been considered for automatic analysis of fundus images to detect signs of diabetic retinopathy but suffer from low sensitivity.\n\n\nMETHODS\nThis study has proposed an alternate method using probabilistic output from Convolution neural network to automatically and simultaneously detect exudates, hemorrhages and microaneurysms. The method was evaluated using two approaches: patch and image-based analysis of the fundus images on two public databases: DIARETDB1 and e-Ophtha. The novelty of the proposed method is that the images were analyzed using probability maps generated by score values of the softmax layer instead of the use of the binary output.\n\n\nRESULTS\nThe sensitivity of the proposed approach was 0.96, 0.84 and 0.85 for detection of exudates, hemorrhages and microaneurysms, respectively when considering patch-based analysis. The results show overall accuracy for DIARETDB1 was 97.3% and 86.6% for e-Ophtha. The error rate for image-based analysis was also significantly reduced when compared with other works.\n\n\nCONCLUSION\nThe proposed method provides the framework for convolution neural network-based analysis of fundus images to identify exudates, hemorrhages, and microaneurysms. It obtained accuracy and sensitivity which were significantly better than the reported studies and makes it suitable for automatic diabetic retinopathy signs detection."
},
{
"pmid": "24529636",
"title": "Automated detection of microaneurysms using scale-adapted blob analysis and semi-supervised learning.",
"abstract": "Despite several attempts, automated detection of microaneurysm (MA) from digital fundus images still remains to be an open issue. This is due to the subtle nature of MAs against the surrounding tissues. In this paper, the microaneurysm detection problem is modeled as finding interest regions or blobs from an image and an automatic local-scale selection technique is presented. Several scale-adapted region descriptors are introduced to characterize these blob regions. A semi-supervised based learning approach, which requires few manually annotated learning examples, is also proposed to train a classifier which can detect true MAs. The developed system is built using only few manually labeled and a large number of unlabeled retinal color fundus images. The performance of the overall system is evaluated on Retinopathy Online Challenge (ROC) competition database. A competition performance measure (CPM) of 0.364 shows the competitiveness of the proposed system against state-of-the art techniques as well as the applicability of the proposed features to analyze fundus images."
},
{
"pmid": "17495995",
"title": "Nucleotide composition string selection in HIV-1 subtyping using whole genomes.",
"abstract": "MOTIVATION\nThe availability of the whole genomic sequences of HIV-1 viruses provides an excellent resource for studying the HIV-1 phylogenies using all the genetic materials. However, such huge volumes of data create computational challenges in both memory consumption and CPU usage.\n\n\nRESULTS\nWe propose the complete composition vector representation for an HIV-1 strain, and a string scoring method to extract the nucleotide composition strings that contain the richest evolutionary information for phylogenetic analysis. In this way, a large-scale whole genome phylogenetic analysis for thousands of strains can be done both efficiently and effectively. By using 42 carefully curated strains as references, we apply our method to subtype 1156 HIV-1 strains (10.5 million nucleotides in total), which include 825 pure subtype strains and 331 recombinants. Our results show that our nucleotide composition string selection scheme is computationally efficient, and is able to define both pure subtypes and recombinant forms for HIV-1 strains using the 5000 top ranked nucleotide strings.\n\n\nAVAILABILITY\nThe Java executable and the HIV-1 datasets are accessible through 'http://www.cs.ualberta.ca/~ghlin/src/WebTools/hiv.php.\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "19822469",
"title": "Retinopathy online challenge: automatic detection of microaneurysms in digital color fundus photographs.",
"abstract": "The detection of microaneurysms in digital color fundus photographs is a critical first step in automated screening for diabetic retinopathy (DR), a common complication of diabetes. To accomplish this detection numerous methods have been published in the past but none of these was compared with each other on the same data. In this work we present the results of the first international microaneurysm detection competition, organized in the context of the Retinopathy Online Challenge (ROC), a multiyear online competition for various aspects of DR detection. For this competition, we compare the results of five different methods, produced by five different teams of researchers on the same set of data. The evaluation was performed in a uniform manner using an algorithm presented in this work. The set of data used for the competition consisted of 50 training images with available reference standard and 50 test images where the reference standard was withheld by the organizers (M. Niemeijer, B. van Ginneken, and M. D. Abràmoff). The results obtained on the test data was submitted through a website after which standardized evaluation software was used to determine the performance of each of the methods. A human expert detected microaneurysms in the test set to allow comparison with the performance of the automatic methods. The overall results show that microaneurysm detection is a challenging task for both the automatic methods as well as the human expert. There is room for improvement as the best performing system does not reach the performance of the human expert. The data associated with the ROC microaneurysm detection competition will remain publicly available and the website will continue accepting submissions."
},
{
"pmid": "23956787",
"title": "Constructing benchmark databases and protocols for medical image analysis: diabetic retinopathy.",
"abstract": "We address the performance evaluation practices for developing medical image analysis methods, in particular, how to establish and share databases of medical images with verified ground truth and solid evaluation protocols. Such databases support the development of better algorithms, execution of profound method comparisons, and, consequently, technology transfer from research laboratories to clinical practice. For this purpose, we propose a framework consisting of reusable methods and tools for the laborious task of constructing a benchmark database. We provide a software tool for medical image annotation helping to collect class label, spatial span, and expert's confidence on lesions and a method to appropriately combine the manual segmentations from multiple experts. The tool and all necessary functionality for method evaluation are provided as public software packages. As a case study, we utilized the framework and tools to establish the DiaRetDB1 V2.1 database for benchmarking diabetic retinopathy detection algorithms. The database contains a set of retinal images, ground truth based on information from multiple experts, and a baseline algorithm for the detection of retinopathy lesions."
}
] |
Scientific Reports | 31695060 | PMC6834855 | 10.1038/s41598-019-52196-4 | Athena: Automated Tuning of k-mer based Genomic Error Correction Algorithms using Language Models | The performance of most error-correction (EC) algorithms that operate on genomics reads is dependent on the proper choice of its configuration parameters, such as the value of k in k-mer based techniques. In this work, we target the problem of finding the best values of these configuration parameters to optimize error correction and consequently improve genome assembly. We perform this in an adaptive manner, adapted to different datasets and to EC tools, due to the observation that different configuration parameters are optimal for different datasets, i.e., from different platforms and species, and vary with the EC algorithm being applied. We use language modeling techniques from the Natural Language Processing (NLP) domain in our algorithmic suite, Athena, to automatically tune the performance-sensitive configuration parameters. Through the use of N-Gram and Recurrent Neural Network (RNN) language modeling, we validate the intuition that the EC performance can be computed quantitatively and efficiently using the “perplexity” metric, repurposed from NLP. After training the language model, we show that the perplexity metric calculated from a sample of the test (or production) data has a strong negative correlation with the quality of error correction of erroneous NGS reads. Therefore, we use the perplexity metric to guide a hill climbing-based search, converging toward the best configuration parameter value. Our approach is suitable for both de novo and comparative sequencing (resequencing), eliminating the need for a reference genome to serve as the ground truth. We find that Athena can automatically find the optimal value of k with a very high accuracy for 7 real datasets and using 3 different k-mer based EC algorithms, Lighter, Blue, and Racer. The inverse relation between the perplexity metric and alignment rate exists under all our tested conditions—for real and synthetic datasets, for all kinds of sequencing errors (insertion, deletion, and substitution), and for high and low error rates. The absolute value of that correlation is at least 73%. In our experiments, the best value of k found by Athena achieves an alignment rate within 0.53% of the oracle best value of k found through brute force searching (i.e., scanning through the entire range of k values). Athena’s selected value of k lies within the top-3 best k values using N-Gram models and the top-5 best k values using RNN models With best parameter selection by Athena, the assembly quality (NG50) is improved by a Geometric Mean of 4.72X across the 7 real datasets. | Related WorkError correction approachesEC tools can be mainly divided into three categories: k-spectrum based, suffix tree/array-based, and multiple sequence alignment-based (MSA) methods. Each tool takes one or more configuration parameters. While we have experimented with Athena applied to the first kind, with some engineering effort, it can be applied to tuning tools that belong to the other two categories.Language modeling in genomicsIn the genomics domain, LM was used in38 to find the characteristics of organisms in which N-Gram analysis was applied to 44 different bacterial and archaeal genomes and to the human genome. In subsequent work, they used N-Gram-based LM for extracting patterns from whole genome sequences. Others39 have used LM to enhance domain recognition in protein sequences. 
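To give a concrete sense of the perplexity signal described in the abstract, the sketch below trains a toy character-level bigram language model over nucleotide reads and scores held-out reads by perplexity (lower means the reads fit the model better). The bigram order, add-one smoothing, and toy reads are assumptions chosen for brevity; they are not Athena's tokenization or implementation.

```python
import math
from collections import Counter

ALPHABET = "ACGT"

def train_bigram(reads):
    """Count unigram and bigram frequencies over nucleotide reads."""
    uni, bi = Counter(), Counter()
    for r in reads:
        for a, b in zip(r, r[1:]):
            uni[a] += 1
            bi[(a, b)] += 1
    return uni, bi

def perplexity(reads, uni, bi, pseudocount=1.0):
    """Add-one smoothed bigram perplexity of a set of reads (lower = better fit)."""
    log_prob, n_tokens = 0.0, 0
    for r in reads:
        for a, b in zip(r, r[1:]):
            p = (bi[(a, b)] + pseudocount) / (uni[a] + pseudocount * len(ALPHABET))
            log_prob += math.log(p)
            n_tokens += 1
    return math.exp(-log_prob / max(n_tokens, 1))

# Toy usage: reads that fit the trained model better receive lower perplexity.
train_reads = ["ACGTACGTACGT", "ACGTACGGACGT", "TTACGTACGTAA"]
test_reads = ["ACGTACGTAC", "GGGGCCCCGG"]
uni, bi = train_bigram(train_reads)
for r in test_reads:
    print(r, round(perplexity([r], uni, bi), 2))
```

Roughly speaking, Athena computes a perplexity of this kind (with N-Gram or RNN models) on a sample of reads corrected under a candidate value of k, and the hill-climbing search moves toward the k that minimizes it.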
For example, one study40 used N-Gram analysis specifically to create a Bayesian classifier to predict the localization of a protein sequence over 10 distinct eukaryotic organisms. RNNs can be thought of as a generalization of Hidden Markov Models (HMMs), and HMMs have been applied in several studies that seek to annotate epigenomic data. For example, one study41 presents a fast method using spectral learning with HMMs for annotating chromatin states in the human genome. Thus, we are seeing a steady rise in the use of ML techniques, traditionally used in NLP, being used to make sense of -omics data.Automatic parameter tuningA Feature-based Accuracy Estimator has been used as a parameter advisor for the Opal aligner software42,43. The field of computer systems has had several successful solutions for automatic configuration tuning of complex software systems. Our own work44, along with others45, has shown how to do this for distributed databases, while other works have done this for distributed computing frameworks like Hadoop46,47 or cloud configurations48. We take inspiration from them, but our constraints and requirements are different (such as avoiding reliance on ground-truth corrected sequences). | [
"21114842",
"25398208",
"28821237",
"22388286",
"3294162",
"12668763",
"17472741",
"25583448"
] | [
{
"pmid": "21114842",
"title": "Quake: quality-aware detection and correction of sequencing errors.",
"abstract": "We introduce Quake, a program to detect and correct errors in DNA sequencing reads. Using a maximum likelihood approach incorporating quality values and nucleotide specific miscall rates, Quake achieves the highest accuracy on realistically simulated reads. We further demonstrate substantial improvements in de novo assembly and SNP detection after using Quake. Quake can be used for any size project, including more than one billion human reads, and is freely available as open source software from http://www.cbcb.umd.edu/software/quake."
},
{
"pmid": "25398208",
"title": "Lighter: fast and memory-efficient sequencing error correction without counting.",
"abstract": "Lighter is a fast, memory-efficient tool for correcting sequencing errors. Lighter avoids counting k-mers. Instead, it uses a pair of Bloom filters, one holding a sample of the input k-mers and the other holding k-mers likely to be correct. As long as the sampling fraction is adjusted in inverse proportion to the depth of sequencing, Bloom filter size can be held constant while maintaining near-constant accuracy. Lighter is parallelized, uses no secondary storage, and is both faster and more memory-efficient than competing approaches while achieving comparable accuracy."
},
{
"pmid": "28821237",
"title": "Evaluation of the impact of Illumina error correction tools on de novo genome assembly.",
"abstract": "BACKGROUND\nRecently, many standalone applications have been proposed to correct sequencing errors in Illumina data. The key idea is that downstream analysis tools such as de novo genome assemblers benefit from a reduced error rate in the input data. Surprisingly, a systematic validation of this assumption using state-of-the-art assembly methods is lacking, even for recently published methods.\n\n\nRESULTS\nFor twelve recent Illumina error correction tools (EC tools) we evaluated both their ability to correct sequencing errors and their ability to improve de novo genome assembly in terms of contig size and accuracy.\n\n\nCONCLUSIONS\nWe confirm that most EC tools reduce the number of errors in sequencing data without introducing many new errors. However, we found that many EC tools suffer from poor performance in certain sequence contexts such as regions with low coverage or regions that contain short repeated or low-complexity sequences. Reads overlapping such regions are often ill-corrected in an inconsistent manner, leading to breakpoints in the resulting assemblies that are not present in assemblies obtained from uncorrected data. Resolving this systematic flaw in future EC tools could greatly improve the applicability of such tools."
},
{
"pmid": "22388286",
"title": "Fast gapped-read alignment with Bowtie 2.",
"abstract": "As the rate of sequencing increases, greater throughput is demanded from read aligners. The full-text minute index is often used to make alignment very fast and memory-efficient, but the approach is ill-suited to finding longer, gapped alignments. Bowtie 2 combines the strengths of the full-text minute index with the flexibility and speed of hardware-accelerated dynamic programming algorithms to achieve a combination of high speed, sensitivity and accuracy."
},
{
"pmid": "3294162",
"title": "Genomic mapping by fingerprinting random clones: a mathematical analysis.",
"abstract": "Results from physical mapping projects have recently been reported for the genomes of Escherichia coli, Saccharomyces cerevisiae, and Caenorhabditis elegans, and similar projects are currently being planned for other organisms. In such projects, the physical map is assembled by first \"fingerprinting\" a large number of clones chosen at random from a recombinant library and then inferring overlaps between clones with sufficiently similar fingerprints. Although the basic approach is the same, there are many possible choices for the fingerprint used to characterize the clones and the rules for declaring overlap. In this paper, we derive simple formulas showing how the progress of a physical mapping project is affected by the nature of the fingerprinting scheme. Using these formulas, we discuss the analytic considerations involved in selecting an appropriate fingerprinting scheme for a particular project."
},
{
"pmid": "12668763",
"title": "Enhanced protein domain discovery by using language modeling techniques from speech recognition.",
"abstract": "Most modern speech recognition uses probabilistic models to interpret a sequence of sounds. Hidden Markov models, in particular, are used to recognize words. The same techniques have been adapted to find domains in protein sequences of amino acids. To increase word accuracy in speech recognition, language models are used to capture the information that certain word combinations are more likely than others, thus improving detection based on context. However, to date, these context techniques have not been applied to protein domain discovery. Here we show that the application of statistical language modeling methods can significantly enhance domain recognition in protein sequences. As an example, we discover an unannotated Tf_Otx Pfam domain on the cone rod homeobox protein, which suggests a possible mechanism for how the V242M mutation on this protein causes cone-rod dystrophy."
},
{
"pmid": "17472741",
"title": "ngLOC: an n-gram-based Bayesian method for estimating the subcellular proteomes of eukaryotes.",
"abstract": "We present a method called ngLOC, an n-gram-based Bayesian classifier that predicts the localization of a protein sequence over ten distinct subcellular organelles. A tenfold cross-validation result shows an accuracy of 89% for sequences localized to a single organelle, and 82% for those localized to multiple organelles. An enhanced version of ngLOC was developed to estimate the subcellular proteomes of eight eukaryotic organisms: yeast, nematode, fruitfly, mosquito, zebrafish, chicken, mouse, and human."
},
{
"pmid": "25583448",
"title": "De novo assembly of bacterial transcriptomes from RNA-seq data.",
"abstract": "Transcriptome assays are increasingly being performed by high-throughput RNA sequencing (RNA-seq). For organisms whose genomes have not been sequenced and annotated, transcriptomes must be assembled de novo from the RNA-seq data. Here, we present novel algorithms, specific to bacterial gene structures and transcriptomes, for analysis of bacterial RNA-seq data and de novo transcriptome assembly. The algorithms are implemented in an open source software system called Rockhopper 2. We find that Rockhopper 2 outperforms other de novo transcriptome assemblers and offers accurate and efficient analysis of bacterial RNA-seq data. Rockhopper 2 is available at http://cs.wellesley.edu/~btjaden/Rockhopper ."
}
] |
Frontiers in Plant Science | 31737019 | PMC6837080 | 10.3389/fpls.2019.01404 | Learning Semantic Graphics Using Convolutional Encoder–Decoder Network for Autonomous Weeding in Paddy | Weeds in agricultural farms are aggressive growers which compete for nutrition and other resources with the crop and reduce production. The increasing use of chemicals to control them has inadvertent consequences to the human health and the environment. In this work, a novel neural network training method combining semantic graphics for data annotation and an advanced encoder–decoder network for (a) automatic crop line detection and (b) weed (wild millet) detection in paddy fields is proposed. The detected crop lines act as a guiding line for an autonomous weeding robot for inter-row weeding, whereas the detection of weeds enables autonomous intra-row weeding. The proposed data annotation method, semantic graphics, is intuitive, and the desired targets can be annotated easily with minimal labor. Also, the proposed “extended skip network” is an improved deep convolutional encoder–decoder neural network for efficient learning of semantic graphics. Quantitative evaluations of the proposed method demonstrated an increment of 6.29% and 6.14% in mean intersection over union (mIoU), over the baseline network on the task of paddy line detection and wild millet detection, respectively. The proposed method also leads to a 3.56% increment in mIoU and a significantly higher recall compared to a popular bounding box-based object detection approach on the task of wild–millet detection. | Related WorkCrop Line DetectionPrevious works on detecting crop rows using vision-based systems primarily detect the position of the crops using different handcrafted features like living tissue indicators (Søgaard and Olsen, 2003), vegetation index (Bakker et al., 2008; Montalvo et al., 2012), morphological features (Choi et al., 2015), and extraction of the crop line using different pattern recognition and machine learning techniques like distribution of pixel values, vanishing point detection, Hough transform, and linear regression (Søgaard and Olsen, 2003; Bakker et al., 2008; Montalvo et al., 2012; Choi et al., 2015; Jiang et al., 2016).Methods based on handcrafted features work well under controlled conditions; however, they can fail to work in real farm conditions, as it is practically infeasible to hand-engineer features which capture the extensive diversity found in real farm environments. The methods based on color index work well in the absence of weeds in between the rows, as the vegetation index or living tissue index of weeds is similar to that of crops. The presence of weeds and different natural conditions like shades or light reflection affects the extraction of binary morphological features, which ultimately affects the accuracy of the extracted crop line.Recent advancements in neural networks have demonstrated that automatic feature learning using convolutional neural networks (CNNs) are more successful than hand-engineered features. Methods based on CNNs have produced state-of-the-art results in different computer vision and pattern recognition problems like object detection and classification (Ren et al., 2015; Redmon et al., 2016; Huang et al., 2017) and semantic segmentation (He et al., 2016).In this work, we use CNN to extract the crop lines. 
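For context, the handcrafted pipeline referred to above can be sketched in a few lines: compute an excess-green vegetation index, threshold it, and extract candidate rows with a Hough transform. The snippet below (OpenCV and NumPy; the Otsu thresholding, morphology kernel and vote count are illustrative assumptions) is only meant to show the kind of baseline that breaks down under weed pressure, not the method proposed in this paper.

```python
import cv2
import numpy as np

def crop_lines_baseline(bgr_image, min_votes=150):
    """Classical crop-row detection: excess-green index + Otsu threshold + Hough lines."""
    img = bgr_image.astype(np.float32) / 255.0
    b, g, r = cv2.split(img)
    exg = 2.0 * g - r - b                                   # excess-green vegetation index
    exg8 = cv2.normalize(exg, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    _, mask = cv2.threshold(exg8, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    # Hough transform on the vegetation mask; returns (rho, theta) line parameters.
    lines = cv2.HoughLines(mask, 1, np.pi / 180, min_votes)
    return [] if lines is None else [tuple(l[0]) for l in lines]

# Usage (the image path is a placeholder):
# image = cv2.imread("paddy_field.jpg")
# for rho, theta in crop_lines_baseline(image):
#     print(f"line: rho={rho:.1f}, theta={np.degrees(theta):.1f} deg")
```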
Unlike prior works which segment the input into different regions and extract the crop lines, we propose to train a CNN to directly learn the concept of a crop line using “semantic graphics” as shown in Figure 1
.Figure 1The proposed approach of training deep neural networks to learn the concept of crop line using semantic graphics.Weed DetectionRecently, DNN-based algorithms for classification of weeds and crops have attracted much attention. Two different CNNs were used to segment and classify image pixels into crop and weeds (Potena et al., 2016). A method based on K-means feature learning combined with CNN was used for weed identification in soybean seedlings (Tang et al., 2017). A fully CNN was used to detect single weed instances in image from winter wheat fields with leaf occlusion (Dyrmann et al., 2017). CNN-based semantic segmentation approaches to separate crops, weeds, and background have also been studied (Milioto et al., 2018; Ma et al., 2019). While semantic segmentation-based approaches are helpful for widely spaced crops and weeds, these approaches are difficult to adopt in fields with heavy overlap and occlusion owing to the difficulty in obtaining per-pixel ground truth annotations. Moreover, the difficulty in obtaining ground truth labels is compounded for crop and weeds, like rice and wild millet, which have similar appearances.In this work, we propose to learn “semantic graphics” using CNN for the identification of rice and wild millet.Semantic GraphicsOne of the factors enabling the increase in performance of DNNs is the availability of a huge amount of data for training. However, for supervised training of DNNs, the data has to be annotated manually with ground truth. It is expensive and time-consuming to prepare large-scale ground truth annotations (Bearman et al., 2016), and hence, there is a bottleneck in extending the application of DNN to new applications which require the network to be trained on custom datasets. Manual annotation is particularly time-consuming for semantic segmentation where per-pixel annotation is required. Per-pixel semantic labeling is also economically not viable without employing methods which reduce human labor.To reduce the dependency on large-scale detailed annotations, weakly or semi-supervised learning techniques have been explored in the literature. In the weakly supervised setting, the training images are annotated only at the image level or sparsely annotated at the pixel level, thus requiring less time and effort for annotation. Different forms of weak supervision have also been explored in the literature such as image-level labels (Pinheiro and Collobert, 2015), bounding boxes (Papandreou et al., 2015), and point annotations and free-form scribbles (Bearman et al., 2016; Lin et al., 2016). However, much of the focus in the literature has been towards detecting or segmenting “objects” with a well-defined shape, appearance, and boundary. Less attention has been paid towards understanding complex scenes that are difficult even to annotate correctly due to similar appearance and ambiguous boundaries.To simplify the process of annotating such complex scenes, we introduce the notion of semantic graphics. Semantic graphics is a graphical sketch where a target concept is expressed in the form of a figure for easy learning by neural networks. Semantic graphics can encode human knowledge directly in intuitive graphics which can be annotated with considerable ease even for complex scenes. For example, in the image of a line-transplanted paddy field shown in
Figure 2, the lines of paddy have been rendered indistinguishable due to high weed pressure. However, humans can easily figure out the actual rows of paddy in the image, including in those regions where the actual demarcation does not exist due to weeds. One of the meaningful ways to mark the rows is by sketching a line as shown at the bottom of Figure 2
.Figure 2Semantic graphics: (top) images of row-transplanted paddy field. (bottom) Manually marked semantic graphics representing the rows of paddy is superimposed on the original images. Even at places where the paddy lines are rendered indistinguishable due to the heavy presence of weeds, humans can easily figure out the actual lines and represent those using semantic graphics. (Best viewed in color).Semantic graphics is different from semantic segmentation as pixels belonging to the same semantic region or super-pixel may not be necessarily labeled with the same target category. Semantic graphics is particularly useful for tasks which are otherwise challenging for existing pixel-based semantic segmentation methods. For example, the rows of paddy and the wild millet in between the rows, as shown in
Figure 2, are semantically similar; therefore, it is difficult and time-consuming to prepare dense per-pixel annotation to be used for semantic segmentation. However, it is easier to figure out the actual crop rows and represent those using semantic graphics. In this work, we demonstrate that semantic graphics are an effective way towards training CNNs to learn higher-order concepts like the crop line and to differentiate between crops and weeds.Convolutional Encoder–Decoder NetworkA convolutional encoder–decoder network is a standard network used for tasks requiring dense pixel-wise predictions like semantic segmentation (Badrinarayanan et al., 2017), computing optical flow and disparity maps (Mayer et al., 2016), and contour detection (Yang et al., 2016). The encoder in the network computes progressively higher-level abstract features as the receptive fields in the encoder increase with the depth of the encoder. The spatial resolution of the feature maps is reduced progressively via a down-sampling operation, whereas the decoder computes feature maps of progressively increasing resolution via un-pooling (Zeiler and Fergus, 2014) or up-sampling. The network has the ability not only to model features like shape or appearance of different classes but also to model long-range spatial relationships. This attribute of modeling local and global features makes this architecture suitable for learning semantic graphics, as shown in
Figure 1
.Different variations of the encoder–decoder network have been explored in the literature for improved performance. Skip connections (Ronneberger et al., 2015) have been used to recover the fine spatial details during reconstruction which get lost due to successive down-sampling operations involved in the encoder. Addition of larger context information using image-level features (Liu et al., 2015), recurrent connections (Pinheiro and Collobert, 2014; Zheng et al., 2015), and larger convolutional kernels (Peng et al., 2017) has also significantly improved the accuracy of semantic segmentation. Other methods studied for improving semantic segmentation accuracy include hierarchical supervision (Chen et al., 2016) and iterative concatenation of feature maps (Jégou et al., 2017).In this work, we design an enhanced encoder–decoder network, named “extended skip network” (ESNet), to learn the semantic graphics. We demonstrate that the enhanced network exhibits significant performance improvement over the baseline network on the problem of crop line detection and weed detection. We also demonstrate that the proposed method has improved performance on the task of weed detection over a popular bounding box-based object detection method. | [
"28060704",
"29410674",
"30998770",
"27713752"
] | [
{
"pmid": "28060704",
"title": "SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation.",
"abstract": "We present a novel and practical deep fully convolutional neural network architecture for semantic pixel-wise segmentation termed SegNet. This core trainable segmentation engine consists of an encoder network, a corresponding decoder network followed by a pixel-wise classification layer. The architecture of the encoder network is topologically identical to the 13 convolutional layers in the VGG16 network [1] . The role of the decoder network is to map the low resolution encoder feature maps to full input resolution feature maps for pixel-wise classification. The novelty of SegNet lies is in the manner in which the decoder upsamples its lower resolution input feature map(s). Specifically, the decoder uses pooling indices computed in the max-pooling step of the corresponding encoder to perform non-linear upsampling. This eliminates the need for learning to upsample. The upsampled maps are sparse and are then convolved with trainable filters to produce dense feature maps. We compare our proposed architecture with the widely adopted FCN [2] and also with the well known DeepLab-LargeFOV [3] , DeconvNet [4] architectures. This comparison reveals the memory versus accuracy trade-off involved in achieving good segmentation performance. SegNet was primarily motivated by scene understanding applications. Hence, it is designed to be efficient both in terms of memory and computational time during inference. It is also significantly smaller in the number of trainable parameters than other competing architectures and can be trained end-to-end using stochastic gradient descent. We also performed a controlled benchmark of SegNet and other architectures on both road scenes and SUN RGB-D indoor scene segmentation tasks. These quantitative assessments show that SegNet provides good performance with competitive inference time and most efficient inference memory-wise as compared to other architectures. We also provide a Caffe implementation of SegNet and a web demo at http://mi.eng.cam.ac.uk/projects/segnet."
},
{
"pmid": "29410674",
"title": "Metarhizium brunneum (Ascomycota; Hypocreales) Treatments Targeting Olive Fly in the Soil for Sustainable Crop Production.",
"abstract": "Soil treatments with Metarhizium brunneum EAMa 01/58-Su strain conducted in both Northern and Southern Spain reduced the olive fly (Bactrocera oleae) population density emerging from the soil during spring up to 70% in treated plots compared with controls. A model to determine the influence of rainfall on the conidial wash into different soil types was developed, with most of the conidia retained at the first 5 cm, regardless of soil type, with relative percentages of conidia recovered ranging between 56 and 95%. Furthermore, the possible effect of UV-B exposure time on the pathogenicity of this strain against B. oleae adults coming from surviving preimaginals and carrying conidia from the soil at adult emergence was also evaluated. The UV-B irradiance has no significant effect on M. brunneum EAMa 01/58-Su pathogenicity with B. oleae adult mortalities of 93, 90, 79, and 77% after 0, 2, 4, and 6 of UV-B irradiance exposure, respectively. In a next step for the use of these M. brunneum EAMa 01/58-Sun soil treatments within a B. oleae IPM strategy, its possible effect of on the B. oleae cosmopolitan parasitoid Psyttalia concolor, its compatibility with the herbicide oxyfluorfen 24% commonly used in olive orchards and the possible presence of the fungus in the olive oil resulting from olives previously placed in contact with the fungus were investigated. Only the highest conidial concentration (1 × 108 conidia ml-) caused significant P. concolor adult mortality (22%) with enduing mycosis in 13% of the cadavers. There were no fungal propagules in olive oil samples resulting from olives previously contaminated by EAMa 01/58-Su conidia. Finally, the strain was demonstrated to be compatible with herbicide since the soil application of the fungus reduced the B. oleae population density up to 50% even when it was mixed with the herbicide in the same tank. The fungal inoculum reached basal levels 4 months after treatments (1.6 × 103 conidia g soil-1). These results reveal both the efficacy and environmental and food safety of this B. oleae control method, protecting olive groves and improving olive oil quality without negative effects on the natural enemy P. concolor."
},
{
"pmid": "30998770",
"title": "Fully convolutional network for rice seedling and weed image segmentation at the seedling stage in paddy fields.",
"abstract": "To reduce the cost of production and the pollution of the environment that is due to the overapplication of herbicide in paddy fields, the location information of rice seedlings and weeds must be detected in site-specific weed management (SSWM). With the development of deep learning, a semantic segmentation method with the SegNet that is based on fully convolutional network (FCN) was proposed. In this paper, RGB color images of seedling rice were captured in paddy field, and ground truth (GT) images were obtained by manually labeled the pixels in the RGB images with three separate categories, namely, rice seedlings, background, and weeds. The class weight coefficients were calculated to solve the problem of the unbalance of the number of the classification category. GT images and RGB images were used for data training and data testing. Eighty percent of the samples were randomly selected as the training dataset and 20% of samples were used as the test dataset. The proposed method was compared with a classical semantic segmentation model, namely, FCN, and U-Net models. The average accuracy rate of the SegNet method was 92.7%, whereas the average accuracy rates of the FCN and U-Net methods were 89.5% and 70.8%, respectively. The proposed SegNet method realized higher classification accuracy and could effectively classify the pixels of rice seedlings, background, and weeds in the paddy field images and acquire the positions of their regions."
},
{
"pmid": "27713752",
"title": "Using Deep Learning for Image-Based Plant Disease Detection.",
"abstract": "Crop diseases are a major threat to food security, but their rapid identification remains difficult in many parts of the world due to the lack of the necessary infrastructure. The combination of increasing global smartphone penetration and recent advances in computer vision made possible by deep learning has paved the way for smartphone-assisted disease diagnosis. Using a public dataset of 54,306 images of diseased and healthy plant leaves collected under controlled conditions, we train a deep convolutional neural network to identify 14 crop species and 26 diseases (or absence thereof). The trained model achieves an accuracy of 99.35% on a held-out test set, demonstrating the feasibility of this approach. Overall, the approach of training deep learning models on increasingly large and publicly available image datasets presents a clear path toward smartphone-assisted crop disease diagnosis on a massive global scale."
}
] |
Royal Society Open Science | 31824713 | PMC6837202 | 10.1098/rsos.191068 | Social media and bitcoin metrics: which words matter | We develop a new Data-Driven Phasic Word Identification (DDPWI) methodology to determine which words matter as the bitcoin pricing dynamic changes from one phase to another. With Google search volumes as a baseline, we find that Reddit submissions are both correlated with Google and have a comparable relationship with a variety of bitcoin metrics, using Spearman’s rho. Reddit provides complete access to the text of submissions. Rather than associating sentiment with market activity, we describe the DDPWI method for finding specific ‘price dynamic’ words associated with changes in the bitcoin pricing pattern through 2017 and 2018. We assess the significance of these changes using Wilcoxon Rank-Sum Tests with Bonferroni corrections. These price dynamic words are used to pull out associated words in the submissions thereby providing the context to their use. For example, the price dynamic word ‘ban’, which became significantly higher in frequency as prices fell, occurred in the context of both government regulation and internet companies banning cryptocurrency adverts. This approach could be used more generally to look at social media and discussion forums at a granular level identifying specific words that impact the metric under investigation rather than overall sentiment. | 1.2.Related workFor traditional asset classes, such as equity, a correlation has been shown between Google searches [23] and cumulative weekly stock transaction volume [24] and between searches and stock market moves [25]. Previous bitcoin analyses have also used Google search data [1–13], interpreting internet activity as a proxy for public interest in bitcoin. However, this does not provide any context to the interest, and so is limited in terms of delineating the type of interest. By using Reddit data in conjunction with a new methodology, we are able to determine instead which words are most associated with shifts in the bitcoin price dynamic, and as such we build on existing research by adding a new tool to the existing analytical framework.Much of the analysis of Google search volumes has been dependent on linear regression [1,2,4–10,12,13,26]. Linear regression assumes that large outliers are unlikely [27], which is inconsistent with the observed recent extreme volatility in bitcoin prices. The median change in prices over 2 years (1 January 2017 to 3 December 2018) was only 0.3247%, but the largest rise was 27.97% on 20 July 2017 and greatest fall was 20.21% on 16 January 2018 [22]. Wavelet analysis has been presented as an alternative [3,28] but this assumed that the different time series compared are normally distributed [29], an assumption not found to hold with bitcoin price series [30,31]. Very few articles [9,10] split the data series into distinct time periods reflecting the phasic pattern of behaviour in bitcoin prices over time, so there is a risk of the results being distorted for any model applied on all data [9].Knittel & Wash [32] supported using online community text to analyse why users maintain their trust in bitcoin, focussing on Reddit subreddit ‘r/bitcoin’ because of the higher number and activity of its users compared with the alternatives (see §1.4 for quantitative evidence). Knittel & Wash identified a group of self-described ‘Bitcoiners’ who refuse to sell bitcoin for currencies backed by a government (e.g. US Dollar) in spite of price fluctuations. 
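As a hedged illustration of the two statistical ingredients just described, the snippet below computes Spearman's rho between an interest series and a bitcoin metric, and applies Wilcoxon rank-sum tests with a Bonferroni correction to per-word daily frequencies from two price phases, using SciPy. All arrays and the word list are synthetic placeholders (only 'ban' is taken from the abstract); this is not the DDPWI code or data.

```python
import numpy as np
from scipy.stats import spearmanr, ranksums

rng = np.random.default_rng(42)

# Placeholder daily series: Reddit submission counts and a bitcoin metric (e.g. price).
reddit_submissions = rng.poisson(200, size=365).astype(float)
bitcoin_metric = np.cumsum(rng.normal(0, 1, size=365)) + 100

rho, p_rho = spearmanr(reddit_submissions, bitcoin_metric)
print(f"Spearman's rho = {rho:.3f} (p = {p_rho:.3g})")

# Placeholder daily word frequencies in two price-dynamic phases (rising vs falling).
# Word list is illustrative; 'ban' appears in the abstract, the rest are hypothetical.
words = ["ban", "regulation", "buy", "sell"]
phase_a = {w: rng.poisson(5, size=180) for w in words}   # e.g. rising phase
phase_b = {w: rng.poisson(7, size=185) for w in words}   # e.g. falling phase

alpha = 0.05
bonferroni_alpha = alpha / len(words)   # correct for testing many words
for w in words:
    stat, p = ranksums(phase_a[w], phase_b[w])
    flag = "significant" if p < bonferroni_alpha else "not significant"
    print(f"{w:12s} rank-sum p = {p:.4f} -> {flag} at corrected alpha {bonferroni_alpha:.4f}")
```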
Discussions on the subreddit were found to reflect bitcoin-relevant events, with particular concern over changes to price. That study, however, did not involve statistical analysis, focussed only on the limited date range of 3–10 December 2018, and did not examine the link between specific word usage and changes to price. | [
"24301322",
"25100315",
"25874694",
"26473051",
"28498843",
"21078644",
"23619126",
"29668765",
"24356666",
"27533113",
"25054439"
] | [
{
"pmid": "24301322",
"title": "BitCoin meets Google Trends and Wikipedia: quantifying the relationship between phenomena of the Internet era.",
"abstract": "Digital currencies have emerged as a new fascinating phenomenon in the financial markets. Recent events on the most popular of the digital currencies--BitCoin--have risen crucial questions about behavior of its exchange rates and they offer a field to study dynamics of the market which consists practically only of speculative traders with no fundamentalists as there is no fundamental value to the currency. In the paper, we connect two phenomena of the latest years--digital currencies, namely BitCoin, and search queries on Google Trends and Wikipedia--and study their relationship. We show that not only are the search queries and the prices connected but there also exists a pronounced asymmetry between the effect of an increased interest in the currency while being above or below its trend value."
},
{
"pmid": "25100315",
"title": "The digital traces of bubbles: feedback cycles between socio-economic signals in the Bitcoin economy.",
"abstract": "What is the role of social interactions in the creation of price bubbles? Answering this question requires obtaining collective behavioural traces generated by the activity of a large number of actors. Digital currencies offer a unique possibility to measure socio-economic signals from such digital traces. Here, we focus on Bitcoin, the most popular cryptocurrency. Bitcoin has experienced periods of rapid increase in exchange rates (price) followed by sharp decline; we hypothesize that these fluctuations are largely driven by the interplay between different social phenomena. We thus quantify four socio-economic signals about Bitcoin from large datasets: price on online exchanges, volume of word-of-mouth communication in online social media, volume of information search and user base growth. By using vector autoregression, we identify two positive feedback loops that lead to price bubbles in the absence of exogenous stimuli: one driven by word of mouth, and the other by new Bitcoin adopters. We also observe that spikes in information search, presumably linked to external events, precede drastic price declines. Understanding the interplay between the socio-economic signals we measured can lead to applications beyond cryptocurrencies to other phenomena that leave digital footprints, such as online social network usage."
},
{
"pmid": "25874694",
"title": "What are the main drivers of the Bitcoin price? Evidence from wavelet coherence analysis.",
"abstract": "The Bitcoin has emerged as a fascinating phenomenon in the Financial markets. Without any central authority issuing the currency, the Bitcoin has been associated with controversy ever since its popularity, accompanied by increased public interest, reached high levels. Here, we contribute to the discussion by examining the potential drivers of Bitcoin prices, ranging from fundamental sources to speculative and technical ones, and we further study the potential influence of the Chinese market. The evolution of relationships is examined in both time and frequency domains utilizing the continuous wavelets framework, so that we not only comment on the development of the interconnections in time but also distinguish between short-term and long-term connections. We find that the Bitcoin forms a unique asset possessing properties of both a standard financial asset and a speculative one."
},
{
"pmid": "26473051",
"title": "Social signals and algorithmic trading of Bitcoin.",
"abstract": "The availability of data on digital traces is growing to unprecedented sizes, but inferring actionable knowledge from large-scale data is far from being trivial. This is especially important for computational finance, where digital traces of human behaviour offer a great potential to drive trading strategies. We contribute to this by providing a consistent approach that integrates various datasources in the design of algorithmic traders. This allows us to derive insights into the principles behind the profitability of our trading strategies. We illustrate our approach through the analysis of Bitcoin, a cryptocurrency known for its large price fluctuations. In our analysis, we include economic signals of volume and price of exchange for USD, adoption of the Bitcoin technology and transaction volume of Bitcoin. We add social signals related to information search, word of mouth volume, emotional valence and opinion polarization as expressed in tweets related to Bitcoin for more than 3 years. Our analysis reveals that increases in opinion polarization and exchange volume precede rising Bitcoin prices, and that emotional valence precedes opinion polarization and rising exchange volumes. We apply these insights to design algorithmic trading strategies for Bitcoin, reaching very high profits in less than a year. We verify this high profitability with robust statistical methods that take into account risk and trading costs, confirming the long-standing hypothesis that trading-based social media sentiment has the potential to yield positive returns on investment."
},
{
"pmid": "28498843",
"title": "When Bitcoin encounters information in an online forum: Using text mining to analyse user opinions and predict value fluctuation.",
"abstract": "Bitcoin is an online currency that is used worldwide to make online payments. It has consequently become an investment vehicle in itself and is traded in a way similar to other open currencies. The ability to predict the price fluctuation of Bitcoin would therefore facilitate future investment and payment decisions. In order to predict the price fluctuation of Bitcoin, we analyse the comments posted in the Bitcoin online forum. Unlike most research on Bitcoin-related online forums, which is limited to simple sentiment analysis and does not pay sufficient attention to note-worthy user comments, our approach involved extracting keywords from Bitcoin-related user comments posted on the online forum with the aim of analytically predicting the price and extent of transaction fluctuation of the currency. The effectiveness of the proposed method is validated based on Bitcoin online forum data ranging over a period of 2.8 years from December 2013 to September 2016."
},
{
"pmid": "21078644",
"title": "Complex dynamics of our economic life on different scales: insights from search engine query data.",
"abstract": "Search engine query data deliver insight into the behaviour of individuals who are the smallest possible scale of our economic life. Individuals are submitting several hundred million search engine queries around the world each day. We study weekly search volume data for various search terms from 2004 to 2010 that are offered by the search engine Google for scientific use, providing information about our economic life on an aggregated collective level. We ask the question whether there is a link between search volume data and financial market fluctuations on a weekly time scale. Both collective 'swarm intelligence' of Internet users and the group of financial market participants can be regarded as a complex system of many interacting subunits that react quickly to external changes. We find clear evidence that weekly transaction volumes of S&P 500 companies are correlated with weekly search volume of corresponding company names. Furthermore, we apply a recently introduced method for quantifying complex correlations in time series with which we find a clear tendency that search volume time series and transaction volume time series show recurring patterns."
},
{
"pmid": "23619126",
"title": "Quantifying trading behavior in financial markets using Google Trends.",
"abstract": "Crises in financial markets affect humans worldwide. Detailed market data on trading decisions reflect some of the complex human behavior that has led to these crises. We suggest that massive new data sources resulting from human interaction with the Internet may offer a new perspective on the behavior of market participants in periods of large market movements. By analyzing changes in Google query volumes for search terms related to finance, we find patterns that may be interpreted as \"early warning signs\" of stock market moves. Our results illustrate the potential that combining extensive behavioral data sets offers for a better understanding of collective human behavior."
},
{
"pmid": "29668765",
"title": "Cryptocurrency price drivers: Wavelet coherence analysis revisited.",
"abstract": "Cryptocurrencies have experienced recent surges in interest and price. It has been discovered that there are time intervals where cryptocurrency prices and certain online and social media factors appear related. In addition it has been noted that cryptocurrencies are prone to experience intervals of bubble-like price growth. The hypothesis investigated here is that relationships between online factors and price are dependent on market regime. In this paper, wavelet coherence is used to study co-movement between a cryptocurrency price and its related factors, for a number of examples. This is used alongside a well-known test for financial asset bubbles to explore whether relationships change dependent on regime. The primary finding of this work is that medium-term positive correlations between online factors and price strengthen significantly during bubble-like regimes of the price series; this explains why these relationships have previously been seen to appear and disappear over time. A secondary finding is that short-term relationships between the chosen factors and price appear to be caused by particular market events (such as hacks / security breaches), and are not consistent from one time interval to another in the effect of the factor upon the price. In addition, for the first time, wavelet coherence is used to explore the relationships between different cryptocurrencies."
},
{
"pmid": "24356666",
"title": "Quantifying the relationship between financial news and the stock market.",
"abstract": "The complex behavior of financial markets emerges from decisions made by many traders. Here, we exploit a large corpus of daily print issues of the Financial Times from 2(nd) January 2007 until 31(st) December 2012 to quantify the relationship between decisions taken in financial markets and developments in financial news. We find a positive correlation between the daily number of mentions of a company in the Financial Times and the daily transaction volume of a company's stock both on the day before the news is released, and on the same day as the news is released. Our results provide quantitative support for the suggestion that movements in financial markets and movements in financial news are intrinsically interlinked."
},
{
"pmid": "27533113",
"title": "Predicting Fluctuations in Cryptocurrency Transactions Based on User Comments and Replies.",
"abstract": "This paper proposes a method to predict fluctuations in the prices of cryptocurrencies, which are increasingly used for online transactions worldwide. Little research has been conducted on predicting fluctuations in the price and number of transactions of a variety of cryptocurrencies. Moreover, the few methods proposed to predict fluctuation in currency prices are inefficient because they fail to take into account the differences in attributes between real currencies and cryptocurrencies. This paper analyzes user comments in online cryptocurrency communities to predict fluctuations in the prices of cryptocurrencies and the number of transactions. By focusing on three cryptocurrencies, each with a large market size and user base, this paper attempts to predict such fluctuations by using a simple and efficient method."
},
{
"pmid": "25054439",
"title": "Realized volatility and absolute return volatility: a comparison indicating market risk.",
"abstract": "Measuring volatility in financial markets is a primary challenge in the theory and practice of risk management and is essential when developing investment strategies. Although the vast literature on the topic describes many different models, two nonparametric measurements have emerged and received wide use over the past decade: realized volatility and absolute return volatility. The former is strongly favored in the financial sector and the latter by econophysicists. We examine the memory and clustering features of these two methods and find that both enable strong predictions. We compare the two in detail and find that although realized volatility has a better short-term effect that allows predictions of near-future market behavior, absolute return volatility is easier to calculate and, as a risk indicator, has approximately the same sensitivity as realized volatility. Our detailed empirical analysis yields valuable guidelines for both researchers and market participants because it provides a significantly clearer comparison of the strengths and weaknesses of the two methods."
}
] |
Scientific Reports | 31704945 | PMC6841925 | 10.1038/s41598-019-52580-0 | FRD-CNN: Object detection based on small-scale convolutional neural networks and feature reuse | Most of the recent successful object detection methods have been based on convolutional neural networks (CNNs). From previous studies, we learned that many feature reuse methods improve the network performance, but they increase the number of parameters. DenseNet uses thin layers that have fewer channels to alleviate the increase in parameters. This motivated us to find other methods for solving the increase in model size problems introduced by feature reuse methods. In this work, we employ different feature reuse methods on fire units and mobile units. We solved the problem and constructed two novel neural networks, fire-FRD-CNN and mobile-FRD-CNN. We conducted experiments with the proposed neural networks on KITTI and PASCAL VOC datasets. | Related WorkCai et al. proposed a multiscale CNN (MS-CNN)5. They proposed that the scale of an object in an image varies. Previous methods always implemented object detection only at the end of a CNN. This resulted in missed detection of objects if they were too small in size. Because the pixels on feature maps of lower layers have a small receptive field, MS-CNN is more suitable for small object detection; however, the pixels on feature maps of higher layers have a large receptive field, which is more suitable for large object detection. The authors designed a CNN that implemented the detection task at different layers of the network. The final detection result was the combination of the detection results of the different layers. This method is more resistant to frequent scale changes in object detection tasks. In the present paper, we show that through complete concatenation, implementing the detection task only at the end of the CNN has an effect similar to that seen in the MS-CNN.Yang et al. proposed scale-dependent pooling and layerwise cascaded rejection classifiers6. By selecting pooling regions for candidate object proposals with proper scale, scale-dependent pooling can improve detection accuracy. Cascaded rejection classifiers exclude “easy” negative object proposals in a cascaded manner, thus improving detection accuracy. While this method obtained relatively high detection precision, it increased the number of calculations.Ren et al. proposed recurrent rolling convolutions (RRCs)7. The authors proposed that feature maps in lower layers had higher resolution and more positional information. Furthermore, feature maps in higher layers have more high-dimensional semantic information. Previous works have often implemented detection tasks using feature information from the last layer. In this work, feature information in each layer was fused in a recurrent rolling manner, which greatly improved detection performance. While this model achieved top detection accuracy on the KITTI dataset, it needed to run for several epochs to fuse the feature information of each layer with that of other layers.Feature pyramid networks (FPNs)8 fuse high-level semantic information to low-level positional information. FPN and its derived neural networks have the advantages of detecting small objects and objects with large-scale variations. Kong et al. proposed deep feature pyramid reconfiguration for object detection9, which consists of global attention and local reconfigurations methods. 
Both global attention and local reconfigurations are lightweight, so the proposed model achieved consistent and significant improvements without losing real-time processing speed. Sun et al. proposed feature pyramid reconfiguration with consistent loss for object detection10. They reshaped the standard cross-entropy loss and designed a novel consistent loss (CL). It achieved more accurate object localization. Pang et al. proposed the efficient featurized image pyramid network for single-shot detector11, in which a lightweight featurized image pyramid network was introduced to construct a multiscale feature representation. In addition, StairNet12 and two variants of a context-aware single-shot detector13 were proposed to improve the detection performance on small objects. Han et al. proposed network pruning14. In this work, a CNN was trained, and parameters in the model below a predefined threshold were set to zero to construct a sparse model. Then, the sparse model was retrained for a few iterations. Finally, a smaller model was produced. Hinton et al. proposed knowledge distillation15. The authors trained a small student CNN, beginning by training a large teacher CNN. The small network was then trained to mimic its teacher network: the loss function of the student network measured the difference between the outputs of the student network and the teacher network, providing a "soft" target in place of the original "hard" one. This yielded better results than directly training the student CNN on the original "hard" target. Howard et al. proposed a streamlined network named MobileNet16. It used depthwise separable convolutions, each a combination of a depthwise convolution and a pointwise convolution. These two steps are in essence a filtering stage (the depthwise convolution) and a combination stage (the pointwise convolution). Based on this strategy, a small-scale network was constructed. Later, ShuffleNet17 was proposed. In ShuffleNet, a channel shuffle operation is imposed after each pointwise convolution to force feature map information exchange between different channels and thus improve detection performance. Iandola et al. proposed a small network named SqueezeNet18. SqueezeNet was designed to be especially small based on three strategies. First, most 3 × 3 filters were replaced with 1 × 1 filters. Second, the number of input channels to the 3 × 3 filters was decreased. Last, downsampling occurred late in the network so that convolution layers had large activation maps. Based on these three strategies, a "fire" module was constructed. The fire module from which SqueezeNet is built comprises a squeeze layer and an expand layer. By using fire units as its basic module, SqueezeNet greatly reduces its number of parameters. Based on SqueezeNet, a network used for object detection named SqueezeDet19 was proposed. It used SqueezeNet as its backbone network for feature extraction, together with a fully convolutional detection network designed for final object detection. This model can be trained end-to-end, is small in size, and its detection speed is especially high. Densely connected convolutional networks (DenseNet) were proposed in2. In this work, each layer is connected to all other layers in a feed-forward fashion, which yields a densely connected network. DenseNet alleviates the vanishing gradient problem, strengthens feature propagation, and encourages feature reuse. | [] | [] |
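To make the two building blocks described in the related-work passage above more concrete, the following is a minimal sketch of a MobileNet-style depthwise separable convolution and a SqueezeNet-style fire module. It assumes PyTorch; the class names, channel sizes, and the BatchNorm/ReLU placement are illustrative choices, not taken from the FRD-CNN paper or the original implementations.

```python
# Illustrative sketch only (not the authors' code).
import torch
import torch.nn as nn


class DepthwiseSeparableConv(nn.Module):
    """MobileNet-style block: a depthwise 3x3 convolution (filtering stage)
    followed by a pointwise 1x1 convolution (combination stage)."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        # groups=in_ch makes the 3x3 convolution depthwise (one filter per channel).
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3, stride=stride,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.bn1(self.depthwise(x)))
        return self.relu(self.bn2(self.pointwise(x)))


class FireModule(nn.Module):
    """SqueezeNet-style fire module: a 1x1 "squeeze" layer reduces the number of
    channels seen by the 3x3 filters, then an "expand" layer mixes 1x1 and 3x3
    filters and concatenates their outputs along the channel dimension."""

    def __init__(self, in_ch: int, squeeze_ch: int, expand1x1_ch: int, expand3x3_ch: int):
        super().__init__()
        self.squeeze = nn.Conv2d(in_ch, squeeze_ch, kernel_size=1)
        self.expand1x1 = nn.Conv2d(squeeze_ch, expand1x1_ch, kernel_size=1)
        self.expand3x3 = nn.Conv2d(squeeze_ch, expand3x3_ch, kernel_size=3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        x = self.relu(self.squeeze(x))
        return torch.cat([self.relu(self.expand1x1(x)),
                          self.relu(self.expand3x3(x))], dim=1)


if __name__ == "__main__":
    x = torch.randn(1, 64, 56, 56)                     # dummy feature map
    print(DepthwiseSeparableConv(64, 128)(x).shape)    # torch.Size([1, 128, 56, 56])
    print(FireModule(64, 16, 64, 64)(x).shape)         # torch.Size([1, 128, 56, 56])
```

As a rough sense of why these blocks keep models small: a standard 3 × 3 convolution mapping 64 channels to 64 channels uses 64 × 64 × 9 ≈ 36.9k weights, whereas the depthwise/pointwise pair above uses 64 × 9 + 64 × 64 ≈ 4.7k, roughly an 8× reduction. The fire module saves parameters in a similar spirit by letting the cheap 1 × 1 squeeze layer shrink the input seen by the 3 × 3 expand filters.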
Biomolecules | 31615116 | PMC6843838 | 10.3390/biom9100607 | Unsupervised and Supervised Learning over the Energy Landscape for Protein Decoy Selection | The energy landscape that organizes microstates of a molecular system and governs the underlying molecular dynamics exposes the relationship between molecular form/structure, changes to form, and biological activity or function in the cell. However, several challenges stand in the way of leveraging energy landscapes for relating structure and structural dynamics to function. Energy landscapes are high-dimensional, multi-modal, and often overly-rugged. Deep wells or basins in them do not always correspond to stable structural states but are instead the result of inherent inaccuracies in semi-empirical molecular energy functions. Due to these challenges, energetics is typically ignored in computational approaches addressing long-standing central questions in computational biology, such as protein decoy selection. In the latter, the goal is to determine over a possibly large number of computationally-generated three-dimensional structures of a protein those structures that are biologically-active/native. In recent work, we have recast our attention on the protein energy landscape and its role in helping us to advance decoy selection. Here, we summarize some of our successes so far in this direction via unsupervised learning. More importantly, we further advance the argument that the energy landscape holds valuable information to aid and advance the state of protein decoy selection via novel machine learning methodologies that leverage supervised learning. Our focus in this article is on decoy selection for the purpose of a rigorous, quantitative evaluation of how leveraging protein energy landscapes advances an important problem in protein modeling. However, the ideas and concepts presented here are generally useful to make discoveries in studies aiming to relate molecular structure and structural dynamics to function. | 1.1. Related WorkIn the early days, when decoy selection was starting to be recognized as a practical necessity in molecular structural biology, proposed methods aggressively used energies of decoys to determine their “nativeness”. This early enthusiasm, however, soon diminished upon the realization that energy was a poor indicator of nativeness [19]. Many studies reported that lower energy did not relate to closer proximity of a decoy to the native structure [20,21,22]. Consequently, other methodologies became more prominent. Clustering-/consensus-based methods, also known as multi-model methods, dominated the decoy selection category (also known as model accuracy/quality assessment) in CASP [23,24], until recently, when methods based on supervised learning made their debut. Currently, there is great diversity among decoy selection methods. Based on the approach they follow, these methods can be roughly grouped into single-model, multi-model, and quasi-single model methods.Single-model methods work on a per decoy basis [25] and employ energy functions designed specifically to aid decoy selection. Some of these methods use physics-based functions based on physical properties of atomic interactions [26,27,28]. Others use knowledge-based/statistical scoring functions that rely on statistical analysis of known native structures [29,30,31]. The latter methods have been more successful [32,33]. Clustering-based methods, on the other hand, do not rely on energy or scoring functions. 
They group together similar decoys and offer the k largest clusters as the prediction. Some recent work has leveraged concepts, such as communities, from network science to cluster decoys [16]. These methods construct clusters as communities, much as is done for social networks. Until very recently, clustering-based methods decidedly outperformed single-model methods [7]. However, single-model methods have progressed considerably, to the point that they can now compete with clustering-based methods [8]. Since the most successful single-model methods rely on specially designed scoring functions that users often have to re-implement, clustering-based methods remain more popular. Clustering-based methods pose their own concerns, some of which are addressed in [34,35,36]. Most notably, they suffer from the curse of dimensionality [37] and carry significant computational costs with decoy data of increasing size. Since they are based on consensus, they have a very hard time identifying good decoys in sparse, low-quality decoy datasets, where near-native decoys are significantly under-sampled by decoy generation algorithms. In the last five years, quasi-single model methods and supervised learning methods have taken hold in the community. These methods currently outperform clustering-based methods. Quasi-single model methods combine concepts of single- and multi-model methods [38,39]. They work by comparing decoys to some selected, high-quality reference structures [40]. Methods based on supervised learning are currently quite diverse, leveraging SVMs [41,42], Random Forest [43], NNs [44,45], and ensemble learning [46]. Feature sets are also diverse, derived from terms of statistical scoring functions [47,48] and/or expert-constructed structural features [49,50]. These methods show great promise. Inspired by outstanding performance in image recognition, decoy selection research has adopted deep learning strategies. For instance, Cao et al. [45] propose DeepQA, a single-model decoy selection method that utilizes energy, structural, and physico-chemical characteristics of a decoy for quality prediction. Improved decoy selection has also been observed with models based on convolutional neural networks (CNNs). For instance, Hou et al. [51] use a deep one-dimensional CNN (1DCNN) to build a single-model decoy selection method. The authors make use of two 1DCNNs to predict the local and global quality of a decoy. In [52], the authors propose Ornate, a single-model method that applies a deep three-dimensional CNN (3DCNN) for model quality estimation. A 3DCNN has also been used successfully in [53]. Hou et al. observe substantial improvement in protein model selection by using contact distances predicted via a deep CNN [54]. These methods are very promising, but they are still challenged by the scarcity of labeled data, imbalanced data distribution, and more. | [
"4124164",
"18556537",
"19841628",
"1749933",
"24608340",
"27124275",
"14634627",
"27171127",
"29351266",
"11714917",
"21625446",
"26344049",
"26733453",
"10329155",
"12631702",
"8627632",
"15954080",
"21310747",
"24023923",
"25737479",
"27530967",
"28419290",
"25222008",
"28113636",
"22004759",
"22963006",
"30874723",
"31487288",
"28062450",
"12381853",
"21060880",
"18260109",
"28430426",
"15207004",
"11108700",
"23184517",
"27041353",
"30590384"
] | [
{
"pmid": "19841628",
"title": "The role of dynamic conformational ensembles in biomolecular recognition.",
"abstract": "Molecular recognition is central to all biological processes. For the past 50 years, Koshland's 'induced fit' hypothesis has been the textbook explanation for molecular recognition events. However, recent experimental evidence supports an alternative mechanism. 'Conformational selection' postulates that all protein conformations pre-exist, and the ligand selects the most favored conformation. Following binding the ensemble undergoes a population shift, redistributing the conformational states. Both conformational selection and induced fit appear to play roles. Following binding by a primary conformational selection event, optimization of side chain and backbone interactions is likely to proceed by an induced fit mechanism. Conformational selection has been observed for protein-ligand, protein-protein, protein-DNA, protein-RNA and RNA-ligand interactions. These data support a new molecular recognition paradigm for processes as diverse as signaling, catalysis, gene regulation and protein aggregation in disease, which has the potential to significantly impact our views and strategies in drug design, biomolecular engineering and molecular evolution."
},
{
"pmid": "1749933",
"title": "The energy landscapes and motions of proteins.",
"abstract": "Recent experiments, advances in theory, and analogies to other complex systems such as glasses and spin glasses yield insight into protein dynamics. The basis of the understanding is the observation that the energy landscape is complex: Proteins can assume a large number of nearly isoenergetic conformations (conformational substates). The concepts that emerge from studies of the conformational substates and the motions between them permit a quantitative discussion of one simple reaction, the binding of small ligands such as carbon monoxide to myoglobin."
},
{
"pmid": "27124275",
"title": "Principles and Overview of Sampling Methods for Modeling Macromolecular Structure and Dynamics.",
"abstract": "Investigation of macromolecular structure and dynamics is fundamental to understanding how macromolecules carry out their functions in the cell. Significant advances have been made toward this end in silico, with a growing number of computational methods proposed yearly to study and simulate various aspects of macromolecular structure and dynamics. This review aims to provide an overview of recent advances, focusing primarily on methods proposed for exploring the structure space of macromolecules in isolation and in assemblies for the purpose of characterizing equilibrium structure and dynamics. In addition to surveying recent applications that showcase current capabilities of computational methods, this review highlights state-of-the-art algorithmic techniques proposed to overcome challenges posed in silico by the disparate spatial and time scales accessed by dynamic macromolecules. This review is not meant to be exhaustive, as such an endeavor is impossible, but rather aims to balance breadth and depth of strategies for modeling macromolecular structure and dynamics for a broad audience of novices and experts."
},
{
"pmid": "27171127",
"title": "Critical assessment of methods of protein structure prediction: Progress and new directions in round XI.",
"abstract": "Modeling of protein structure from amino acid sequence now plays a major role in structural biology. Here we report new developments and progress from the CASP11 community experiment, assessing the state of the art in structure modeling. Notable points include the following: (1) New methods for predicting three dimensional contacts resulted in a few spectacular template free models in this CASP, whereas models based on sequence homology to proteins with experimental structure continue to be the most accurate. (2) Refinement of initial protein models, primarily using molecular dynamics related approaches, has now advanced to the point where the best methods can consistently (though slightly) improve nearly all models. (3) The use of relatively sparse NMR constraints dramatically improves the accuracy of models, and another type of sparse data, chemical crosslinking, introduced in this CASP, also shows promise for producing better models. (4) A new emphasis on modeling protein complexes, in collaboration with CAPRI, has produced interesting results, but also shows the need for more focus on this area. (5) Methods for estimating the accuracy of models have advanced to the point where they are of considerable practical use. (6) A first assessment demonstrates that models can sometimes successfully address biological questions that motivate experimental structure determination. (7) There is continuing progress in accuracy of modeling regions of structure not directly available by comparative modeling, while there is marginal or no progress in some other areas. Proteins 2016; 84(Suppl 1):4-14. © 2016 Wiley Periodicals, Inc."
},
{
"pmid": "29351266",
"title": "From Extraction of Local Structures of Protein Energy Landscapes to Improved Decoy Selection in Template-Free Protein Structure Prediction.",
"abstract": "Due to the essential role that the three-dimensional conformation of a protein plays in regulating interactions with molecular partners, wet and dry laboratories seek biologically-active conformations of a protein to decode its function. Computational approaches are gaining prominence due to the labor and cost demands of wet laboratory investigations. Template-free methods can now compute thousands of conformations known as decoys, but selecting native conformations from the generated decoys remains challenging. Repeatedly, research has shown that the protein energy functions whose minima are sought in the generation of decoys are unreliable indicators of nativeness. The prevalent approach ignores energy altogether and clusters decoys by conformational similarity. Complementary recent efforts design protein-specific scoring functions or train machine learning models on labeled decoys. In this paper, we show that an informative consideration of energy can be carried out under the energy landscape view. Specifically, we leverage local structures known as basins in the energy landscape probed by a template-free method. We propose and compare various strategies of basin-based decoy selection that we demonstrate are superior to clustering-based strategies. The presented results point to further directions of research for improving decoy selection, including the ability to properly consider the multiplicity of native conformations of proteins."
},
{
"pmid": "11714917",
"title": "Free energies of protein decoys provide insight into determinants of protein stability.",
"abstract": "We have calculated the stability of decoy structures of several proteins (from the CASP3 models and the Park and Levitt decoy set) relative to the native structures. The calculations were performed with the force field-consistent ES/IS method, in which an implicit solvent (IS) model is used to calculate the average solvation free energy for snapshots from explicit simulations (ESs). The conformational free energy is obtained by adding the internal energy of the solute from the ESs and an entropic term estimated from the covariance positional fluctuation matrix. The set of atomic Born radii and the cavity-surface free energy coefficient used in the implicit model has been optimized to be consistent with the all-atom force field used in the ESs (cedar/gromos with simple point charge (SPC) water model). The decoys are found to have a consistently higher free energy than that of the native structure; the gap between the native structure and the best decoy varies between 10 and 15 kcal/mole, on the order of the free energy difference that typically separates the native state of a protein from the unfolded state. The correlation between the free energy and the extent to which the decoy structures differ from the native (as root mean square deviation) is very weak; hence, the free energy is not an accurate measure for ranking the structurally most native-like structures from among a set of models. Analysis of the energy components shows that stability is attained as a result of three major driving forces: (1) minimum size of the protein-water surface interface; (2) minimum total electrostatic energy, which includes solvent polarization; and (3) minimum protein packing energy. The detailed fit required to optimize the last term may underlie difficulties encountered in recovering the native fold from an approximate decoy or model structure."
},
{
"pmid": "21625446",
"title": "Four small puzzles that Rosetta doesn't solve.",
"abstract": "A complete macromolecule modeling package must be able to solve the simplest structure prediction problems. Despite recent successes in high resolution structure modeling and design, the Rosetta software suite fares poorly on small protein and RNA puzzles, some as small as four residues. To illustrate these problems, this manuscript presents Rosetta results for four well-defined test cases: the 20-residue mini-protein Trp cage, an even smaller disulfide-stabilized conotoxin, the reactive loop of a serine protease inhibitor, and a UUCG RNA tetraloop. In contrast to previous Rosetta studies, several lines of evidence indicate that conformational sampling is not the major bottleneck in modeling these small systems. Instead, approximations and omissions in the Rosetta all-atom energy function currently preclude discriminating experimentally observed conformations from de novo models at atomic resolution. These molecular \"puzzles\" should serve as useful model systems for developers wishing to make foundational improvements to this powerful modeling suite."
},
{
"pmid": "26344049",
"title": "Methods of model accuracy estimation can help selecting the best models from decoy sets: Assessment of model accuracy estimations in CASP11.",
"abstract": "The article presents assessment of the model accuracy estimation methods participating in CASP11. The results of the assessment are expected to be useful to both-developers of the methods and users who way too often are presented with structural models without annotations of accuracy. The main emphasis is placed on the ability of techniques to identify the best models from among several available. Bivariate descriptive statistics and ROC analysis are used to additionally assess the overall correctness of the predicted model accuracy scores, the correlation between the predicted and observed accuracy of models, the effectiveness in distinguishing between good and bad models, the ability to discriminate between reliable and unreliable regions in models, and the accuracy of the coordinate error self-estimates. A rigid-body measure (GDT_TS) and three local-structure-based scores (LDDT, CADaa, and SphereGrinder) are used as reference measures for evaluating methods' performance. Consensus methods, taking advantage of the availability of several models for the same target protein, perform well on the majority of tasks. Methods that predict accuracy on the basis of a single model perform comparably to consensus methods in picking the best models and in the estimation of how accurate is the local structure. More groups than in previous experiments submitted reasonable error estimates of their own models, most likely in response to a recommendation from CASP and the increasing demand from users. Proteins 2016; 84(Suppl 1):349-369. © 2015 Wiley Periodicals, Inc."
},
{
"pmid": "26733453",
"title": "ProQ2: estimation of model accuracy implemented in Rosetta.",
"abstract": "MOTIVATION\nModel quality assessment programs are used to predict the quality of modeled protein structures. They can be divided into two groups depending on the information they are using: ensemble methods using consensus of many alternative models and methods only using a single model to do its prediction. The consensus methods excel in achieving high correlations between prediction and true quality measures. However, they frequently fail to pick out the best possible model, nor can they be used to generate and score new structures. Single-model methods on the other hand do not have these inherent shortcomings and can be used both to sample new structures and to improve existing consensus methods.\n\n\nRESULTS\nHere, we present an implementation of the ProQ2 program to estimate both local and global model accuracy as part of the Rosetta modeling suite. The current implementation does not only make it possible to run large batch runs locally, but it also opens up a whole new arena for conformational sampling using machine learned scoring functions and to incorporate model accuracy estimation in to various existing modeling schemes. ProQ2 participated in CASP11 and results from CASP11 are used to benchmark the current implementation. Based on results from CASP11 and CAMEO-QE, a continuous benchmark of quality estimation methods, it is clear that ProQ2 is the single-model method that performs best in both local and global model accuracy.\n\n\nAVAILABILITY AND IMPLEMENTATION\nhttps://github.com/bjornwallner/ProQ_scripts\n\n\nCONTACT\[email protected]\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "10329155",
"title": "Discrimination of the native from misfolded protein models with an energy function including implicit solvation.",
"abstract": "An essential requirement for theoretical protein structure prediction is an energy function that can discriminate the native from non-native protein conformations. To date most of the energy functions used for this purpose have been extracted from a statistical analysis of the protein structure database, without explicit reference to the physical interactions responsible for protein stability. The use of the statistical functions has been supported by the widespread belief that they are superior for such discrimination to physics-based energy functions. An effective energy function which combined the CHARMM vacuum potential with a Gaussian model for the solvation free energy is tested for its ability to discriminate the native structure of a protein from misfolded conformations; the results are compared with those obtained with the vacuum CHARMM potential. The test is performed on several sets of misfolded structures prepared by others, including sets of about 650 good decoys for six proteins, as well as on misfolded structures of chymotrypsin inhibitor 2. The vacuum CHARMM potential is successful in most cases when energy minimized conformations are considered, but fails when applied to structures relaxed by molecular dynamics. With the effective energy function the native state is always more stable than grossly misfolded conformations both in energy minimized and molecular dynamics-relaxed structures. The present results suggest that molecular mechanics (physics-based) energy functions, complemented by a simple model for the solvation free energy, should be tested for use in the inverse folding problem, and supports their use in studies of the effective energy surface of proteins in solution. Moreover, the study suggests that the belief in the superiority of statistical functions for these purposes may be ill founded."
},
{
"pmid": "12631702",
"title": "Discrimination of native protein structures using atom-atom contact scoring.",
"abstract": "We introduce a method for discriminating correctly folded proteins from well designed decoy structures using atom-atom and atom-solvent contact surfaces. The measure used to quantify contact surfaces integrates the solvent accessible surface and interatomic contacts into one quantity, allowing solvent to be treated as an atom contact. A scoring function was derived from statistical contact preferences within known protein structures and validated by using established protein decoy sets, including the \"Rosetta\" decoys and data from the CASP4 structure predictions. The scoring function effectively distinguished native structures from all corresponding decoys in >90% of the cases, using isolated protein subunits as target structures. If contacts between subunits within quaternary structures are included, the accuracy increases to 97%. Interactions beyond atom-atom contact range were not required to distinguish native structures from the decoys using this method. The contact scoring performed as well or better than existing statistical and physicochemical potentials and may be applied as an independent means of evaluating putative structural models."
},
{
"pmid": "8627632",
"title": "Energy functions that discriminate X-ray and near native folds from well-constructed decoys.",
"abstract": "This study generates ensembles of decoy or test structures for eight small proteins with a variety of different folds. Between 35,000 and 200,000 decoys were generated for each protein using our four-state off-lattice model together with a novel relaxation method. These give compact self-avoiding conformations each constrained to have native secondary structure. Ensembles of these decoy conformations were used to test the ability of several types of empirical contact, surface area and distance-dependent energy functions to distinguish between correct and incorrect conformations. These tests have shown that none of the functions is able to distinguish consistently either the X-ray conformation or the near-native conformations from others which are incorrect. Certain combinations of two of these energy functions were able, however, consistently to identify X-ray structures from amongst the decoy conformations. These same combinations are better also at identifying near-native conformations, consistently finding them with a hundred-fold higher frequency than chance. The fact that these combination energy functions perform better than generally accepted energy functions suggests their future use in folding simulations and perhaps threading predictions."
},
{
"pmid": "15954080",
"title": "SCUD: fast structure clustering of decoys using reference state to remove overall rotation.",
"abstract": "We developed a method for fast decoy clustering by using reference root-mean-squared distance (rRMSD) rather than commonly used pairwise RMSD (pRMSD) values. For 41 proteins with 2000 decoys each, the computing efficiency increases nine times without a significant change in the accuracy of near-native selections. Tests on additional protein decoys based on different reference conformations confirmed this result. Further analysis indicates that the pRMSD and rRMSD values are highly correlated (with an average correlation coefficient of 0.82) and the clusters obtained from pRMSD and rRMSD values are highly similar (the representative structures of the top five largest clusters from the two methods are 74% identical). SCUD (Structure ClUstering of Decoys) with an automatic cutoff value is available at http://theory.med.buffalo.edu."
},
{
"pmid": "21310747",
"title": "Entropy-accelerated exact clustering of protein decoys.",
"abstract": "MOTIVATION\nClustering is commonly used to identify the best decoy among many generated in protein structure prediction when using energy alone is insufficient. Calculation of the pairwise distance matrix for a large decoy set is computationally expensive. Typically, only a reduced set of decoys using energy filtering is subjected to clustering analysis. A fast clustering method for a large decoy set would be beneficial to protein structure prediction and this still poses a challenge.\n\n\nRESULTS\nWe propose a method using propagation of geometric constraints to accelerate exact clustering, without compromising the distance measure. Our method can be used with any metric distance. Metrics that are expensive to compute and have known cheap lower and upper bounds will benefit most from the method. We compared our method's accuracy against published results from the SPICKER clustering software on 40 large decoy sets from the I-TASSER protein folding engine. We also performed some additional speed comparisons on six targets from the 'semfold' decoy set. In our tests, our method chose a better decoy than the energy criterion in 25 out of 40 cases versus 20 for SPICKER. Our method also was shown to be consistently faster than another fast software performing exact clustering named Calibur. In some cases, our approach can even outperform the speed of an approximate method.\n\n\nAVAILABILITY\nOur C++ software is released under the GNU General Public License. It can be downloaded from http://www.riken.jp/zhangiru/software/durandal_released.tgz."
},
{
"pmid": "24023923",
"title": "Protein structural model selection by combining consensus and single scoring methods.",
"abstract": "Quality assessment (QA) for predicted protein structural models is an important and challenging research problem in protein structure prediction. Consensus Global Distance Test (CGDT) methods assess each decoy (predicted structural model) based on its structural similarity to all others in a decoy set and has been proved to work well when good decoys are in a majority cluster. Scoring functions evaluate each single decoy based on its structural properties. Both methods have their merits and limitations. In this paper, we present a novel method called PWCom, which consists of two neural networks sequentially to combine CGDT and single model scoring methods such as RW, DDFire and OPUS-Ca. Specifically, for every pair of decoys, the difference of the corresponding feature vectors is input to the first neural network which enables one to predict whether the decoy-pair are significantly different in terms of their GDT scores to the native. If yes, the second neural network is used to decide which one of the two is closer to the native structure. The quality score for each decoy in the pool is based on the number of winning times during the pairwise comparisons. Test results on three benchmark datasets from different model generation methods showed that PWCom significantly improves over consensus GDT and single scoring methods. The QA server (MUFOLD-Server) applying this method in CASP 10 QA category was ranked the second place in terms of Pearson and Spearman correlation performance."
},
{
"pmid": "25737479",
"title": "MQAPsingle: A quasi single-model approach for estimation of the quality of individual protein structure models.",
"abstract": "We present a Model Quality Assessment Program (MQAP), called MQAPsingle, for ranking and assessing the absolute global quality of single protein models. MQAPsingle is quasi single-model MQAP, a method that combines advantages of both \"pure\" single-model MQAPs and clustering MQAPs. This approach results in higher accuracy compared to the state-of-the-art single-model MQAPs. Notably, the prediction for a given model is the same regardless if this model is submitted to our server alone or together with other models. Proteins 2016; 84:1021-1028. © 2015 Wiley Periodicals, Inc."
},
{
"pmid": "27530967",
"title": "Sorting protein decoys by machine-learning-to-rank.",
"abstract": "Much progress has been made in Protein structure prediction during the last few decades. As the predicted models can span a broad range of accuracy spectrum, the accuracy of quality estimation becomes one of the key elements of successful protein structure prediction. Over the past years, a number of methods have been developed to address this issue, and these methods could be roughly divided into three categories: the single-model methods, clustering-based methods and quasi single-model methods. In this study, we develop a single-model method MQAPRank based on the learning-to-rank algorithm firstly, and then implement a quasi single-model method Quasi-MQAPRank. The proposed methods are benchmarked on the 3DRobot and CASP11 dataset. The five-fold cross-validation on the 3DRobot dataset shows the proposed single model method outperforms other methods whose outputs are taken as features of the proposed method, and the quasi single-model method can further enhance the performance. On the CASP11 dataset, the proposed methods also perform well compared with other leading methods in corresponding categories. In particular, the Quasi-MQAPRank method achieves a considerable performance on the CASP11 Best150 dataset."
},
{
"pmid": "28419290",
"title": "SVMQA: support-vector-machine-based protein single-model quality assessment.",
"abstract": "MOTIVATION\nThe accurate ranking of predicted structural models and selecting the best model from a given candidate pool remain as open problems in the field of structural bioinformatics. The quality assessment (QA) methods used to address these problems can be grouped into two categories: consensus methods and single-model methods. Consensus methods in general perform better and attain higher correlation between predicted and true quality measures. However, these methods frequently fail to generate proper quality scores for native-like structures which are distinct from the rest of the pool. Conversely, single-model methods do not suffer from this drawback and are better suited for real-life applications where many models from various sources may not be readily available.\n\n\nRESULTS\nIn this study, we developed a support-vector-machine-based single-model global quality assessment (SVMQA) method. For a given protein model, the SVMQA method predicts TM-score and GDT_TS score based on a feature vector containing statistical potential energy terms and consistency-based terms between the actual structural features (extracted from the three-dimensional coordinates) and predicted values (from primary sequence). We trained SVMQA using CASP8, CASP9 and CASP10 targets and determined the machine parameters by 10-fold cross-validation. We evaluated the performance of our SVMQA method on various benchmarking datasets. Results show that SVMQA outperformed the existing best single-model QA methods both in ranking provided protein models and in selecting the best model from the pool. According to the CASP12 assessment, SVMQA was the best method in selecting good-quality models from decoys in terms of GDTloss.\n\n\nAVAILABILITY AND IMPLEMENTATION\nSVMQA method can be freely downloaded from http://lee.kias.re.kr/SVMQA/SVMQA_eval.tar.gz.\n\n\nCONTACT\[email protected].\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "25222008",
"title": "Random forest-based protein model quality assessment (RFMQA) using structural features and potential energy terms.",
"abstract": "Recently, predicting proteins three-dimensional (3D) structure from its sequence information has made a significant progress due to the advances in computational techniques and the growth of experimental structures. However, selecting good models from a structural model pool is an important and challenging task in protein structure prediction. In this study, we present the first application of random forest based model quality assessment (RFMQA) to rank protein models using its structural features and knowledge-based potential energy terms. The method predicts a relative score of a model by using its secondary structure, solvent accessibility and knowledge-based potential energy terms. We trained and tested the RFMQA method on CASP8 and CASP9 targets using 5-fold cross-validation. The correlation coefficient between the TM-score of the model selected by RFMQA (TMRF) and the best server model (TMbest) is 0.945. We benchmarked our method on recent CASP10 targets by using CASP8 and 9 server models as a training set. The correlation coefficient and average difference between TMRF and TMbest over 95 CASP10 targets are 0.984 and 0.0385, respectively. The test results show that our method works better in selecting top models when compared with other top performing methods. RFMQA is available for download from http://lee.kias.re.kr/RFMQA/RFMQA_eval.tar.gz."
},
{
"pmid": "28113636",
"title": "Purely Structural Protein Scoring Functions Using Support Vector Machine and Ensemble Learning.",
"abstract": "The function of a protein is determined by its structure, which creates a need for efficient methods of protein structure determination to advance scientific and medical research. Because current experimental structure determination methods carry a high price tag, computational predictions are highly desirable. Given a protein sequence, computational methods produce numerous 3D structures known as decoys. Selection of the best quality decoys is both challenging and essential as the end users can handle only a few ones. Therefore, scoring functions are central to decoy selection. They combine measurable features into a single number indicator of decoy quality. Unfortunately, current scoring functions do not consistently select the best decoys. Machine learning techniques offer great potential to improve decoy scoring. This paper presents two machine-learning based scoring functions to predict the quality of proteins structures, i.e., the similarity between the predicted structure and the experimental one without knowing the latter. We use different metrics to compare these scoring functions against three state-of-the-art scores. This is a first attempt at comparing different scoring functions using the same non-redundant dataset for training and testing and the same features. The results show that adding informative features may be more significant than the method used."
},
{
"pmid": "22004759",
"title": "GOAP: a generalized orientation-dependent, all-atom statistical potential for protein structure prediction.",
"abstract": "An accurate scoring function is a key component for successful protein structure prediction. To address this important unsolved problem, we develop a generalized orientation and distance-dependent all-atom statistical potential. The new statistical potential, generalized orientation-dependent all-atom potential (GOAP), depends on the relative orientation of the planes associated with each heavy atom in interacting pairs. GOAP is a generalization of previous orientation-dependent potentials that consider only representative atoms or blocks of side-chain or polar atoms. GOAP is decomposed into distance- and angle-dependent contributions. The DFIRE distance-scaled finite ideal gas reference state is employed for the distance-dependent component of GOAP. GOAP was tested on 11 commonly used decoy sets containing 278 targets, and recognized 226 native structures as best from the decoys, whereas DFIRE recognized 127 targets. The major improvement comes from decoy sets that have homology-modeled structures that are close to native (all within ∼4.0 Å) or from the ROSETTA ab initio decoy set. For these two kinds of decoys, orientation-independent DFIRE or only side-chain orientation-dependent RWplus performed poorly. Although the OPUS-PSP block-based orientation-dependent, side-chain atom contact potential performs much better (recognizing 196 targets) than DFIRE, RWplus, and dDFIRE, it is still ∼15% worse than GOAP. Thus, GOAP is a promising advance in knowledge-based, all-atom statistical potentials. GOAP is available for download at http://cssb.biology.gatech.edu/GOAP."
},
{
"pmid": "22963006",
"title": "Improved model quality assessment using ProQ2.",
"abstract": "BACKGROUND\nEmploying methods to assess the quality of modeled protein structures is now standard practice in bioinformatics. In a broad sense, the techniques can be divided into methods relying on consensus prediction on the one hand, and single-model methods on the other. Consensus methods frequently perform very well when there is a clear consensus, but this is not always the case. In particular, they frequently fail in selecting the best possible model in the hard cases (lacking consensus) or in the easy cases where models are very similar. In contrast, single-model methods do not suffer from these drawbacks and could potentially be applied on any protein of interest to assess quality or as a scoring function for sampling-based refinement.\n\n\nRESULTS\nHere, we present a new single-model method, ProQ2, based on ideas from its predecessor, ProQ. ProQ2 is a model quality assessment algorithm that uses support vector machines to predict local as well as global quality of protein models. Improved performance is obtained by combining previously used features with updated structural and predicted features. The most important contribution can be attributed to the use of profile weighting of the residue specific features and the use features averaged over the whole model even though the prediction is still local.\n\n\nCONCLUSIONS\nProQ2 is significantly better than its predecessors at detecting high quality models, improving the sum of Z-scores for the selected first-ranked models by 20% and 32% compared to the second-best single-model method in CASP8 and CASP9, respectively. The absolute quality assessment of the models at both local and global level is also improved. The Pearson's correlation between the correct and local predicted score is improved from 0.59 to 0.70 on CASP8 and from 0.62 to 0.68 on CASP9; for global score to the correct GDT_TS from 0.75 to 0.80 and from 0.77 to 0.80 again compared to the second-best single methods in CASP8 and CASP9, respectively. ProQ2 is available at http://proq2.wallnerlab.org."
},
{
"pmid": "30874723",
"title": "Protein model quality assessment using 3D oriented convolutional neural networks.",
"abstract": "MOTIVATION\nProtein model quality assessment (QA) is a crucial and yet open problem in structural bioinformatics. The current best methods for single-model QA typically combine results from different approaches, each based on different input features constructed by experts in the field. Then, the prediction model is trained using a machine-learning algorithm. Recently, with the development of convolutional neural networks (CNN), the training paradigm has changed. In computer vision, the expert-developed features have been significantly overpassed by automatically trained convolutional filters. This motivated us to apply a three-dimensional (3D) CNN to the problem of protein model QA.\n\n\nRESULTS\nWe developed Ornate (Oriented Routed Neural network with Automatic Typing)-a novel method for single-model QA. Ornate is a residue-wise scoring function that takes as input 3D density maps. It predicts the local (residue-wise) and the global model quality through a deep 3D CNN. Specifically, Ornate aligns the input density map, corresponding to each residue and its neighborhood, with the backbone topology of this residue. This circumvents the problem of ambiguous orientations of the initial models. Also, Ornate includes automatic identification of atom types and dynamic routing of the data in the network. Established benchmarks (CASP 11 and CASP 12) demonstrate the state-of-the-art performance of our approach among single-model QA methods.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe method is available at https://team.inria.fr/nano-d/software/Ornate/. It consists of a C++ executable that transforms molecular structures into volumetric density maps, and a Python code based on the TensorFlow framework for applying the Ornate model to these maps.\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "31487288",
"title": "Protein model accuracy estimation based on local structure quality assessment using 3D convolutional neural network.",
"abstract": "In protein tertiary structure prediction, model quality assessment programs (MQAPs) are often used to select the final structural models from a pool of candidate models generated by multiple templates and prediction methods. The 3-dimensional convolutional neural network (3DCNN) is an expansion of the 2DCNN and has been applied in several fields, including object recognition. The 3DCNN is also used for MQA tasks, but the performance is low due to several technical limitations related to protein tertiary structures, such as orientation alignment. We proposed a novel single-model MQA method based on local structure quality evaluation using a deep neural network containing 3DCNN layers. The proposed method first assesses the quality of local structures for each residue and then evaluates the quality of whole structures by integrating estimated local qualities. We analyzed the model using the CASP11, CASP12, and 3D-Robot datasets and compared the performance of the model with that of the previous 3DCNN method based on whole protein structures. The proposed method showed a significant improvement compared to the previous 3DCNN method for multiple evaluation measures. We also compared the proposed method to other state-of-the-art methods. Our method showed better performance than the previous 3DCNN-based method and comparable accuracy as the current best single-model methods; particularly, in CASP11 stage2, our method showed a Pearson coefficient of 0.486, which was better than those of the best single-model methods (0.366-0.405). A standalone version of the proposed method and data files are available at https://github.com/ishidalab-titech/3DCNN_MQA."
},
{
"pmid": "28062450",
"title": "The structural bioinformatics library: modeling in biomolecular science and beyond.",
"abstract": "Motivation\nSoftware in structural bioinformatics has mainly been application driven. To favor practitioners seeking off-the-shelf applications, but also developers seeking advanced building blocks to develop novel applications, we undertook the design of the Structural Bioinformatics Library ( SBL , http://sbl.inria.fr ), a generic C ++/python cross-platform software library targeting complex problems in structural bioinformatics. Its tenet is based on a modular design offering a rich and versatile framework allowing the development of novel applications requiring well specified complex operations, without compromising robustness and performances.\n\n\nResults\nThe SBL involves four software components (1-4 thereafter). For end-users, the SBL provides ready to use, state-of-the-art (1) applications to handle molecular models defined by unions of balls, to deal with molecular flexibility, to model macro-molecular assemblies. These applications can also be combined to tackle integrated analysis problems. For developers, the SBL provides a broad C ++ toolbox with modular design, involving core (2) algorithms , (3) biophysical models and (4) modules , the latter being especially suited to develop novel applications. The SBL comes with a thorough documentation consisting of user and reference manuals, and a bugzilla platform to handle community feedback.\n\n\nAvailability and Implementation\nThe SBL is available from http://sbl.inria.fr.\n\n\nContact\[email protected].\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "12381853",
"title": "Distance-scaled, finite ideal-gas reference state improves structure-derived potentials of mean force for structure selection and stability prediction.",
"abstract": "The distance-dependent structure-derived potentials developed so far all employed a reference state that can be characterized as a residue (atom)-averaged state. Here, we establish a new reference state called the distance-scaled, finite ideal-gas reference (DFIRE) state. The reference state is used to construct a residue-specific all-atom potential of mean force from a database of 1011 nonhomologous (less than 30% homology) protein structures with resolution less than 2 A. The new all-atom potential recognizes more native proteins from 32 multiple decoy sets, and raises an average Z-score by 1.4 units more than two previously developed, residue-specific, all-atom knowledge-based potentials. When only backbone and C(beta) atoms are used in scoring, the performance of the DFIRE-based potential, although is worse than that of the all-atom version, is comparable to those of the previously developed potentials on the all-atom level. In addition, the DFIRE-based all-atom potential provides the most accurate prediction of the stabilities of 895 mutants among three knowledge-based all-atom potentials. Comparison with several physical-based potentials is made."
},
{
"pmid": "21060880",
"title": "A novel side-chain orientation dependent potential derived from random-walk reference state for protein fold selection and structure prediction.",
"abstract": "BACKGROUND\nAn accurate potential function is essential to attack protein folding and structure prediction problems. The key to developing efficient knowledge-based potential functions is to design reference states that can appropriately counteract generic interactions. The reference states of many knowledge-based distance-dependent atomic potential functions were derived from non-interacting particles such as ideal gas, however, which ignored the inherent sequence connectivity and entropic elasticity of proteins.\n\n\nMETHODOLOGY\nWe developed a new pair-wise distance-dependent, atomic statistical potential function (RW), using an ideal random-walk chain as reference state, which was optimized on CASP models and then benchmarked on nine structural decoy sets. Second, we incorporated a new side-chain orientation-dependent energy term into RW (RWplus) and found that the side-chain packing orientation specificity can further improve the decoy recognition ability of the statistical potential.\n\n\nSIGNIFICANCE\nRW and RWplus demonstrate a significantly better ability than the best performing pair-wise distance-dependent atomic potential functions in both native and near-native model selections. It has higher energy-RMSD and energy-TM-score correlations compared with other potentials of the same type in real-life structure assembly decoys. When benchmarked with a comprehensive list of publicly available potentials, RW and RWplus shows comparable performance to the state-of-the-art scoring functions, including those combining terms from multiple resources. These data demonstrate the usefulness of random-walk chain as reference states which correctly account for sequence connectivity and entropic elasticity of proteins. It shows potential usefulness in structure recognition and protein folding simulations. The RW and RWplus potentials, as well as the newly generated I-TASSER decoys, are freely available in http://zhanglab.ccmb.med.umich.edu/RW."
},
{
"pmid": "18260109",
"title": "Specific interactions for ab initio folding of protein terminal regions with secondary structures.",
"abstract": "Proteins fold into unique three-dimensional structures by specific, orientation-dependent interactions between amino acid residues. Here, we extract orientation-dependent interactions from protein structures by treating each polar atom as a dipole with a direction. The resulting statistical energy function successfully refolds 13 out of 16 fully unfolded secondary-structure terminal regions of 10-23 amino acid residues in 15 small proteins. Dissecting the orientation-dependent energy function reveals that the orientation preference between hydrogen-bonded atoms is not enough to account for the structural specificity of proteins. The result has significant implications on the theoretical and experimental searches for specific interactions involved in protein folding and molecular recognition between proteins and other biologically active molecules."
},
{
"pmid": "28430426",
"title": "The Rosetta All-Atom Energy Function for Macromolecular Modeling and Design.",
"abstract": "Over the past decade, the Rosetta biomolecular modeling suite has informed diverse biological questions and engineering challenges ranging from interpretation of low-resolution structural data to design of nanomaterials, protein therapeutics, and vaccines. Central to Rosetta's success is the energy function: a model parametrized from small-molecule and X-ray crystal structure data used to approximate the energy associated with each biomolecule conformation. This paper describes the mathematical models and physical concepts that underlie the latest Rosetta energy function, called the Rosetta Energy Function 2015 (REF15). Applying these concepts, we explain how to use Rosetta energies to identify and analyze the features of biomolecular models. Finally, we discuss the latest advances in the energy function that extend its capabilities from soluble proteins to also include membrane proteins, peptides containing noncanonical amino acids, small molecules, carbohydrates, nucleic acids, and other macromolecules."
},
{
"pmid": "15207004",
"title": "Improved protein structure selection using decoy-dependent discriminatory functions.",
"abstract": "BACKGROUND\nA key component in protein structure prediction is a scoring or discriminatory function that can distinguish near-native conformations from misfolded ones. Various types of scoring functions have been developed to accomplish this goal, but their performance is not adequate to solve the structure selection problem. In addition, there is poor correlation between the scores and the accuracy of the generated conformations.\n\n\nRESULTS\nWe present a simple and nonparametric formula to estimate the accuracy of predicted conformations (or decoys). This scoring function, called the density score function, evaluates decoy conformations by performing an all-against-all Calpha RMSD (Root Mean Square Deviation) calculation in a given decoy set. We tested the density score function on 83 decoy sets grouped by their generation methods (4state_reduced, fisa, fisa_casp3, lmds, lattice_ssfit, semfold and Rosetta). The density scores have correlations as high as 0.9 with the Calpha RMSDs of the decoy conformations, measured relative to the experimental conformation for each decoy. We previously developed a residue-specific all-atom probability discriminatory function (RAPDF), which compiles statistics from a database of experimentally determined conformations, to aid in structure selection. Here, we present a decoy-dependent discriminatory function called self-RAPDF, where we compiled the atom-atom contact probabilities from all the conformations in a decoy set instead of using an ensemble of native conformations, with a weighting scheme based on the density scores. The self-RAPDF has a higher correlation with Calpha RMSD than RAPDF for 76/83 decoy sets, and selects better near-native conformations for 62/83 decoy sets. Self-RAPDF may be useful not only for selecting near-native conformations from decoy sets, but also for fold simulations and protein structure refinement.\n\n\nCONCLUSIONS\nBoth the density score and the self-RAPDF functions are decoy-dependent scoring functions for improved protein structure selection. Their success indicates that information from the ensemble of decoy conformations can be used to derive statistical probabilities and facilitate the identification of near-native structures."
},
{
"pmid": "11108700",
"title": "MaxSub: an automated measure for the assessment of protein structure prediction quality.",
"abstract": "MOTIVATION\nEvaluating the accuracy of predicted models is critical for assessing structure prediction methods. Because this problem is not trivial, a large number of different assessment measures have been proposed by various authors, and it has already become an active subfield of research (Moult et al. (1997,1999) and CAFASP (Fischer et al. 1999) prediction experiments have demonstrated that it has been difficult to choose one single, 'best' method to be used in the evaluation. Consequently, the CASP3 evaluation was carried out using an extensive set of especially developed numerical measures, coupled with human-expert intervention. As part of our efforts towards a higher level of automation in the structure prediction field, here we investigate the suitability of a fully automated, simple, objective, quantitative and reproducible method that can be used in the automatic assessment of models in the upcoming CAFASP2 experiment. Such a method should (a) produce one single number that measures the quality of a predicted model and (b) perform similarly to human-expert evaluations.\n\n\nRESULTS\nMaxSub is a new and independently developed method that further builds and extends some of the evaluation methods introduced at CASP3. MaxSub aims at identifying the largest subset of C(alpha) atoms of a model that superimpose 'well' over the experimental structure, and produces a single normalized score that represents the quality of the model. Because there exists no evaluation method for assessment measures of predicted models, it is not easy to evaluate how good our new measure is. Even though an exact comparison of MaxSub and the CASP3 assessment is not straightforward, here we use a test-bed extracted from the CASP3 fold-recognition models. A rough qualitative comparison of the performance of MaxSub vis-a-vis the human-expert assessment carried out at CASP3 shows that there is a good agreement for the more accurate models and for the better predicting groups. As expected, some differences were observed among the medium to poor models and groups. Overall, the top six predicting groups ranked using the fully automated MaxSub are also the top six groups ranked at CASP3. We conclude that MaxSub is a suitable method for the automatic evaluation of models."
},
{
"pmid": "23184517",
"title": "Fast algorithm for population-based protein structural model analysis.",
"abstract": "De novo protein structure prediction often generates a large population of candidates (models), and then selects near-native models through clustering. Existing structural model clustering methods are time consuming due to pairwise distance calculation between models. In this paper, we present a novel method for fast model clustering without losing the clustering accuracy. Instead of the commonly used pairwise root mean square deviation and TM-score values, we propose two new distance measures, Dscore1 and Dscore2, based on the comparison of the protein distance matrices for describing the difference and the similarity among models, respectively. The analysis indicates that both the correlation between Dscore1 and root mean square deviation and the correlation between Dscore2 and TM-score are high. Compared to the existing methods with calculation time quadratic to the number of models, our Dscore1-based clustering achieves a linearly time complexity while obtaining almost the same accuracy for near-native model selection. By using Dscore2 to select representatives of clusters, we can further improve the quality of the representatives with little increase in computing time. In addition, for large size (~500 k) models, we can give a fast data visualization based on the Dscore distribution in seconds to minutes. Our method has been implemented in a package named MUFOLD-CL, available at http://mufold.org/clustering.php."
},
{
"pmid": "27041353",
"title": "Protein single-model quality assessment by feature-based probability density functions.",
"abstract": "Protein quality assessment (QA) has played an important role in protein structure prediction. We developed a novel single-model quality assessment method-Qprob. Qprob calculates the absolute error for each protein feature value against the true quality scores (i.e. GDT-TS scores) of protein structural models, and uses them to estimate its probability density distribution for quality assessment. Qprob has been blindly tested on the 11th Critical Assessment of Techniques for Protein Structure Prediction (CASP11) as MULTICOM-NOVEL server. The official CASP result shows that Qprob ranks as one of the top single-model QA methods. In addition, Qprob makes contributions to our protein tertiary structure predictor MULTICOM, which is officially ranked 3rd out of 143 predictors. The good performance shows that Qprob is good at assessing the quality of models of hard targets. These results demonstrate that this new probability density distribution based method is effective for protein single-model quality assessment and is useful for protein structure prediction. The webserver of Qprob is available at: http://calla.rnet.missouri.edu/qprob/. The software is now freely available in the web server of Qprob."
},
{
"pmid": "30590384",
"title": "Smooth orientation-dependent scoring function for coarse-grained protein quality assessment.",
"abstract": "MOTIVATION\nProtein quality assessment (QA) is a crucial element of protein structure prediction, a fundamental and yet open problem in structural bioinformatics. QA aims at ranking predicted protein models to select the best candidates. The assessment can be performed based either on a single model or on a consensus derived from an ensemble of models. The latter strategy can yield very high performance but substantially depends on the pool of available candidate models, which limits its applicability. Hence, single-model QA methods remain an important research target, also because they can assist the sampling of candidate models.\n\n\nRESULTS\nWe present a novel single-model QA method called SBROD. The SBROD (Smooth Backbone-Reliant Orientation-Dependent) method uses only the backbone protein conformation, and hence it can be applied to scoring coarse-grained protein models. The proposed method deduces its scoring function from a training set of protein models. The SBROD scoring function is composed of four terms related to different structural features: residue-residue orientations, contacts between backbone atoms, hydrogen bonding and solvent-solute interactions. It is smooth with respect to atomic coordinates and thus is potentially applicable to continuous gradient-based optimization of protein conformations. Furthermore, it can also be used for coarse-grained protein modeling and computational protein design. SBROD proved to achieve similar performance to state-of-the-art single-model QA methods on diverse datasets (CASP11, CASP12 and MOULDER).\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe standalone application implemented in C++ and Python is freely available at https://gitlab.inria.fr/grudinin/sbrod and supported on Linux, MacOS and Windows.\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."
}
] |
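The consensus idea summarized in the density-score abstract above (PMID 15207004), namely ranking each decoy by an all-against-all Calpha RMSD computed within the decoy set itself, can be made concrete with a short sketch. The Python/NumPy code below is only an illustrative reconstruction under stated assumptions, not the published implementation: the function names, the Kabsch superposition step, and the use of the mean pairwise RMSD as the centrality measure are choices made here for clarity.

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD between two (N, 3) C-alpha coordinate arrays after optimal superposition."""
    P = P - P.mean(axis=0)                      # center both structures
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                                 # 3x3 covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d = 1.0 if np.linalg.det(Vt.T @ U.T) > 0 else -1.0
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T     # proper rotation, reflections excluded
    diff = P @ R.T - Q
    return float(np.sqrt((diff ** 2).sum() / len(P)))

def density_scores(decoys):
    """Mean pairwise RMSD of each decoy against all others; lower = more central."""
    n = len(decoys)
    pair = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            pair[i, j] = pair[j, i] = kabsch_rmsd(decoys[i], decoys[j])
    return pair.sum(axis=1) / (n - 1)

# Toy usage with synthetic decoys: decoy 0 is the unperturbed reference and
# increasingly perturbed copies follow, so low scores should concentrate on
# the least perturbed models.
rng = np.random.default_rng(0)
reference = rng.normal(size=(60, 3))
decoys = [reference + 0.2 * k * rng.normal(size=(60, 3)) for k in range(6)]
print("most central decoy:", int(np.argmin(density_scores(decoys))))
```

The point of the sketch is only the quadratic all-against-all structure of the computation; it is this cost that the MUFOLD-CL abstract above addresses with its linear-time Dscore variants.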
Frontiers in Chemistry | 31750290 | PMC6848380 | 10.3389/fchem.2019.00707 | A New Genetic Algorithm Approach Applied to Atomic and Molecular Cluster Studies | A new procedure is suggested to improve genetic algorithms for the prediction of structures of nanoparticles. The strategy focuses on managing the creation of new individuals by evaluating the efficiency of operators (o1, o2,…,o13) in generating well-adapted offspring. This is done by increasing the creation rate of operators with better performance and decreasing that rate for the ones which poorly fulfill the task of creating a favorable new generation. Additionally, several strategies (thirteen at this level of approach) from different optimization techniques were implemented on the actual genetic algorithm. Trials were performed on the general case studies of 26 and 55-atom clusters with binding energy governed by a Lennard-Jones empirical potential, with all individuals being created by each of the particular thirteen operators tested. An 18-atom carbon cluster and some polynitrogen systems were also studied within REBO potential and quantum approaches, respectively. Results show that our management strategy could avoid bad operators, keeping the overall method performance with great confidence. Moreover, amongst the operators taken from the literature and tested herein, the genetic algorithm was faster when the generation of new individuals was carried out by the twist operator, even when compared to commonly used operators such as the Deaven and Ho cut-and-splice crossover. Operators typically designed for basin-hopping methodology also performed well on the proposed genetic algorithm scheme. | 2. Related Work. It is already well discussed in the literature that, in order to guarantee efficiency in convergence and appropriate exploration of the PES associated with atomic and molecular clusters, evolutionary algorithms employed in global optimization problems must ensure population diversity (Hartke, 1999; Cheng et al., 2004; Grosso et al., 2007; Pereira and Marques, 2009; Marques et al., 2018). Therefore, estimating how similar the structures composing the evolving population are can provide valuable information to assist the evolutionary procedure. In the work of Hartke (1999), it is proposed that a minimum degree of exploration of the PES is ensured by making part of the population always composed of mutants. That means a set of structures that have been randomly modified will be present throughout the evolutionary procedure, regardless of whether they are better adapted or not. In the same work, a minimum energy difference between structures is established to maintain diversity, and a balance between optimization performance and exploration of the PES is proposed through the simultaneous use of a random operator such as the Deaven and Ho (1995) cutting plane and a biased version of this operator in which the cluster is separated into its best and worst halves. Hartke (1999) also proposes a measure based on the two-dimensional projections of cluster structures that can distribute different types of geometries into niches. Thus, different ranges of values can be assigned to different types of geometries, allowing the evaluation of structure similarities and enabling one to avoid population stagnation. Cheng et al.
(2004) propose that structure similarity checking should always be based on topological information, and that measurements of the distance between energy minimum structures should be carried out by comparing numerical values associated with structure similarities. In their work, a connectivity table for cluster similarity checking is proposed, in which the connectivity information of a cluster is characterized according to the number of atoms having i nearest neighbors within the cluster. By using this connectivity table together with the evaluation of the fitness of each individual, they managed to balance diversity and convergence efficiency. Pereira and Marques (2009) state that one should consider structural information for estimating dissimilarities among cluster structures when searching for energy minima within an evolutionary algorithm approach, instead of taking into account fitness values. They have employed a combination of an evolutionary approach with a local search method that uses derivative information to search for the nearest local minimum without requiring any previous knowledge about the problem being solved. The authors show that maintaining diversity is the main issue to guarantee effectiveness, which was carried out by the application of three distinct distance measures to estimate the dissimilarity between structures. As for recent advances in the development of genetic algorithms, Heiles et al. coupled the Plane-Wave Self-Consistent Field (PWscf) package with the Birmingham Cluster Genetic Algorithm (BCGA), allowing the study of Au-Ag nanoalloys through density functional theory (Heiles et al., 2012). Zayed et al. implemented what they called a universal genetic algorithm, making use of Python's large collection of libraries and of the scaling capabilities of a pool genetic algorithm (Zayed et al., 2017). Vilhelmsen and Hammer proposed an inexpensive strategy to eliminate similar structures from the population (Vilhelmsen and Hammer, 2012). Lazauskas et al. proposed a pre-screening step to eliminate structures with a high probability of convergence failure during local minimization (Lazauskas et al., 2017). In the past we proposed two new operators, namely the annihilator and history operators (Guimarães et al., 2002), which have proven over the years (Lordeiro et al., 2003; Rodrigues et al., 2008; Silva et al., 2014a,b) to be quite efficient for determining global minima in atomic and molecular cluster studies where many local minima were present. Regarding the creation of new individuals, one can observe a broad variation among the methodologies available in the literature. In general, each operator application rate is kept constant throughout the GA execution. For instance, Wang et al. used the values 0.5, 0.3, and 0.2 for the mating, mutation and exchange rates, respectively, in their global minimization (Wang et al., 2018). Zhao et al. proposed values between 10% and 30% for the mutation rate (Zhao et al., 2016), while in an outline of the evolutionary principles of GAs, Heiles and Johnston describe a parameter that defines the probability of mutation, pmut (Heiles and Johnston, 2013). Let ntot be the total number of individuals to be created after energy minimization of an arbitrary generation; among them, on average, pmut·ntot individuals are created by mutation operators, while (1 − pmut)·ntot are created by crossover or recombination methods (Heiles and Johnston, 2013). Finally, Rondina et al.
used a dynamic strategy to manage operators in a basin-hopping technique (Rondina and Da Silva, 2013). In this work, we propose a method with dynamic management of evolutionary operators for genetic algorithms that, in principle, could lead to a more efficient way to survey the PES of atomic and molecular clusters than our previous GA version (Guimarães et al., 2002; Lordeiro et al., 2003); a minimal sketch of this operator-management idea is given after the reference list of this record. The paper is divided as follows: section 3 outlines a standard GA procedure, gives the details of our algorithm, and describes all the operators employed as well as the proposed management strategy. The comparison between the different builds tested and the evaluation of their behavior according to the model system employed are presented and discussed in section 4. The main conclusions are gathered in section 5. | [
"26495908",
"23758367",
"10059656",
"22583313",
"22012270",
"17983228",
"31023921",
"31416933",
"18412466",
"28323250",
"28252128",
"14525223",
"31015599",
"11289976",
"12908434",
"23957311",
"25208555",
"24691391",
"29982860",
"17238254",
"22540598",
"21332209",
"28068082"
] | [
{
"pmid": "26495908",
"title": "APL: An angle probability list to improve knowledge-based metaheuristics for the three-dimensional protein structure prediction.",
"abstract": "Tertiary protein structure prediction is one of the most challenging problems in structural bioinformatics. Despite the advances in algorithm development and computational strategies, predicting the folded structure of a protein only from its amino acid sequence remains as an unsolved problem. We present a new computational approach to predict the native-like three-dimensional structure of proteins. Conformational preferences of amino acid residues and secondary structure information were obtained from protein templates stored in the Protein Data Bank and represented as an Angle Probability List. Two knowledge-based prediction methods based on Genetic Algorithms and Particle Swarm Optimization were developed using this information. The proposed method has been tested with twenty-six case studies selected to validate our approach with different classes of proteins and folding patterns. Stereochemical and structural analysis were performed for each predicted three-dimensional structure. Results achieved suggest that the Angle Probability List can improve the effectiveness of metaheuristics used to predicted the three-dimensional structure of protein molecules by reducing its conformational search space."
},
{
"pmid": "23758367",
"title": "A sphere-cut-splice crossover for the evolution of cluster structures.",
"abstract": "A new crossover operator is proposed to evolve the structures of the atomic clusters. It uses a sphere rather than a plane to cut and splice the parent structures. The child cluster is constructed by the atoms of one parent which lie inside the sphere, and the atoms of the other parent which lie outside the sphere. It can reliably produce reasonable offspring and preserve the good schemata in parent structures, avoiding the drawbacks of the classical plane-cut-splice crossover in the global searching ability and the local optimization speed. Results of Lennard-Jones clusters (30 ≤ N ≤ 500) show that at the same settings the genetic algorithm with the sphere-cut-splice crossover exhibits better performance than the one with the plane-cut-splice crossover. The average number of local minimizations needed to find the global minima and the average number of energy evaluation of each local minimization in the sphere scheme is 0.8075 and 0.8386 of that in the plane scheme, respectively. The mean speed-up ratio for the entire testing clusters reaches 1.8207. Moreover, the sphere scheme is particularly suitable for large clusters and the mean speed-up ratio reaches 2.3520 for the clusters with 110 ≤ N ≤ 500. The comparison with other successful methods in previous studies also demonstrates its good performance. Finally, a further analysis is presented on the statistical features of the cutting sphere and a modified strategy that reduces the probability of using tiny and large spheres exhibits better global search."
},
{
"pmid": "22012270",
"title": "Dopant-induced 2D-3D transition in small Au-containing clusters: DFT-global optimisation of 8-atom Au-Ag nanoalloys.",
"abstract": "A genetic algorithm (GA) coupled with density functional theory (DFT) calculations is used to perform global optimisations for all compositions of 8-atom Au-Ag bimetallic clusters. The performance of this novel GA-DFT approach for bimetallic nanoparticles is tested for structures reported in the literature. New global minimum structures for various compositions are predicted and the 2D-3D transition is located. Results are explained with the aid of an analysis of the electronic density of states. The chemical ordering of the predicted lowest energy isomers are explained via a detailed analysis of the charge separation and mixing energies of the bimetallic clusters. Finally, dielectric properties are computed and the composition and dimensionality dependence of the electronic polarizability and dipole moment is discussed, enabling predictions to be made for future electric beam deflection experiments."
},
{
"pmid": "17983228",
"title": "Boron rings enclosing planar hypercoordinate group 14 elements.",
"abstract": "Sets of boron rings enclosing planar hypercoordinate group 14 elements (ABn(n-8); A = group 14 element; n = 6-10) are designed systematically based on geometrical and electronic fit principles: the size of a boron ring must accommodate the central atom comfortably. The electronic structures of the planar minima with hypercoordinate group 14 elements are doubly aromatic with six pi and six in-plane radial MO systems (radial MOs are comprised of boron p orbitals pointing toward the ring center). This is confirmed by induced magnetic field and nucleus-independent chemical shift (NICS) computations. The weakness of the \"partial\" A-B bonds is compensated by their unusually large number. Although a C7v pyramidal SiB8 structure is more stable than the D8h isomer, Born-Oppenheimer molecular dynamics simulations show the resistance of the D8h local minimum against deformation and isomerization. Such evidence of the viability of the boron ring minima with group 14 elements encourages experimental realization."
},
{
"pmid": "31023921",
"title": "Imaging covalent bond formation by H atom scattering from graphene.",
"abstract": "Viewing the atomic-scale motion and energy dissipation pathways involved in forming a covalent bond is a longstanding challenge for chemistry. We performed scattering experiments of H atoms from graphene and observed a bimodal translational energy loss distribution. Using accurate first-principles dynamics simulations, we show that the quasi-elastic channel involves scattering through the physisorption well where collision sites are near the centers of the six-membered C-rings. The second channel results from transient C-H bond formation, where H atoms lose 1 to 2 electron volts of energy within a 10-femtosecond interaction time. This remarkably rapid form of intramolecular vibrational relaxation results from the C atom's rehybridization during bond formation and is responsible for an unexpectedly high sticking probability of H on graphene."
},
{
"pmid": "31416933",
"title": "An sp-hybridized molecular carbon allotrope, cyclo[18]carbon.",
"abstract": "Carbon allotropes built from rings of two-coordinate atoms, known as cyclo[n]carbons, have fascinated chemists for many years, but until now they could not be isolated or structurally characterized because of their high reactivity. We generated cyclo[18]carbon (C18) using atom manipulation on bilayer NaCl on Cu(111) at 5 kelvin by eliminating carbon monoxide from a cyclocarbon oxide molecule, C24O6 Characterization of cyclo[18]carbon by high-resolution atomic force microscopy revealed a polyynic structure with defined positions of alternating triple and single bonds. The high reactivity of cyclocarbon and cyclocarbon oxides allows covalent coupling between molecules to be induced by atom manipulation, opening an avenue for the synthesis of other carbon allotropes and carbon-rich materials from the coalescence of cyclocarbon molecules."
},
{
"pmid": "18412466",
"title": "New algorithm in the basin hopping Monte Carlo to find the global minimum structure of unary and binary metallic nanoclusters.",
"abstract": "The basin-hopping Monte Carlo algorithm was modified to more effectively determine a global minimum structure in pure and binary metallic nanoclusters. For a pure metallic Ag55 nanocluster, the newly developed quadratic basin-hopping Monte Carlo algorithm is 3.8 times more efficient than the standard basin-hopping Monte Carlo algorithm. For a bimetallic Ag42Pd13 nanocluster, the new algorithm succeeds in finding the global minimum structure by 18.3% even though the standard basin-hopping Monte Carlo algorithm fails to achieve it."
},
{
"pmid": "28323250",
"title": "The atomic simulation environment-a Python library for working with atoms.",
"abstract": "The atomic simulation environment (ASE) is a software package written in the Python programming language with the aim of setting up, steering, and analyzing atomistic simulations. In ASE, tasks are fully scripted in Python. The powerful syntax of Python combined with the NumPy array library make it possible to perform very complex simulation tasks. For example, a sequence of calculations may be performed with the use of a simple 'for-loop' construction. Calculations of energy, forces, stresses and other quantities are performed through interfaces to many external electronic structure codes or force fields using a uniform interface. On top of this calculator interface, ASE provides modules for performing many standard simulation tasks such as structure optimization, molecular dynamics, handling of constraints and performing nudged elastic band calculations."
},
{
"pmid": "28252128",
"title": "An efficient genetic algorithm for structure prediction at the nanoscale.",
"abstract": "We have developed and implemented a new global optimization technique based on a Lamarckian genetic algorithm with the focus on structure diversity. The key process in the efficient search on a given complex energy landscape proves to be the removal of duplicates that is achieved using a topological analysis of candidate structures. The careful geometrical prescreening of newly formed structures and the introduction of new mutation move classes improve the rate of success further. The power of the developed technique, implemented in the Knowledge Led Master Code, or KLMC, is demonstrated by its ability to locate and explore a challenging double funnel landscape of a Lennard-Jones 38 atom system (LJ38). We apply the redeveloped KLMC to investigate three chemically different systems: ionic semiconductor (ZnO)1-32, metallic Ni13 and covalently bonded C60. All four systems have been systematically explored on the energy landscape defined using interatomic potentials. The new developments allowed us to successfully locate the double funnels of LJ38, find new local and global minima for ZnO clusters, extensively explore the Ni13 and C60 (the buckminsterfullerene, or buckyball) potential energy surfaces."
},
{
"pmid": "14525223",
"title": "Unbiased global optimization of Lennard-Jones clusters for N < or =201 using the conformational space annealing method.",
"abstract": "We apply the conformational space annealing method to the Lennard-Jones clusters and find all known lowest energy configurations up to 201 atoms, without using extra information of the problem such as the structures of the known global energy minima. In addition, the robustness of the algorithm with respect to the randomness of initial conditions of the problem is demonstrated by ten successful independent runs up to 183 atoms. Our results indicate that this method is a general and yet efficient global optimization algorithm applicable to many systems."
},
{
"pmid": "31015599",
"title": "Iron oxide nanoclusters for T 1 magnetic resonance imaging of non-human primates.",
"abstract": "Iron-oxide-based contrast agents for magnetic resonance imaging (MRI) had been clinically approved in the United States and Europe, yet most of these nanoparticle products were discontinued owing to failures to meet rigorous clinical requirements. Significant advances have been made in the synthesis of magnetic nanoparticles and their biomedical applications, but several major challenges remain for their clinical translation, in particular large-scale and reproducible synthesis, systematic toxicity assessment, and their preclinical evaluation in MRI of large animals. Here, we report the results of a toxicity study of iron oxide nanoclusters of uniform size in large animal models, including beagle dogs and the more clinically relevant macaques. We also show that iron oxide nanoclusters can be used as T 1 MRI contrast agents for high-resolution magnetic resonance angiography in beagle dogs and macaques, and that dynamic MRI enables the detection of cerebral ischaemia in these large animals. Iron oxide nanoclusters show clinical potential as next-generation MRI contrast agents."
},
{
"pmid": "11289976",
"title": "Structure and magnetism of neutral and anionic palladium clusters.",
"abstract": "The properties of neutral and anionic Pd(N) clusters were investigated with spin-density-functional calculations. The ground-state structures are three dimensional for N>3 and they are magnetic with a spin triplet for 2 < or = N < or = 7 and a spin nonet for N = 13 neutral clusters. Structural and spin isomers were determined and an anomalous increase of the magnetic moment with temperature is predicted for a Pd7 ensemble. Vertical electron detachment and ionization energies were calculated and the former agrees well with measured values for Pd(-)(N)."
},
{
"pmid": "12908434",
"title": "Computational engineering of metallic nanostructures and nanomachines.",
"abstract": "Small structures with dimensions in the nanometer regime play an important role within a lot of modern technological branches like, for example, genetics, chip fabrication, material science, medicine, or chemistry. While highly sophisticated characterization methods would be necessary to study such nanostructures, computational methods and models have made their entrance into the field of nanotechnology. The present work gives an overview of the problems connected with quantum mechanics, many-particle systems, and nanophysical models. Further, the application of molecular dynamics (MD)--a typical computational method suitable for modelling at the nanolevel--is introduced and outlined. The setup and use of specific MD models, advanced computation techniques, and efficient algorithms are discussed, while the focus is laid on the subjects nanodesign and nanoengineering which are demonstrated for the example of metallic nanostructures. Finally, the introduced techniques and methods are applied to stability studies of theoretical nanomachines."
},
{
"pmid": "23957311",
"title": "Revised basin-hopping Monte Carlo algorithm for structure optimization of clusters and nanoparticles.",
"abstract": "Suggestions for improving the Basin-Hopping Monte Carlo (BHMC) algorithm for unbiased global optimization of clusters and nanoparticles are presented. The traditional basin-hopping exploration scheme with Monte Carlo sampling is improved by bringing together novel strategies and techniques employed in different global optimization methods, however, with the care of keeping the underlying algorithm of BHMC unchanged. The improvements include a total of eleven local and nonlocal trial operators tailored for clusters and nanoparticles that allow an efficient exploration of the potential energy surface, two different strategies (static and dynamic) of operator selection, and a filter operator to handle unphysical solutions. In order to assess the efficiency of our strategies, we applied our implementation to several classes of systems, including Lennard-Jones and Sutton-Chen clusters with up to 147 and 148 atoms, respectively, a set of Lennard-Jones nanoparticles with sizes ranging from 200 to 1500 atoms, binary Lennard-Jones clusters with up to 100 atoms, (AgPd)55 alloy clusters described by the Sutton-Chen potential, and aluminum clusters with up to 30 atoms described within the density functional theory framework. Using unbiased global search our implementation was able to reproduce successfully the great majority of all published results for the systems considered and in many cases with more efficiency than the standard BHMC. We were also able to locate previously unknown global minimum structures for some of the systems considered. This revised BHMC method is a valuable tool for aiding theoretical investigations leading to a better understanding of atomic structures of clusters and nanoparticles."
},
{
"pmid": "25208555",
"title": "Growth analysis of sodium-potassium alloy clusters from 7 to 55 atoms through a genetic algorithm approach.",
"abstract": "The potential energy hypersurface associated with sodium-potassium alloy clusters is explored via an enhanced genetic algorithm, where two different operators are added to the standard evolutionary procedure. Based on the recent result that the empirical Gupta many-body potential yields reasonable results for clusters with more than seven atoms, we have employed this function in the evaluation of the energies. Agglomerates from seven to the well-established 55-atom structure are studied, and their second-order energy difference and excess energies are calculated. It is found that the most stable alloys (compared to the homonuclear counterparts) are found with the proportion of sodium atoms in the range of 30 to 40%. The experimental propensity of core-shell segregation is successfully predicted by the current approach."
},
{
"pmid": "24691391",
"title": "Theoretical study of small sodium-potassium alloy clusters through genetic algorithm and quantum chemical calculations.",
"abstract": "Genetic algorithm is employed to survey an empirical potential energy surface for small Na(x)K(y) clusters with x + y ≤ 15, providing initial conditions for electronic structure methods. The minima of such empirical potential are assessed and corrected using high level ab initio methods such as CCSD(T), CR-CCSD(T)-L and MP2, and benchmark results are obtained for specific cases. The results are the first calculations for such small alloy clusters and may serve as a reference for further studies. The validity and choice of a proper functional and basis set for DFT calculations are then explored using the benchmark data, where it was found that the usual DFT approach may fail to provide the correct qualitative result for specific systems. The best general agreement to the benchmark calculations is achieved with def2-TZVPP basis set with SVWN5 functional, although the LANL2DZ basis set (with effective core potential) and SVWN5 functional provided the most cost-effective results."
},
{
"pmid": "29982860",
"title": "A genetic algorithm survey on closed-shell atomic nitrogen clusters employing a quantum chemical approach.",
"abstract": "The DFT potential energy hypersurfaces of closed-shell nitrogen clusters up to ten atoms are explored via a genetic algorithm (GA). An atom-atom distance threshold parameter, controlled by the user, and an \"operator manager\" were added to the standard evolutionary procedure. Both B3LYP and PBE exchange-correlation functionals with 6-31G basis set were explored using the GA. Further evaluation of the structures generated were performed through reoptimization and vibrational analysis within MP2 and CCSD(T) levels employing larger correlation consistent basis set. The binding energies of all stable structures found are calculated and compared, as well as their energies relative to the dissociation into N2, [Formula: see text] and [Formula: see text] molecules. With the present approach, we confirmed some previously reported polynitrogen structures and predicted the stability of new ones. We can also conclude that the energy surface profile clearly depends on the calculation method employed."
},
{
"pmid": "17238254",
"title": "Novel method for geometry optimization of molecular clusters: application to benzene clusters.",
"abstract": "A heuristic and unbiased method for searching optimal geometries of clusters of nonspherical molecules was constructed from the algorithm recently proposed for Lennard-Jones atomic clusters. In the method, global minima are searched by using three operators, interior, surface, and orientation operators. The first operator gives a perturbation on a cluster configuration by moving molecules near the center of mass of a cluster, and the second one modifies a cluster configuration by moving molecules to the most stable positions on the surface of a cluster. The moved molecules are selected by employing a contribution of the molecules to the potential energy of a cluster. The third operator randomly changes the orientations of all molecules. The proposed method was applied to benzene clusters. It was possible to find new global minima for (C6H6)11, (C6H6)14, and (C6H6)15. Global minima for (C6H6)16 to (C6H6)30 are first reported in this article."
},
{
"pmid": "22540598",
"title": "Systematic study of Au6 to Au12 gold clusters on MgO(100) F centers using density-functional theory.",
"abstract": "We present an optimized genetic algorithm used in conjunction with density-functional theory in the search for stable gold clusters and O2 adsorption ensembles in F centers at MgO(100). For Au8 the method recovers known structures and identifies several more stable ones. When O2 adsorption is investigated, the genetic algorithm is used to imitate structural fluxionality, increasing the O2 bond strength by up to 1 eV. Extending the method to Au(6,10,12), strong O2 adsorption configurations are found for all sizes. However, the effect of fluxionality appears to wear off with increasing cluster size."
},
{
"pmid": "21332209",
"title": "Global optimization of binary Lennard-Jones clusters using three perturbation operators.",
"abstract": "Global optimization of binary Lennard-Jones clusters is a challenging problem in computational chemistry. The difficulty lies in not only that there are enormous local minima on the potential energy surface but also that we must determine both the coordinate position and the atom type for each atom and thus have to deal with both continuous and combinatorial optimization. This paper presents a heuristic algorithm (denoted by 3OP) which makes extensive use of three perturbation operators. With these operators, the proposed 3OP algorithm can efficiently move from a poor local minimum to another better local minimum and detect the global minimum through a sequence of local minima with decreasing energy. The proposed 3OP algorithm has been evaluated on a set of 96 × 6 instances with up to 100 atoms. We have found most putative global minima listed in the Cambridge Cluster Database as well as discovering 12 new global minima missed in previous research."
},
{
"pmid": "28068082",
"title": "Gold Nanoclusters Promote Electrocatalytic Water Oxidation at the Nanocluster/CoSe2 Interface.",
"abstract": "Electrocatalytic water splitting to produce hydrogen comprises the hydrogen and oxygen evolution half reactions (HER and OER), with the latter as the bottleneck process. Thus, enhancing the OER performance and understanding the mechanism are critically important. Herein, we report a strategy for OER enhancement by utilizing gold nanoclusters to form cluster/CoSe2 composites; the latter exhibit largely enhanced OER activity in alkaline solutions. The Au25/CoSe2 composite affords a current density of 10 mA cm-2 at small overpotential of ∼0.43 V (cf. CoSe2: ∼0.52 V). The ligand and gold cluster size can also tune the catalytic performance of the composites. Based upon XPS analysis and DFT simulations, we attribute the activity enhancement to electronic interactions between nanocluster and CoSe2, which favors the formation of the important intermediate (OOH) as well as the desorption of oxygen molecules over Aun/CoSe2 composites in the process of water oxidation. Such an atomic level understanding may provide some guidelines for design of OER catalysts."
}
] |
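The operator-management strategy described in the record above (increase the creation rate of operators whose offspring turn out well adapted, decrease it for those that do not, without ever switching an operator off completely) lends itself to a compact sketch. The Python code below is a schematic illustration only, not the authors' published algorithm: the exponential-smoothing update, the decay constant and the minimum-rate floor are assumptions made here, and the operator names are simply those mentioned in the text (cut-and-splice, twist, annihilator, history).

```python
import random

class OperatorManager:
    """Adaptive roulette-wheel selection of GA creation operators.

    Each operator keeps a running success score; operators whose offspring
    improve on their parents are picked more often in later generations.
    """

    def __init__(self, operator_names, min_rate=0.02, decay=0.9):
        self.scores = {name: 1.0 for name in operator_names}  # start uniform
        self.min_rate = min_rate  # floor so no operator is ever fully disabled
        self.decay = decay        # memory of past performance, between 0 and 1

    def rates(self):
        total = sum(self.scores.values())
        raw = {name: s / total for name, s in self.scores.items()}
        floored = {name: max(r, self.min_rate) for name, r in raw.items()}
        norm = sum(floored.values())
        return {name: r / norm for name, r in floored.items()}

    def pick(self):
        names, weights = zip(*self.rates().items())
        return random.choices(names, weights=weights, k=1)[0]

    def report(self, name, parent_energy, child_energy):
        # Reward an operator when its (locally minimized) offspring is better
        # adapted than the parent, i.e. reaches a lower binding energy.
        reward = 1.0 if child_energy < parent_energy else 0.0
        self.scores[name] = self.decay * self.scores[name] + (1.0 - self.decay) * reward

# Hypothetical use inside one GA generation:
manager = OperatorManager(["cut_and_splice", "twist", "annihilator", "history"])
chosen = manager.pick()
# ... apply the chosen operator, locally minimize the offspring, then feed back:
manager.report(chosen, parent_energy=-95.2, child_energy=-96.0)
```

With this kind of update an operator that repeatedly produces poorly adapted offspring decays toward the minimum rate instead of being removed, which matches the stated goal of avoiding bad operators while still keeping some exploration from every operator.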
IEEE Journal of Translational Engineering in Health and Medicine | 32309061 | PMC6850034 | 10.1109/JTEHM.2019.2948604 | A Practical Electronic Health Record-Based Dry Weight Supervision Model for Hemodialysis Patients | Objective: Dry Weight (DW) is a typical hemodialysis (HD) prescription for End-Stage Renal Disease (ESRD) patients. However, an accurate DW assessment is difficult due to the complication of body components and individual variations. Our objective is to model a clinically practicable DW estimator. Method: We proposed a time series-based regression method to evaluate the weight fluctuation of HD patients according to Electronic Health Record (EHR). A total of 34 patients with 5100 HD sessions data were selected and partitioned into three groups: HD-stabilized, HD-intolerant, and near-death. Each group’s most recent 150 HD sessions data were adopted to evaluate the proposed model. Results: Within a 0.5 kg absolute error margin, our model achieved 95.44%, 91.95%, and 83.12% post-dialysis weight prediction accuracies for the HD-stabilized, HD-intolerant, and near-death groups, respectively. Within a 1% relative error margin, the proposed method achieved 97.99%, 95.36%, and 66.38% accuracies. For HD-stabilized patients, the Mean Absolute Error (MAE) of the proposed method was 0.17 kg ± 0.04 kg. In the model comparison experiment, the performance test showed that the quality of the proposed model was superior to those of the state-of-the-art models. Conclusion: The outcome of this research indicates that the proposed model could potentially automate the clinical weight management for HD patients. Clinical Impact: This work can aid physicians to monitor and estimate DW. It can also be a health risk indicator for HD patients. | A. Related Work. Time series modeling aims to establish the connections between observed data and future events. It fits excellently with the demands of medical applications, where patient records are often correlated with time and observational markers. Many studies have focused on time series model development [12], [13]. We summarize the conventional methods into two categories: (1) statistics-based techniques, in which models are usually built on problem-dependent assumptions such as linearity, periodicity, data distributions, the order of the model, and more. Autoregressive Integrated Moving Average (ARIMA) is a representative model in stochastic time series analysis [13]. Depending on the application, it gives rise to various subclass models, such as Autoregressive (AR), Moving Average (MA), Autoregressive Moving Average (ARMA), and Seasonal ARIMA (SARIMA) [14]–[16]. Although statistics-based methods have limitations, they have proven capable of achieving remarkable performance on specific problems. (2) Machine learning-based methods, which are well suited to real-world problems characterized by non-linearity, minimal knowledge of prior distributions, and high-dimensional variables. The Artificial Neural Network (ANN) [17] is a typical scheme that can approximate complex systems with high precision. There are many variations of ANN-based methodologies, such as the Long Short-Term Memory (LSTM) network [18], the Time-Lagged Neural Network [19], and other implementations [20]. We also note that Random Forests (RF) [21], Support Vector Machines (SVM) [22], and Bayesian Networks (BN) [23] are widely applied in the literature because of their steady performance.
It is worth mentioning that machine learning schemes face difficulties in parameter optimization, over-fitting, and model interpretability. Although many techniques have been proposed to address these problems, some questions remain open. Studies exist on machine learning-assisted HD applications. For example, in HD anemia treatment, an ANN method [24] was used to determine hemoglobin levels in HD patients, and a reinforcement learning method [25] has been proposed to optimize the dose of erythropoiesis-stimulating agents. An RF model [26] has been applied to predict HD patients’ cardiovascular risk, and a decision tree method [27] has been adopted to detect early Arteriovenous Fistula (AVF) failure. With regard to HD quality control, a Temporal Abstractions (TA) method [28] has been proposed to monitor the quality of the HD process, and a Bayesian network [29] has been applied to recognize patient temporal-state transition patterns and detect exceptional events. One study [30] proposed a Bioimpedance analysis (BIA)-based multiple variable regression model to predict DW, with the accuracy controlled within 0.5 kg at a standard deviation of 2 kg. Another study [31] applied a Multi-Layer Perceptron (MLP) neural network to predict DW from patients’ BIA and blood volume monitoring data, reporting an error of about 0.5 kg with a standard deviation of 1.3 kg. Although studies [30], [31] have reported significant progress in DW modeling, their models remain dependent on crowd data, which may lead to data bias. Moreover, we propose that DW is a dynamic value and that a personalized training model is necessary to achieve better precision. The studies listed here illustrate the remarkable potential of applying machine learning methodology to HD. The rest of the paper is organized as follows: Section II explains the proposed methodology, Section III shows the experimental results and comparisons with other methods, and Section IV is a detailed discussion based on the observed results. The final part is the conclusion. | [
"29791905",
"23245604",
"25777668",
"10215341",
"27928711",
"29213184",
"28965299",
"25462637",
"25070755",
"25091172",
"15885564",
"17699481",
"8629727"
] | [
{
"pmid": "29791905",
"title": "Disparities in Chronic Kidney Disease Prevalence among Males and Females in 195 Countries: Analysis of the Global Burden of Disease 2016 Study.",
"abstract": "BACKGROUND\nChronic kidney disease (CKD) imposes a substantial burden on health care systems. There are some especially vulnerable groups with a high CKD burden, one of which is women. We performed an analysis of gender disparities in the prevalence of all CKD stages and renal replacement therapy (defined as impaired kidney function [IKF]) in 195 countries.\n\n\nMETHODS\nWe used estimates produced by the Global Burden of Disease (GBD) Study 2016 revision using a Bayesian-regression analytic tool, DisMoD-MR 2.1. Data on gross domestic product based on purchasing power parity per capita (GDP PPP) was obtained via the World Bank International Comparison Program database. To estimate gender disparities, we calculated the male:female all-age prevalence rate ratio for each IKF condition.\n\n\nRESULTS\nIn 2016, the global number of individuals with IKF reached 752.7 million, including 417.0 million females and 335.7 million males. The most prevalent form of IKF in both groups was albuminuria with preserved glomerular filtration rate. Geospatial analysis shows a very heterogeneous distribution of the male:female ratio for all IKF conditions, with the most prominent contrast found in kidney transplant patients. The median male:female ratio varies substantially according to GDP PPP quintiles; however, countries with different economic states could have similar male:female ratios. A strong correlation of GDP PPP with dialysis-to-transplant ratio was found.\n\n\nCONCLUSIONS\nThe GBD study highlights the prominent gender disparities in CKD prevalence among 195 countries. The nature of these disparities, however, is complex and must be interpreted cautiously taking into account all possible circumstances."
},
{
"pmid": "23245604",
"title": "Global and regional mortality from 235 causes of death for 20 age groups in 1990 and 2010: a systematic analysis for the Global Burden of Disease Study 2010.",
"abstract": "BACKGROUND\nReliable and timely information on the leading causes of death in populations, and how these are changing, is a crucial input into health policy debates. In the Global Burden of Diseases, Injuries, and Risk Factors Study 2010 (GBD 2010), we aimed to estimate annual deaths for the world and 21 regions between 1980 and 2010 for 235 causes, with uncertainty intervals (UIs), separately by age and sex.\n\n\nMETHODS\nWe attempted to identify all available data on causes of death for 187 countries from 1980 to 2010 from vital registration, verbal autopsy, mortality surveillance, censuses, surveys, hospitals, police records, and mortuaries. We assessed data quality for completeness, diagnostic accuracy, missing data, stochastic variations, and probable causes of death. We applied six different modelling strategies to estimate cause-specific mortality trends depending on the strength of the data. For 133 causes and three special aggregates we used the Cause of Death Ensemble model (CODEm) approach, which uses four families of statistical models testing a large set of different models using different permutations of covariates. Model ensembles were developed from these component models. We assessed model performance with rigorous out-of-sample testing of prediction error and the validity of 95% UIs. For 13 causes with low observed numbers of deaths, we developed negative binomial models with plausible covariates. For 27 causes for which death is rare, we modelled the higher level cause in the cause hierarchy of the GBD 2010 and then allocated deaths across component causes proportionately, estimated from all available data in the database. For selected causes (African trypanosomiasis, congenital syphilis, whooping cough, measles, typhoid and parathyroid, leishmaniasis, acute hepatitis E, and HIV/AIDS), we used natural history models based on information on incidence, prevalence, and case-fatality. We separately estimated cause fractions by aetiology for diarrhoea, lower respiratory infections, and meningitis, as well as disaggregations by subcause for chronic kidney disease, maternal disorders, cirrhosis, and liver cancer. For deaths due to collective violence and natural disasters, we used mortality shock regressions. For every cause, we estimated 95% UIs that captured both parameter estimation uncertainty and uncertainty due to model specification where CODEm was used. We constrained cause-specific fractions within every age-sex group to sum to total mortality based on draws from the uncertainty distributions.\n\n\nFINDINGS\nIn 2010, there were 52·8 million deaths globally. At the most aggregate level, communicable, maternal, neonatal, and nutritional causes were 24·9% of deaths worldwide in 2010, down from 15·9 million (34·1%) of 46·5 million in 1990. This decrease was largely due to decreases in mortality from diarrhoeal disease (from 2·5 to 1·4 million), lower respiratory infections (from 3·4 to 2·8 million), neonatal disorders (from 3·1 to 2·2 million), measles (from 0·63 to 0·13 million), and tetanus (from 0·27 to 0·06 million). Deaths from HIV/AIDS increased from 0·30 million in 1990 to 1·5 million in 2010, reaching a peak of 1·7 million in 2006. Malaria mortality also rose by an estimated 19·9% since 1990 to 1·17 million deaths in 2010. Tuberculosis killed 1·2 million people in 2010. Deaths from non-communicable diseases rose by just under 8 million between 1990 and 2010, accounting for two of every three deaths (34·5 million) worldwide by 2010. 
8 million people died from cancer in 2010, 38% more than two decades ago; of these, 1·5 million (19%) were from trachea, bronchus, and lung cancer. Ischaemic heart disease and stroke collectively killed 12·9 million people in 2010, or one in four deaths worldwide, compared with one in five in 1990; 1·3 million deaths were due to diabetes, twice as many as in 1990. The fraction of global deaths due to injuries (5·1 million deaths) was marginally higher in 2010 (9·6%) compared with two decades earlier (8·8%). This was driven by a 46% rise in deaths worldwide due to road traffic accidents (1·3 million in 2010) and a rise in deaths from falls. Ischaemic heart disease, stroke, chronic obstructive pulmonary disease (COPD), lower respiratory infections, lung cancer, and HIV/AIDS were the leading causes of death in 2010. Ischaemic heart disease, lower respiratory infections, stroke, diarrhoeal disease, malaria, and HIV/AIDS were the leading causes of years of life lost due to premature mortality (YLLs) in 2010, similar to what was estimated for 1990, except for HIV/AIDS and preterm birth complications. YLLs from lower respiratory infections and diarrhoea decreased by 45-54% since 1990; ischaemic heart disease and stroke YLLs increased by 17-28%. Regional variations in leading causes of death were substantial. Communicable, maternal, neonatal, and nutritional causes still accounted for 76% of premature mortality in sub-Saharan Africa in 2010. Age standardised death rates from some key disorders rose (HIV/AIDS, Alzheimer's disease, diabetes mellitus, and chronic kidney disease in particular), but for most diseases, death rates fell in the past two decades; including major vascular diseases, COPD, most forms of cancer, liver cirrhosis, and maternal disorders. For other conditions, notably malaria, prostate cancer, and injuries, little change was noted.\n\n\nINTERPRETATION\nPopulation growth, increased average age of the world's population, and largely decreasing age-specific, sex-specific, and cause-specific death rates combine to drive a broad shift from communicable, maternal, neonatal, and nutritional causes towards non-communicable diseases. Nevertheless, communicable, maternal, neonatal, and nutritional causes remain the dominant causes of YLLs in sub-Saharan Africa. Overlaid on this general pattern of the epidemiological transition, marked regional variation exists in many causes, such as interpersonal violence, suicide, liver cancer, diabetes, cirrhosis, Chagas disease, African trypanosomiasis, melanoma, and others. Regional heterogeneity highlights the importance of sound epidemiological assessments of the causes of death on a regular basis.\n\n\nFUNDING\nBill & Melinda Gates Foundation."
},
{
"pmid": "10215341",
"title": "Assessment of dry weight in hemodialysis: an overview.",
"abstract": "Fluid balance is an integral component of hemodialysis treatments to prevent under- or overhydration, both of which have been demonstrated to have significant effects on intradialytic morbidity and long-term cardiovascular complications. Fluid removal is usually achieved by ultrafiltration to achieve a clinically derived value for \"dry weight.\" Unfortunately, there is no standard measure of dry weight and as a consequence it is difficult to ascertain adequacy of fluid removal for an individual patient. Additionally, there is a lack of information on the effect of ultrafiltration on fluid shifts in the extracellular and intracellular fluid spaces. It is evident that a better understanding of both interdialytic fluid status and fluid changes during hemodialysis is required to develop a precise measure of fluid balance. This article describes the current status of dry weight estimation and reviews emerging techniques for evaluation of fluid shifts. Additionally, it explores the need for a marker of adequacy for fluid removal."
},
{
"pmid": "27928711",
"title": "Dry weight assessment by combined ultrasound and bioimpedance monitoring in low cardiovascular risk hemodialysis patients: a randomized controlled trial.",
"abstract": "PURPOSE\nFluid overload is associated with adverse outcomes in hemodialysis (HD) patients. The precise assessment of hydration status in HD patients remains a major challenge for nephrologists. Our study aimed to explore whether combining two bedside methods, lung ultrasonography (LUS) and bioimpedance, may provide complementary information to guide treatment in specific HD patients.\n\n\nMETHODS\nIn total, 250 HD patients from two dialysis units were included in this randomized clinical trial. Patients were randomized 1:1 to have a dry weight assessment based on clinical (control) or LUS with bioimpedance in case of clinical hypovolemia (active)-guided protocol. The primary outcome was to assess the difference between the two groups on a composite of all-cause mortality and first cardiovascular event (CVE)-including death, stroke, and myocardial infarction.\n\n\nRESULTS\nDuring a mean follow-up period was 21.3 ± 5.6 months, there were 54 (21.6%) composite events in the entire population. There was a nonsignificant 9% increase in the risk of this outcome in the active arm (HR = 1.09, 95% CI 0.64-1.86, p = 0.75). Similarly, there were no differences between the two groups when analyzing separately the all-cause mortality and CVE outcomes. However, patients in the active arm had a 19% lower relative risk of pre-dialytic dyspnea (rate ratio-0.81, 95% CI 0.68-0.96), but a 26% higher relative risk of intradialytic cramps (rate ratio-1.26, 95% CI 1.16-1.37).\n\n\nCONCLUSIONS\nThis study shows that a LUS-bioimpedance-guided dry weight adjustment protocol, as compared to clinical evaluation, does not reduce all-cause mortality and/or CVE in HD patients. A fluid management protocol based on bioimpedance with LUS on indication might be a better strategy."
},
{
"pmid": "29213184",
"title": "A comparison of methods for the non-destructive fresh weight determination of filamentous algae for growth rate analysis and dry weight estimation.",
"abstract": "The determination of rates of macroalgal growth and productivity via temporal fresh weight (FW) measurements is attractive, as it does not necessitate the sacrifice of biomass. However, there is no standardised method for FW analysis; this may lead to potential discrepancies when determining growth rates or productivity and make literature comparison problematic. This study systematically assessed a variety of lab-scale methods for macroalgal FW measurement for growth rate determination. Method efficacy was assessed over a 14-day period as impact upon algal physiology, growth rate on basis of FW and dry weight (DW), nitrate removal, and maintenance of structural integrity. The choice of method is critical to both accuracy and inter-study comparability of the data generated. In this study, it was observed that the choice of protocol had an impact upon the DW yield (P values = 0.036-0.51). For instance, those involving regular mechanical pressing resulted in a >25% reduction in the final DW in two of the three species studied when compared to algae not subjected to any treatment. This study proposes a standardised FW determination method employing a reticulated spinner that is rapid, reliable, and non-destructive and provides an accurate growth estimation."
},
{
"pmid": "28965299",
"title": "Value of bioimpedance analysis estimated \"dry weight\" in maintenance dialysis patients: a systematic review and meta-analysis.",
"abstract": "BACKGROUND\nVolume overload is a common complication in patients with end-stage kidney disease who undergo maintenance dialysis therapy and associated with hypertension, left ventricular hypertrophy and mortality in this population. Although bioimpedance analysis (BIA), an objective method to assess overhydration, is associated with poor outcomes in observational studies, in randomized controlled trials (RCTs) the results were conflicting. We have examined the role of BIA for assessing the \"dry weight\" and fluid status in order to improve fluid overload in comparison with a control or clinical-based prescription in patients with ESKD receiving haemodialysis or peritoneal dialysis.\n\n\nMETHODS\nAll RCTs and quasi-RCTs in which BIA was used to improve fluid overload and assess the effect on all-cause mortality, cardiovascular morbidity, systolic blood pressure and volume control and arterial stiffness were included.\n\n\nRESULTS\nSeven RCTs with 1312 patients could be included in this review. In low-to-medium quality of the evidence, the use of BIA did not reduce all-cause mortality (relative risk 0.87, 95% CI 0.54-1.39) and had small to no effect on body change, but it improved systolic blood pressure control (mean difference (MD) -2.73 mmHg, 95% CI -5.00 to -0.46 mmHg) and reduce overhydration, as measured by BIA, with 0.43 L [(MD), 95% CI 0.71-0.15 L].\n\n\nCONCLUSION\nIn ESKD patients, BIA-based interventions for correction of overhydration have little to no effect on all-cause mortality, whereas BIA improved systolic blood pressure control. Our results should be interpreted with caution as the size and power of the included studies are low. Further studies, larger or with a longer follow-up period, should be performed to better describe the effect of BIA-based strategies on survival."
},
{
"pmid": "25462637",
"title": "Deep learning in neural networks: an overview.",
"abstract": "In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks."
},
{
"pmid": "25070755",
"title": "Prediction of the hemoglobin level in hemodialysis patients using machine learning techniques.",
"abstract": "Patients who suffer from chronic renal failure (CRF) tend to suffer from an associated anemia as well. Therefore, it is essential to know the hemoglobin (Hb) levels in these patients. The aim of this paper is to predict the hemoglobin (Hb) value using a database of European hemodialysis patients provided by Fresenius Medical Care (FMC) for improving the treatment of this kind of patients. For the prediction of Hb, both analytical measurements and medication dosage of patients suffering from chronic renal failure (CRF) are used. Two kinds of models were trained, global and local models. In the case of local models, clustering techniques based on hierarchical approaches and the adaptive resonance theory (ART) were used as a first step, and then, a different predictor was used for each obtained cluster. Different global models have been applied to the dataset such as Linear Models, Artificial Neural Networks (ANNs), Support Vector Machines (SVM) and Regression Trees among others. Also a relevance analysis has been carried out for each predictor model, thus finding those features that are most relevant for the given prediction."
},
{
"pmid": "25091172",
"title": "Optimization of anemia treatment in hemodialysis patients via reinforcement learning.",
"abstract": "OBJECTIVE\nAnemia is a frequent comorbidity in hemodialysis patients that can be successfully treated by administering erythropoiesis-stimulating agents (ESAs). ESAs dosing is currently based on clinical protocols that often do not account for the high inter- and intra-individual variability in the patient's response. As a result, the hemoglobin level of some patients oscillates around the target range, which is associated with multiple risks and side-effects. This work proposes a methodology based on reinforcement learning (RL) to optimize ESA therapy.\n\n\nMETHODS\nRL is a data-driven approach for solving sequential decision-making problems that are formulated as Markov decision processes (MDPs). Computing optimal drug administration strategies for chronic diseases is a sequential decision-making problem in which the goal is to find the best sequence of drug doses. MDPs are particularly suitable for modeling these problems due to their ability to capture the uncertainty associated with the outcome of the treatment and the stochastic nature of the underlying process. The RL algorithm employed in the proposed methodology is fitted Q iteration, which stands out for its ability to make an efficient use of data.\n\n\nRESULTS\nThe experiments reported here are based on a computational model that describes the effect of ESAs on the hemoglobin level. The performance of the proposed method is evaluated and compared with the well-known Q-learning algorithm and with a standard protocol. Simulation results show that the performance of Q-learning is substantially lower than FQI and the protocol. When comparing FQI and the protocol, FQI achieves an increment of 27.6% in the proportion of patients that are within the targeted range of hemoglobin during the period of treatment. In addition, the quantity of drug needed is reduced by 5.13%, which indicates a more efficient use of ESAs.\n\n\nCONCLUSION\nAlthough prospective validation is required, promising results demonstrate the potential of RL to become an alternative to current protocols."
},
{
"pmid": "15885564",
"title": "Temporal data mining for the quality assessment of hemodialysis services.",
"abstract": "OBJECTIVE\nThis paper describes the temporal data mining aspects of a research project that deals with the definition of methods and tools for the assessment of the clinical performance of hemodialysis (HD) services, on the basis of the time series automatically collected during hemodialysis sessions.\n\n\nMETHODS\nIntelligent data analysis and temporal data mining techniques are applied to gain insight and to discover knowledge on the causes of unsatisfactory clinical results. In particular, two new methods for association rule discovery and temporal rule discovery are applied to the time series. Such methods exploit several pre-processing techniques, comprising data reduction, multi-scale filtering and temporal abstractions.\n\n\nRESULTS\nWe have analyzed the data of more than 5800 dialysis sessions coming from 43 different patients monitored for 19 months. The qualitative rules associating the outcome parameters and the measured variables were examined by the domain experts, which were able to distinguish between rules confirming available background knowledge and unexpected but plausible rules.\n\n\nCONCLUSION\nThe new methods proposed in the paper are suitable tools for knowledge discovery in clinical time series. Their use in the context of an auditing system for dialysis management helped clinicians to improve their understanding of the patients' behavior."
},
{
"pmid": "17699481",
"title": "Development and validation of bioimpedance analysis prediction equations for dry weight in hemodialysis patients.",
"abstract": "BACKGROUND\nAccurate assessment of hydration status and specification of dry weight (DW) are major problems in the clinical treatment of hemodialysis (HD) patients. Bioelectrical impedance analysis (BIA) has been recognized as a noninvasive and simple technique for the determination of DW in HD patients.\n\n\nDESIGN, SETTING, PARTICIPANTS, AND MEASUREMENTS\nThis study was designed to develop and validate BIA prediction equations for DW in HD patients. It included white adults (1540 disease-free adults with normal body mass index [BMI] and 456 prevalent and 27 incident HD patients). All participants underwent at least one single-frequency BIA measurement (800 muA and 50 kHz alternating sinusoidal current with a standard tetrapolar technique). The BIA variable measured was resistance (R). Data of 1463 (95% of the cohort) disease-free individuals with normal BMI (prediction sample) were used to establish best-fitting BIA prediction equations of body weight. The latter were cross-validated in the residual 5% subset (77 individuals) of the same cohort (validation sample).\n\n\nRESULTS\nMultiple regression analysis showed a significant relationship among body weight, R, age, and height in 739 men (R(2) = 0.82, P < 0.0001) and among body weight, R, and height in 724 women (R(2) = 0.68, P < 0.0001) in the prediction sample. The Bland Altman analysis showed a mean difference between predicted and measured body weight of 0.3 +/- 1.0 kg (95% confidence interval +/- 2.0 kg) in the validation sample. The BIA prediction equations that were obtained in disease-free individuals with normal BMI were applied to a cohort of 456 prevalent HD patients: The mean difference between achieved and estimated DW was 0.1 +/- 1.0 kg (P = 0.53) in men and -0.3 +/- 1.0 (P = 0.76) in women. Finally, BIA prediction equations were tested in a cohort of 27 incident HD patients. The mean difference between predicted and achieved DW was -0.6 +/- 1.0 kg (P = 0.76) in men and 0.6 +/- 1.0 (P = 0.50) in women.\n\n\nCONCLUSIONS\nThis study was able to develop and validate BIA prediction equations for DW in HD patients. They seem to be a promising tool; however, they still need external validation."
},
{
"pmid": "8629727",
"title": "Adjusting for multiple testing when reporting research results: the Bonferroni vs Holm methods.",
"abstract": "Public health researchers are sometimes required to make adjustments for multiple testing in reporting their results, which reduces the apparent significance of effects and thus reduces statistical power. The Bonferroni procedure is the most widely recommended way of doing this, but another procedure, that of Holm, is uniformly better. Researchers may have neglected Holm's procedure because it has been framed in terms of hypothesis test rejection rather than in terms of P values. An adjustment to P values based on Holm's method is presented in order to promote the method's use in public health research."
}
] |
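To make the DW regression setup discussed in the related-work text above more concrete, the following is a minimal, purely illustrative Python sketch (assuming scikit-learn and NumPy are available). The feature set, the synthetic data, and the MLP configuration are assumptions chosen for illustration only; they are not the models or data used in the cited studies [30], [31].

```python
# Illustrative sketch: predicting dry weight (DW) from BIA-style features.
# Feature names and synthetic data below are assumptions, not study data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 500
# Synthetic per-session features: BIA resistance (ohm), reactance (ohm),
# pre-dialysis weight (kg), relative blood volume change (%).
X = np.column_stack([
    rng.normal(500, 60, n),   # resistance
    rng.normal(50, 8, n),     # reactance
    rng.normal(72, 12, n),    # pre-dialysis weight
    rng.normal(-8, 3, n),     # relative blood volume change
])
# Synthetic target: dry weight loosely tied to pre-dialysis weight plus noise.
y = 0.95 * X[:, 2] - 0.02 * X[:, 3] + rng.normal(0, 0.7, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
model.fit(X_tr, y_tr)

# Report mean error and standard deviation on held-out sessions,
# mirroring the kind of summary statistics quoted in the related work.
err = model.predict(X_te) - y_te
print(f"mean error: {err.mean():.2f} kg, SD: {err.std():.2f} kg")
```

In practice, the inputs and validation would of course come from real dialysis sessions rather than synthetic draws, and a personalized model would be refit per patient as new sessions accumulate.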
Journal of Cheminformatics | 33430958 | PMC6852942 | 10.1186/s13321-019-0392-1 | Multi-task learning with a natural metric for quantitative structure activity relationship learning | The goal of quantitative structure activity relationship (QSAR) learning is to learn a function that, given the structure of a small molecule (a potential drug), outputs the predicted activity of the compound. We employed multi-task learning (MTL) to exploit commonalities in drug targets and assays. We used datasets containing curated records about the activity of specific compounds on drug targets provided by ChEMBL. Totally, 1091 assays have been analysed. As a baseline, a single task learning approach that trains random forest to predict drug activity for each drug target individually was considered. We then carried out feature-based and instance-based MTL to predict drug activities. We introduced a natural metric of evolutionary distance between drug targets as a measure of tasks relatedness. Instance-based MTL significantly outperformed both, feature-based MTL and the base learner, on 741 drug targets out of 1091. Feature-based MTL won on 179 occasions and the base learner performed best on 171 drug targets. We conclude that MTL QSAR is improved by incorporating the evolutionary distance between targets. These results indicate that QSAR learning can be performed effectively, even if little data is available for specific drug targets, by leveraging what is known about similar drug targets. | Related work
Multi-task learning
MTL has been used in many areas. For example, Chen et al. employed MTL to learn a common feature space from multiple related tasks and applied it to web page categorization [11]. Bickel et al. applied MTL to HIV therapy screening data, focusing on assigning weights to instances from multiple tasks so that tasks can be learned jointly even if the data for different tasks have arbitrarily different distributions [12]. Bickel et al. also introduced a new MTL method for weighting groups in tree-guided group-lasso regression and applied it to the analysis of genotype and gene expression data [13].
Zhang et al. reported a multi-modal multi-task (M3T) method for simultaneously predicting multiple outcomes from multi-modal data [2]. The method is based on selecting common relevant features, applying kernel-based data fusion, and then applying multi-outcome support vector regression. Experiments were performed to jointly predict clinical scores in Alzheimer’s disease.
Deep learning
Deep learning has gained significant attention in recent years, and there are attempts to employ it for MTL. For example, deep relationship networks (DRN) were proposed to estimate the relationships between tasks in the area of computer vision [14]. In natural language processing (NLP), MTL was used with deep learning to identify better task hierarchies and improve performance [15].
Task relatedness
A number of approaches have been reported in the literature for the specification of task similarity, an important element of MTL. One common approach is to build models on the individual tasks and then learn a common prior over the trained model parameters. For instance, this prior can be inferred using Dirichlet processes [16], matrix-variate normal distributions [17], or a maximum likelihood procedure [18]. Clustered multi-task learning (CMTL) performs clustering of tasks into groups prior to applying MTL.
This clustering can be done both on the task level [3, 19, 20] and on the level of shared feature representations among tasks [21–24].
Discovering highly important marker genes was the main focus of the work reported in [25], where the aim was to identify a shared gene subspace across different gene expression datasets using MTL. Zhou et al. modeled disease progression by considering predictions at different time points as different tasks and transformed the problem into MTL [26]. The relatedness between tasks was obtained by using a temporal group Lasso regularizer.
Taxonomy-based MTL was used to conduct biological sequence classification for the purpose of predicting the splice sites in various drug targets [27]. In this approach, the relatedness of tasks was defined by a phylogenetic tree-based structure, and learning was performed at different levels of the tree. Furthermore, taxonomy- and graph-based transfer learning and MTL were used to predict the binding of the major histocompatibility complex (MHC)-I [28]. Although task relatedness can be derived from the hierarchy, the authors report an interesting approach to quantify this relatedness using multi-kernel SVMs. Also, a two-step MTL approach was employed for the prediction of small interfering RNA (siRNA) efficacy [29]. In the first step, shared-task representations are learned, and in the second step, these representations are fed into a regressor to model each task.
A methodology that employs a sequence-based distance is described in [30]. In this approach, an attempt was made to predict the similarity in binding profile between any pair of kinases from the human kinome. A binding profile was built for each kinase and used to compute pairwise similarity between kinases. This similarity was compared with the sequence-based distance in order to check whether there is any correlation between the two. The difference between our approach and this approach is that we use the pairwise sequence-based similarity between drug targets as input features to the classifier. Also, unlike our work, this method does not allow predicting the activity of individual molecules on drug targets.
Multi-task learning for QSAR learning
MTL employing neural networks is reported in [31]. Multi-target predictions were made for a total of 19 assays at the same time. Although training is conducted by combining data from multiple assays, this method does not take advantage of the task relatedness. The QSAR problem is considered as a classification problem (i.e. whether a compound is active or inactive in a certain assay). This is different from our approach, where we treat QSAR as a regression problem, and we work with a considerably larger number of assays (1091 assays).
Work applying MTL in QSAR learning includes applications in sequence biology [28] using a graph-based regularization method [3, 32] based on SVM [33]. Experiments were performed on data from the human kinome, and the relatedness between tasks was extracted from the taxonomy of kinase targets. A distance matrix was derived from the taxonomy by considering the distance between two taxa as the weight of the shortest path between them in the taxonomy [34]. This matrix was then transformed into a similarity matrix, and the values were used to perform MTL. This measure of similarity is different from the homology used in our work, and it is less biologically meaningful. Ning et al.
used an SVM-based MTL approach to learn a classification model for a drug target together with other related drug targets, where compound- and target-specific kernel functions were used to capture intrinsic commonalities [35].
One of the key QSAR studies that employed MTL as well as transfer learning was reported in [36]. In addition to MTL, the approach uses feature nets (FN) to construct neural network and partial least squares (PLS) models for the modeling of 11 types of tissue-air partition coefficients. A total of 56 and 50 models for H/tissue and R/tissue, respectively, were obtained in the experiments, which demonstrated the usefulness of MTL and transfer learning in general. The reported approaches showed that these techniques are especially useful when data is scarce. Our approach is different in multiple ways. We performed experiments on a much larger scale. Also, the authors did not evaluate traditional machine learning methods to select the best performing ones for STL. In particular, random forest (RF) was not considered [36]. This could be due to the descriptors used: we worked with fingerprints, whereas they worked with some physicochemical properties as well as ISIDA descriptors [37]. In addition, our results are more statistically significant.
A recent approach, which reports significant improvements over traditional baseline machine learning approaches, applied massively multi-task neural networks for drug discovery [38]. In this work, an attempt was made to use deep learning to provide a framework for sharing information across a large number of datasets. The end goal was to classify compounds as either active or inactive.
Another approach that employs deep neural networks (DNN) is the work presented in [39], which tried not only to demonstrate that multi-task DNNs work in QSAR but also to explain why this is the case. The authors report that some form of signal transfer takes place between structurally similar molecules during the training process, and this can lead to better performance when molecule activities are correlated. A recent review of applications and challenges of MTL and transfer learning in QSAR can be found in [40].
Advantages of the proposed approach
The proposed approach has the following advantages compared with previous MTL work:
• The QSAR learning problem is considered as a regression problem. This is more natural, as finding the best threshold value to determine whether a specific compound is active or inactive is problematic and often results in loss of information.
• We employ RF as the base learner. We showed in a previous study that RF outperforms other learners on QSAR data in the majority of scenarios [41].
• We employ the functional-class fingerprints (FCFP) method to represent molecular structures. We have empirically found them to generally be the most successful QSAR prediction representation. We have done this by performing tests and comparisons using thousands of datasets and several learners [41].
• One of the contributions of our work is the use of the drug target similarities in an MTL setting. The majority of existing MTL approaches focus on learning the task similarities, whereas in our case, we exploit the sequence-based similarities and incorporate them in our experiments. There are often commonalities in QSAR assays, as the target proteins may be evolutionarily related. We took advantage of this and used protein sequence similarity values as our task similarities.
This enables the inference of a natural metric of evolutionary distance between the drug targets.
In this paper we introduce an intuitive, simple, and effective method of learning QSARs jointly. We test whether our MTL method can improve on standard QSAR learning through the use of related targets, and evaluate whether QSAR MTL can be improved by incorporating the evolutionary distance between targets. Our method is based on the classification of drug targets into families and the use of sequence similarity values between those drug targets [42] (a simplified sketch of this idea is given after the reference entries below). | [
"21992749",
"30034911",
"24351051",
"22145530",
"19639957",
"23842210",
"12471243",
"19842624",
"19125628",
"27464350",
"28872869",
"29467659",
"17016423",
"5420325",
"7265238",
"24524735",
"26099013",
"17880194"
] | [
{
"pmid": "21992749",
"title": "Multi-modal multi-task learning for joint prediction of multiple regression and classification variables in Alzheimer's disease.",
"abstract": "Many machine learning and pattern classification methods have been applied to the diagnosis of Alzheimer's disease (AD) and its prodromal stage, i.e., mild cognitive impairment (MCI). Recently, rather than predicting categorical variables as in classification, several pattern regression methods have also been used to estimate continuous clinical variables from brain images. However, most existing regression methods focus on estimating multiple clinical variables separately and thus cannot utilize the intrinsic useful correlation information among different clinical variables. On the other hand, in those regression methods, only a single modality of data (usually only the structural MRI) is often used, without considering the complementary information that can be provided by different modalities. In this paper, we propose a general methodology, namely multi-modal multi-task (M3T) learning, to jointly predict multiple variables from multi-modal data. Here, the variables include not only the clinical variables used for regression but also the categorical variable used for classification, with different tasks corresponding to prediction of different variables. Specifically, our method contains two key components, i.e., (1) a multi-task feature selection which selects the common subset of relevant features for multiple variables from each modality, and (2) a multi-modal support vector machine which fuses the above-selected features from all modalities to predict multiple (regression and classification) variables. To validate our method, we perform two sets of experiments on ADNI baseline MRI, FDG-PET, and cerebrospinal fluid (CSF) data from 45 AD patients, 91 MCI patients, and 50 healthy controls (HC). In the first set of experiments, we estimate two clinical variables such as Mini Mental State Examination (MMSE) and Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-Cog), as well as one categorical variable (with value of 'AD', 'MCI' or 'HC'), from the baseline MRI, FDG-PET, and CSF data. In the second set of experiments, we predict the 2-year changes of MMSE and ADAS-Cog scores and also the conversion of MCI to AD from the baseline MRI, FDG-PET, and CSF data. The results on both sets of experiments demonstrate that our proposed M3T learning scheme can achieve better performance on both regression and classification tasks than the conventional learning methods."
},
{
"pmid": "24351051",
"title": "QSAR modeling: where have you been? Where are you going to?",
"abstract": "Quantitative structure-activity relationship modeling is one of the major computational tools employed in medicinal chemistry. However, throughout its entire history it has drawn both praise and criticism concerning its reliability, limitations, successes, and failures. In this paper, we discuss (i) the development and evolution of QSAR; (ii) the current trends, unsolved problems, and pressing challenges; and (iii) several novel and emerging applications of QSAR modeling. Throughout this discussion, we provide guidelines for QSAR development, validation, and application, which are summarized in best practices for building rigorously validated and externally predictive QSAR models. We hope that this Perspective will help communications between computational and experimental chemists toward collaborative development and use of QSAR models. We also believe that the guidelines presented here will help journal editors and reviewers apply more stringent scientific standards to manuscripts reporting new QSAR studies, as well as encourage the use of high quality, validated QSARs for regulatory decision making."
},
{
"pmid": "22145530",
"title": "Multi-platform gene-expression mining and marker gene analysis.",
"abstract": "Gene-expression data are now widely available and used for a wide range of clinical and diagnostic purposes. A key challenge is to select a few significant marker genes for biological studies. While it is feasible to find important genes from a single gene-expression data set, it is often more meaningful to compare the results from different but related data sets together, especially for multiple gene-expression data sets arising from different studies of a common organism or phenotype. In this paper, we present a novel framework to exploit the commonalities across different data sets by jointly learning from different data sets simultaneously through multi-task feature learning. By identifying a common subspace of genes, we can help biologists find important marker genes that span different evolutionary periods in the life cycle of cancer development. The genes thus found are more stable and more significant. Our experimental results demonstrate that more accurate models can be built using multiple data sets based on fewer labelled examples. To the best of our knowledge, we are among the first to introduce multi-task learning in the bioinformatics community to solve the lack of data problem."
},
{
"pmid": "19639957",
"title": "QSAR models for predicting the similarity in binding profiles for pairs of protein kinases and the variation of models between experimental data sets.",
"abstract": "We propose a direct QSAR methodology to predict how similar the inhibitor-binding profiles of two protein kinases are likely to be, based on the properties of the residues surrounding the ATP-binding site. We produce a random forest model for each of five data sets (one in-house, four from the literature) where multiple compounds are tested on many kinases. Each model is self-consistent by cross-validation, and all models point to only a few residues in the active site controlling the binding profiles. While all models include the \"gatekeeper\" as one of the important residues, consistent with previous literature, some models suggest other residues as being more important. We apply each model to predict the similarity in binding profile to all pairs in a set of 411 kinases from the human genome and get very different predictions from each model. This turns out not to be an issue with model-building but with the fact that the experimental data sets disagree about which kinases are similar to which others. It is possible to build a model combining all the data from the five data sets that is reasonably self-consistent but not surprisingly, given the disagreement between data sets, less self-consistent than the individual models."
},
{
"pmid": "23842210",
"title": "Inferring multi-target QSAR models with taxonomy-based multi-task learning.",
"abstract": "BACKGROUND\nA plethora of studies indicate that the development of multi-target drugs is beneficial for complex diseases like cancer. Accurate QSAR models for each of the desired targets assist the optimization of a lead candidate by the prediction of affinity profiles. Often, the targets of a multi-target drug are sufficiently similar such that, in principle, knowledge can be transferred between the QSAR models to improve the model accuracy. In this study, we present two different multi-task algorithms from the field of transfer learning that can exploit the similarity between several targets to transfer knowledge between the target specific QSAR models.\n\n\nRESULTS\nWe evaluated the two methods on simulated data and a data set of 112 human kinases assembled from the public database ChEMBL. The relatedness between the kinase targets was derived from the taxonomy of the humane kinome. The experiments show that multi-task learning increases the performance compared to training separate models on both types of data given a sufficient similarity between the tasks. On the kinase data, the best multi-task approach improved the mean squared error of the QSAR models of 58 kinase targets.\n\n\nCONCLUSIONS\nMulti-task learning is a valuable approach for inferring multi-target QSAR models for lead optimization. The application of multi-task learning is most beneficial if knowledge can be transferred from a similar task with a lot of in-domain knowledge to a task with little in-domain knowledge. Furthermore, the benefit increases with a decreasing overlap between the chemical space spanned by the tasks."
},
{
"pmid": "12471243",
"title": "The protein kinase complement of the human genome.",
"abstract": "We have catalogued the protein kinase complement of the human genome (the \"kinome\") using public and proprietary genomic, complementary DNA, and expressed sequence tag (EST) sequences. This provides a starting point for comprehensive analysis of protein phosphorylation in normal and disease states, as well as a detailed view of the current state of human genome analysis through a focus on one large gene family. We identify 518 putative protein kinase genes, of which 71 have not previously been reported or described as kinases, and we extend or correct the protein sequences of 56 more kinases. New genes include members of well-studied families as well as previously unidentified families, some of which are conserved in model organisms. Classification and comparison with model organism kinomes identified orthologous groups and highlighted expansions specific to human and other lineages. We also identified 106 protein kinase pseudogenes. Chromosomal mapping revealed several small clusters of kinase genes and revealed that 244 kinases map to disease loci or cancer amplicons."
},
{
"pmid": "19842624",
"title": "Multi-assay-based structure-activity relationship models: improving structure-activity relationship models by incorporating activity information from related targets.",
"abstract": "Structure-activity relationship (SAR) models are used to inform and to guide the iterative optimization of chemical leads, and they play a fundamental role in modern drug discovery. In this paper, we present a new class of methods for building SAR models, referred to as multi-assay based, that utilize activity information from different targets. These methods first identify a set of targets that are related to the target under consideration, and then they employ various machine learning techniques that utilize activity information from these targets in order to build the desired SAR model. We developed different methods for identifying the set of related targets, which take into account the primary sequence of the targets or the structure of their ligands, and we also developed different machine learning techniques that were derived by using principles of semi-supervised learning, multi-task learning, and classifier ensembles. The comprehensive evaluation of these methods shows that they lead to considerable improvements over the standard SAR models that are based only on the ligands of the target under consideration. On a set of 117 protein targets, obtained from PubChem, these multi-assay-based methods achieve a receiver-operating characteristic score that is, on the average, 7.0 -7.2% higher than that achieved by the standard SAR models. Moreover, on a set of targets belonging to six protein families, the multi-assay-based methods outperform chemogenomics-based approaches by 4.33%."
},
{
"pmid": "19125628",
"title": "Inductive transfer of knowledge: application of multi-task learning and feature net approaches to model tissue-air partition coefficients.",
"abstract": "Two inductive knowledge transfer approaches - multitask learning (MTL) and Feature Net (FN) - have been used to build predictive neural networks (ASNN) and PLS models for 11 types of tissue-air partition coefficients (TAPC). Unlike conventional single-task learning (STL) modeling focused only on a single target property without any relations to other properties, in the framework of inductive transfer approach, the individual models are viewed as nodes in the network of interrelated models built in parallel (MTL) or sequentially (FN). It has been demonstrated that MTL and FN techniques are extremely useful in structure-property modeling on small and structurally diverse data sets, when conventional STL modeling is unable to produce any predictive model. The predictive STL individual models were obtained for 4 out of 11 TAPC, whereas application of inductive knowledge transfer techniques resulted in models for 9 TAPC. Differences in prediction performances of the models as a function of the machine-learning method, and of the number of properties simultaneously involved in the learning, has been discussed."
},
{
"pmid": "27464350",
"title": "ISIDA Property-Labelled Fragment Descriptors.",
"abstract": "ISIDA Property-Labelled Fragment Descriptors (IPLF) were introduced as a general framework to numerically encode molecular structures in chemoinformatics, as counts of specific subgraphs in which atom vertices are coloured with respect to some local property/feature. Combining various colouring strategies of the molecular graph - notably pH-dependent pharmacophore and electrostatic potential-based flagging - with several fragmentation schemes, the different subtypes of IPLFs may range from classical atom pair and sequence counts, to monitoring population levels of branched fragments or feature multiplets. The pH-dependent feature flagging, pursued at the level of each significantly populated microspecies involved in the proteolytic equilibrium, may furthermore add some competitive advantage over classical descriptors, even when the chosen fragmentation scheme is one of the state-of-the-art pattern extraction procedures (feature sequence or pair counts, etc.) in chemoinformatics. The implemented fragmentation schemes support counting (1) linear feature sequences, (2) feature pairs, (3) circular feature fragments a.k.a. \"augmented atoms\" or (4) feature trees. Fuzzy rendering - optionally allowing nonterminal fragment atoms to be counted as wildcards, ignoring their specific colours/features - ensures for a seamless transition between the \"strict\" counts (sequences or circular fragments) and the \"fuzzy\" multiplet counts (pairs or trees). Also, bond information may be represented or ignored, thus leaving the user a vast choice in terms of the level of resolution at which chemical information should be extracted into the descriptors. Selected IPLF subsets were - tree descriptors, in particular - successfully tested in both neighbourhood behaviour and QSAR modelling challenges, with very promising results. They showed excellent results in similarity-based virtual screening for analogue protease inhibitors, and generated highly predictive octanol-water partition coefficient and hERG channel inhibition models."
},
{
"pmid": "28872869",
"title": "Demystifying Multitask Deep Neural Networks for Quantitative Structure-Activity Relationships.",
"abstract": "Deep neural networks (DNNs) are complex computational models that have found great success in many artificial intelligence applications, such as computer vision1,2 and natural language processing.3,4 In the past four years, DNNs have also generated promising results for quantitative structure-activity relationship (QSAR) tasks.5,6 Previous work showed that DNNs can routinely make better predictions than traditional methods, such as random forests, on a diverse collection of QSAR data sets. It was also found that multitask DNN models-those trained on and predicting multiple QSAR properties simultaneously-outperform DNNs trained separately on the individual data sets in many, but not all, tasks. To date there has been no satisfactory explanation of why the QSAR of one task embedded in a multitask DNN can borrow information from other unrelated QSAR tasks. Thus, using multitask DNNs in a way that consistently provides a predictive advantage becomes a challenge. In this work, we explored why multitask DNNs make a difference in predictive performance. Our results show that during prediction a multitask DNN does borrow \"signal\" from molecules with similar structures in the training sets of the other tasks. However, whether this borrowing leads to better or worse predictive performance depends on whether the activities are correlated. On the basis of this, we have developed a strategy to use multitask DNNs that incorporate prior domain knowledge to select training sets with correlated activities, and we demonstrate its effectiveness on several examples."
},
{
"pmid": "29467659",
"title": "Transfer and Multi-task Learning in QSAR Modeling: Advances and Challenges.",
"abstract": "Medicinal chemistry projects involve some steps aiming to develop a new drug, such as the analysis of biological targets related to a given disease, the discovery and the development of drug candidates for these targets, performing parallel biological tests to validate the drug effectiveness and side effects. Approaches as quantitative study of activity-structure relationships (QSAR) involve the construction of predictive models that relate a set of descriptors of a chemical compound series and its biological activities with respect to one or more targets in the human body. Datasets used to perform QSAR analyses are generally characterized by a small number of samples and this makes them more complex to build accurate predictive models. In this context, transfer and multi-task learning techniques are very suitable since they take information from other QSAR models to the same biological target, reducing efforts and costs for generating new chemical compounds. Therefore, this review will present the main features of transfer and multi-task learning studies, as well as some applications and its potentiality in drug design projects."
},
{
"pmid": "17016423",
"title": "Drugs, their targets and the nature and number of drug targets.",
"abstract": "What is a drug target? And how many such targets are there? Here, we consider the nature of drug targets, and by classifying known drug substances on the basis of the discussed principles we provide an estimation of the total number of current drug targets."
},
{
"pmid": "24524735",
"title": "QSAR modeling of imbalanced high-throughput screening data in PubChem.",
"abstract": "Many of the structures in PubChem are annotated with activities determined in high-throughput screening (HTS) assays. Because of the nature of these assays, the activity data are typically strongly imbalanced, with a small number of active compounds contrasting with a very large number of inactive compounds. We have used several such imbalanced PubChem HTS assays to test and develop strategies to efficiently build robust QSAR models from imbalanced data sets. Different descriptor types [Quantitative Neighborhoods of Atoms (QNA) and \"biological\" descriptors] were used to generate a variety of QSAR models in the program GUSAR. The models obtained were compared using external test and validation sets. We also report on our efforts to incorporate the most predictive of our models in the publicly available NCI/CADD Group Web services ( http://cactus.nci.nih.gov/chemical/apps/cap)."
},
{
"pmid": "26099013",
"title": "Beware of R(2): Simple, Unambiguous Assessment of the Prediction Accuracy of QSAR and QSPR Models.",
"abstract": "The statistical metrics used to characterize the external predictivity of a model, i.e., how well it predicts the properties of an independent test set, have proliferated over the past decade. This paper clarifies some apparent confusion over the use of the coefficient of determination, R(2), as a measure of model fit and predictive power in QSAR and QSPR modeling. R(2) (or r(2)) has been used in various contexts in the literature in conjunction with training and test data for both ordinary linear regression and regression through the origin as well as with linear and nonlinear regression models. We analyze the widely adopted model fit criteria suggested by Golbraikh and Tropsha ( J. Mol. Graphics Modell. 2002 , 20 , 269 - 276 ) in a strict statistical manner. Shortcomings in these criteria are identified, and a clearer and simpler alternative method to characterize model predictivity is provided. The intent is not to repeat the well-documented arguments for model validation using test data but rather to guide the application of R(2) as a model fit statistic. Examples are used to illustrate both correct and incorrect uses of R(2). Reporting the root-mean-square error or equivalent measures of dispersion, which are typically of more practical importance than R(2), is also encouraged, and important challenges in addressing the needs of different categories of users such as computational chemists, experimental scientists, and regulatory decision support specialists are outlined."
},
{
"pmid": "17880194",
"title": "y-Randomization and its variants in QSPR/QSAR.",
"abstract": "y-Randomization is a tool used in validation of QSPR/QSAR models, whereby the performance of the original model in data description (r2) is compared to that of models built for permuted (randomly shuffled) response, based on the original descriptor pool and the original model building procedure. We compared y-randomization and several variants thereof, using original response, permuted response, or random number pseudoresponse and original descriptors or random number pseudodescriptors, in the typical setting of multilinear regression (MLR) with descriptor selection. For each combination of number of observations (compounds), number of descriptors in the final model, and number of descriptors in the pool to select from, computer experiments using the same descriptor selection method result in two different mean highest random r2 values. A lower one is produced by y-randomization or a variant likewise based on the original descriptors, while a higher one is obtained from variants that use random number pseudodescriptors. The difference is due to the intercorrelation of real descriptors in the pool. We propose to compare an original model's r2 to both of these whenever possible. The meaning of the three possible outcomes of such a double test is discussed. Often y-randomization is not available to a potential user of a model, due to the values of all descriptors in the pool for all compounds not being published. In such cases random number experiments as proposed here are still possible. The test was applied to several recently published MLR QSAR equations, and cases of failure were identified. Some progress also is reported toward the aim of obtaining the mean highest r2 of random pseudomodels by calculation rather than by tedious multiple simulations on random number variables."
}
] |
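As a simplified, hedged illustration of the instance-based MTL idea summarised above (pooling training instances from evolutionarily related drug targets, with a sequence-similarity metric deciding which targets may contribute), the following Python sketch uses scikit-learn's random forest. The toy fingerprints, activity values, similarity scores, threshold, and weighting scheme are all assumptions for illustration; they are not the exact procedure or data of the paper.

```python
# Simplified sketch of instance-based multi-task QSAR learning with a
# sequence-similarity task metric. All values below are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)

# Toy data: three drug targets, each with binary fingerprints X (40 x 1024)
# and a vector y of activities (e.g. pIC50-like values).
tasks = {t: (rng.integers(0, 2, (40, 1024)), rng.normal(6.0, 1.0, 40))
         for t in ("target_A", "target_B", "target_C")}

# Hypothetical pairwise sequence similarities between targets
# (e.g. normalised alignment scores).
sim = {("target_A", "target_B"): 0.8,
       ("target_A", "target_C"): 0.2,
       ("target_B", "target_C"): 0.3}

def similarity(t1, t2):
    """Symmetric lookup of the target-target similarity (1.0 for identity)."""
    return 1.0 if t1 == t2 else sim.get((t1, t2), sim.get((t2, t1), 0.0))

def fit_instance_based(focal, min_sim=0.5):
    """Pool instances from the focal target and from sufficiently similar
    targets, weighting borrowed instances by the task similarity."""
    X_parts, y_parts, w_parts = [], [], []
    for t, (X, y) in tasks.items():
        s = similarity(focal, t)
        if t == focal or s >= min_sim:
            X_parts.append(X)
            y_parts.append(y)
            w_parts.append(np.full(len(y), s))
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(np.vstack(X_parts), np.concatenate(y_parts),
              sample_weight=np.concatenate(w_parts))
    return model

# For target_A, data from target_B (similarity 0.8) is borrowed,
# while target_C (similarity 0.2) is excluded by the threshold.
model_A = fit_instance_based("target_A")
print(model_A.predict(tasks["target_A"][0][:3]))
```

In a real setting, the similarity values would come from pairwise sequence comparison of the protein targets, and the similarity threshold and weighting scheme would be chosen empirically rather than fixed as above.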
Artificial Intelligence in Medicine | 31521254 | PMC6855300 | 10.1016/j.artmed.2019.06.004 | Labeling images with facial emotion and the potential for pediatric healthcare | Highlights: • Autism spectrum disorder (ASD) affects 750,000 American children under the age of 10. • Emotion classifiers integrated into mobile solutions can be used for screening and therapy. • Emotion classifiers do not generalize well to children due to a lack of labeled training data. • We propose a method of aggregating emotive video through a mobile game. • We demonstrate that several algorithms can automatically label frames from video derived from the game. | 2. Related work
Our primary aim is to develop methods to crowdsource facial-emotion labeled data from children with ASD, with the greater goal of training classifiers suitable for the pediatric population for use as outcome measures, therapies, and screening tools. These systems fall within the scope of affective computing: a field that broadly covers the development and application of methods to give computers the ability to recognize and express emotions [24]. An overview of this area was provided in [25], in which Picard described emerging trends in emotion recognition research using electrodermal activity, speech, motion, facial expression, and other sensing paradigms. Picard outlined a vision for future affective computing research that partners psychologists with engineers to interweave emotion detection into everyday life.
Various research efforts have explored whether children with ASD differ in their ability to emote compared to their neurotypical peers. For example, Brewer et al. [26] investigated whether individuals with and without ASD can correctly identify emotional facial expressions. The results indicated that, regardless of the status of the recognizer, emotions produced by individuals with ASD were more poorly recognized compared to those of their typically developing peers. By contrast, Faso et al. [27] conducted a study in which 38 observers evaluated the expressions of individuals with and without ASD and showed that ASD expressions were identified with greater accuracy, though they were rated as less natural and more intense compared to those from typically developing individuals. In another study, Capps et al. [28] explored parents’ perceptions of the emotional expressiveness of their children. The findings of this study contradict older studies which suggest an absence of emotional reactions from children with ASD. In fact, the results demonstrated that older children with ASD displayed more facial affect than typically developing children. Other research efforts [29] examined facial muscle movements associated with emotion expression in children with ASD based on videotapes from semi-structured play sessions. This study found that children with autism exhibited reduced muscle movements in certain facial regions compared to typically developing peers.
While several systems have been developed to help children recognize and express facial emotion [14], [30], other studies have focused on improving the ability of neurotypical children and adults to interact with individuals with ASD. For example, Tang et al. [31] described an IoT-based play environment designed to allow neurotypical children to better understand the emotions of their peers with autism using a variety of sensors including pressure, temperature, humidity, and a Kinect camera.
The authors later conducted a computational study in which they evaluated children's facial expressions during naturalistic tasks in which the children viewed cartoons while being recorded by a Kinect camera [32]. As before, the aim of this preliminary study was to develop tools to assist typically developing individuals in understanding the emotions of children with autism. More broadly, Aztiria et al. provided an overview of the field of affect-aware ambient intelligence [33]. The authors describe the various forms of affect that can be characterized using wearable and ambient sensors, including voice, body language, posture, and physiological signals such as EEG and EMG. This work provided a broad overview of these techniques as well as several relevant applications such as intelligent tutoring services (ITS): systems capable of recognizing student affect to assist in the student's learning process. Further work by Karyotis et al. [34] proposed a computational methodology for incorporating emotion into intelligence system design, validated through multiple simulations. The authors proposed a fuzzy emotion representation framework, and demonstrated its utility in big data applications such as social networks, data queries, and sentiment analysis. The work by Maniak et al. [35] proposed a deep neural network model for hierarchical feature extraction to model human reasoning within the context of sound classification. In recent years, computer vision-based systems have received increasing interest in ASD research. In [36], Marcu et al. proposed a system in which wearable cameras are affixed to children for understanding their needs and preferences while improving their engagement. In [37], Picard et al. provided an overview of methods to automatically detect autonomic nervous-system activation (ASM) in children with ASD to identify and avoid incidents of cognitive overload. Another mobile assistance technology, MOSOC, was presented by Escobedo et al. in [10]. Here, the authors developed a tool that provides visual support of a validated curriculum to help children with ASD practice social skills in real-life situations. These systems are indicative of a general transition from traditional healthcare practices to modern mobile and digital solutions that leverage recent advances in computer vision, augmented reality, robotics, and artificial intelligence. This trend motivates an investigation of methods to augment existing datasets to train new classifiers that generalize to children with ASD. Several methods of crowdsourced labeled data acquisition have been proposed in recent years. In [38], Barsoum et al. proposed a deep convolutional neural network architecture to evaluate four different labeling techniques. Specifically, the authors explored techniques to combine scores from ten raters into a final label for each image while minimizing errors. Other research efforts [39] have also explored the efficacy of multi-class labels for each image to mitigate the impact of ambiguities on data labeling. In [40], Yu et al. demonstrated that an ensemble of deep learning classifiers can significantly outperform a single classifier for facial emotion recognition. This approach is similar to our own ensemble method, though our technique fuses minimum likelihood with game meta information rather than assigning the label with the maximum probability.
This technique, which uses variations in probability scores to search for relevant frames and regions within time-series data, is partially inspired by prior work on time-series segmentation [41], [42]. | [
"30478241",
"18606031",
"19948568",
"23101741",
"16322174",
"3681648",
"26053037",
"8326050",
"24342850",
"30481180",
"26430167"
] | [
{
"pmid": "30478241",
"title": "The Prevalence of Parent-Reported Autism Spectrum Disorder Among US Children.",
"abstract": ": media-1vid110.1542/5839990273001PEDS-VA_2017-4161Video Abstract OBJECTIVES: To estimate the national prevalence of parent-reported autism spectrum disorder (ASD) diagnosis among US children aged 3 to 17 years as well as their treatment and health care experiences using the 2016 National Survey of Children's Health (NSCH).\n\n\nMETHODS\nThe 2016 NSCH is a nationally representative survey of 50 212 children focused on the health and well-being of children aged 0 to 17 years. The NSCH collected parent-reported information on whether children ever received an ASD diagnosis by a care provider, current ASD status, health care use, access and challenges, and methods of treatment. We calculated weighted prevalence estimates of ASD, compared health care experiences of children with ASD to other children, and examined factors associated with increased likelihood of medication and behavioral treatment.\n\n\nRESULTS\nParents of an estimated 1.5 million US children aged 3 to 17 years (2.50%) reported that their child had ever received an ASD diagnosis and currently had the condition. Children with parent-reported ASD diagnosis were more likely to have greater health care needs and difficulties accessing health care than children with other emotional or behavioral disorders (attention-deficit/hyperactivity disorder, anxiety, behavioral or conduct problems, depression, developmental delay, Down syndrome, intellectual disability, learning disability, Tourette syndrome) and children without these conditions. Of children with current ASD, 27% were taking medication for ASD-related symptoms, whereas 64% received behavioral treatments in the last 12 months, with variations by sociodemographic characteristics and co-occurring conditions.\n\n\nCONCLUSIONS\nThe estimated prevalence of US children with a parent-reported ASD diagnosis is now 1 in 40, with rates of ASD-specific treatment usage varying by children's sociodemographic and co-occurring conditions."
},
{
"pmid": "18606031",
"title": "Early behavioral intervention, brain plasticity, and the prevention of autism spectrum disorder.",
"abstract": "Advances in the fields of cognitive and affective developmental neuroscience, developmental psychopathology, neurobiology, genetics, and applied behavior analysis have contributed to a more optimistic outcome for individuals with autism spectrum disorder (ASD). These advances have led to new methods for early detection and more effective treatments. For the first time, prevention of ASD is plausible. Prevention will entail detecting infants at risk before the full syndrome is present and implementing treatments designed to alter the course of early behavioral and brain development. This article describes a developmental model of risk, risk processes, symptom emergence, and adaptation in ASD that offers a framework for understanding early brain plasticity in ASD and its role in prevention of the disorder."
},
{
"pmid": "19948568",
"title": "Randomized, controlled trial of an intervention for toddlers with autism: the Early Start Denver Model.",
"abstract": "OBJECTIVE\nTo conduct a randomized, controlled trial to evaluate the efficacy of the Early Start Denver Model (ESDM), a comprehensive developmental behavioral intervention, for improving outcomes of toddlers diagnosed with autism spectrum disorder (ASD).\n\n\nMETHODS\nForty-eight children diagnosed with ASD between 18 and 30 months of age were randomly assigned to 1 of 2 groups: (1) ESDM intervention, which is based on developmental and applied behavioral analytic principles and delivered by trained therapists and parents for 2 years; or (2) referral to community providers for intervention commonly available in the community.\n\n\nRESULTS\nCompared with children who received community-intervention, children who received ESDM showed significant improvements in IQ, adaptive behavior, and autism diagnosis. Two years after entering intervention, the ESDM group on average improved 17.6 standard score points (1 SD: 15 points) compared with 7.0 points in the comparison group relative to baseline scores. The ESDM group maintained its rate of growth in adaptive behavior compared with a normative sample of typically developing children. In contrast, over the 2-year span, the comparison group showed greater delays in adaptive behavior. Children who received ESDM also were more likely to experience a change in diagnosis from autism to pervasive developmental disorder, not otherwise specified, than the comparison group.\n\n\nCONCLUSIONS\nThis is the first randomized, controlled trial to demonstrate the efficacy of a comprehensive developmental behavioral intervention for toddlers with ASD for improving cognitive and adaptive behavior and reducing severity of ASD diagnosis. Results of this study underscore the importance of early detection of and intervention in autism."
},
{
"pmid": "23101741",
"title": "Early behavioral intervention is associated with normalized brain activity in young children with autism.",
"abstract": "OBJECTIVE\nA previously published randomized clinical trial indicated that a developmental behavioral intervention, the Early Start Denver Model (ESDM), resulted in gains in IQ, language, and adaptive behavior of children with autism spectrum disorder. This report describes a secondary outcome measurement from this trial, EEG activity.\n\n\nMETHOD\nForty-eight 18- to 30-month-old children with autism spectrum disorder were randomized to receive the ESDM or referral to community intervention for 2 years. After the intervention (age 48 to 77 months), EEG activity (event-related potentials and spectral power) was measured during the presentation of faces versus objects. Age-matched typical children were also assessed.\n\n\nRESULTS\nThe ESDM group exhibited greater improvements in autism symptoms, IQ, language, and adaptive and social behaviors than the community intervention group. The ESDM group and typical children showed a shorter Nc latency and increased cortical activation (decreased α power and increased θ power) when viewing faces, whereas the community intervention group showed the opposite pattern (shorter latency event-related potential [ERP] and greater cortical activation when viewing objects). Greater cortical activation while viewing faces was associated with improved social behavior.\n\n\nCONCLUSIONS\nThis was the first trial to demonstrate that early behavioral intervention is associated with normalized patterns of brain activity, which is associated with improvements in social behavior, in young children with autism spectrum disorder."
},
{
"pmid": "16322174",
"title": "Factors associated with age of diagnosis among children with autism spectrum disorders.",
"abstract": "OBJECTIVE\nEarly diagnosis of children with autism spectrum disorders (ASD) is critical but often delayed until school age. Few studies have identified factors that may delay diagnosis. This study attempted to identify these factors among a community sample of children with ASD.\n\n\nMETHODS\nSurvey data were collected in Pennsylvania from 969 caregivers of children who had ASD and were younger than 21 years regarding their service experiences. Linear regression was used to identify clinical and demographic characteristics associated with age of diagnosis.\n\n\nRESULTS\nThe average age of diagnosis was 3.1 years for children with autistic disorder, 3.9 years for pervasive developmental disorder not otherwise specified, and 7.2 years for Asperger's disorder. The average age of diagnosis increased 0.2 years for each year of age. Rural children received a diagnosis 0.4 years later than urban children. Near-poor children received a diagnosis 0.9 years later than those with incomes >100% above the poverty level. Children with severe language deficits received a diagnosis an average of 1.2 years earlier than other children. Hand flapping, toe walking, and sustained odd play were associated with a decrease in the age of diagnosis, whereas oversensitivity to pain and hearing impairment were associated with an increase. Children who had 4 or more primary care physicians before diagnosis received a diagnosis 0.5 years later than other children, whereas those whose pediatricians referred them to a specialist received a diagnosis 0.3 years sooner.\n\n\nCONCLUSION\nThese findings suggest improvements over time in decreasing the age at which children with ASD, especially higher functioning children, receive a diagnosis. They also suggest a lack of resources in rural areas and for near-poor families and the importance of continuous pediatric care and specialty referrals. That only certain ASD-related behaviors, some of which are not required to satisfy diagnostic criteria, decreased the age of diagnosis suggests the importance of continued physician education."
},
{
"pmid": "3681648",
"title": "Universals and cultural differences in the judgments of facial expressions of emotion.",
"abstract": "We present here new evidence of cross-cultural agreement in the judgement of facial expression. Subjects in 10 cultures performed a more complex judgment task than has been used in previous cross-cultural studies. Instead of limiting the subjects to selecting only one emotion term for each expression, this task allowed them to indicate that multiple emotions were evident and the intensity of each emotion. Agreement was very high across cultures about which emotion was the most intense. The 10 cultures also agreed about the second most intense emotion signaled by an expression and about the relative intensity among expressions of the same emotion. However, cultural differences were found in judgments of the absolute level of emotional intensity."
},
{
"pmid": "26053037",
"title": "Can Neurotypical Individuals Read Autistic Facial Expressions? Atypical Production of Emotional Facial Expressions in Autism Spectrum Disorders.",
"abstract": "The difficulties encountered by individuals with autism spectrum disorder (ASD) when interacting with neurotypical (NT, i.e. nonautistic) individuals are usually attributed to failure to recognize the emotions and mental states of their NT interaction partner. It is also possible, however, that at least some of the difficulty is due to a failure of NT individuals to read the mental and emotional states of ASD interaction partners. Previous research has frequently observed deficits of typical facial emotion recognition in individuals with ASD, suggesting atypical representations of emotional expressions. Relatively little research, however, has investigated the ability of individuals with ASD to produce recognizable emotional expressions, and thus, whether NT individuals can recognize autistic emotional expressions. The few studies which have investigated this have used only NT observers, making it impossible to determine whether atypical representations are shared among individuals with ASD, or idiosyncratic. This study investigated NT and ASD participants' ability to recognize emotional expressions produced by NT and ASD posers. Three posing conditions were included, to determine whether potential group differences are due to atypical cognitive representations of emotion, impaired understanding of the communicative value of expressions, or poor proprioceptive feedback. Results indicated that ASD expressions were recognized less well than NT expressions, and that this is likely due to a genuine deficit in the representation of typical emotional expressions in this population. Further, ASD expressions were equally poorly recognized by NT individuals and those with ASD, implicating idiosyncratic, rather than common, atypical representations of emotional expressions in ASD."
},
{
"pmid": "8326050",
"title": "Parental perception of emotional expressiveness in children with autism.",
"abstract": "Parents' perceptions of their children's emotional expressiveness, and possible bases for these perceptions, were investigated in a study comparing older, nonretarded autistic and normal children and in another study comparing young autistic, mentally retarded, and normal children. Both groups of autistic children were perceived as showing more negative emotion and less positive emotion than comparison children. In the younger sample, parental perceptions correlated with the children's attention and responsiveness to others' displays of emotion in 2 laboratory situations. Findings contradict the view that autism involves the \"absence of emotional reaction\" (American Psychiatric Association, 1987, p. 35)."
},
{
"pmid": "24342850",
"title": "A quarter century of progress on the early detection and treatment of autism spectrum disorder.",
"abstract": "The last 25 years have witnessed tremendous changes in our ability to detect autism very early in life and provide interventions that can significantly influence children's outcomes. It was once questioned whether autism could be recognized before children had developed language and symbolic play skills; now changes in early behaviors, as well as structural brain changes, have been documented in infants 6-12 months of age who later develop autism. Advances in brain imaging and genetics offer the possibility of detecting autism before the syndrome is fully manifest, thereby reducing or preventing symptoms from developing. Whereas the primary mode of behavioral intervention a few decades ago relied on operant conditioning, recent approaches integrate the methods of applied behavioral analysis within a developmental, relationship-focused intervention model that are implemented by both parents and clinicians. These interventions have been found to have positive effects on children's developmental trajectory, as measured by both behavioral and neurophysiological assessments. Future approaches will likely combine both behavioral and pharmacological treatments for children who have less robust responses to behavioral interventions. There has been a paradigm shift in the way that autism is viewed, evolving from a lifelong condition with a very poor prognosis to one in which significant gains and neuroplasticity is expected, especially when the condition is detected early and appropriate interventions are provided. The grand challenge for the future is to bridge the tremendous gap between research and the implementation of evidence-based practices in the broader community, both in the United States and worldwide. Significant disparities in access to appropriate health care for children with autism exist that urgently require advocacy and more resources."
},
{
"pmid": "30481180",
"title": "Mobile detection of autism through machine learning on home video: A development and prospective validation study.",
"abstract": "BACKGROUND\nThe standard approaches to diagnosing autism spectrum disorder (ASD) evaluate between 20 and 100 behaviors and take several hours to complete. This has in part contributed to long wait times for a diagnosis and subsequent delays in access to therapy. We hypothesize that the use of machine learning analysis on home video can speed the diagnosis without compromising accuracy. We have analyzed item-level records from 2 standard diagnostic instruments to construct machine learning classifiers optimized for sparsity, interpretability, and accuracy. In the present study, we prospectively test whether the features from these optimized models can be extracted by blinded nonexpert raters from 3-minute home videos of children with and without ASD to arrive at a rapid and accurate machine learning autism classification.\n\n\nMETHODS AND FINDINGS\nWe created a mobile web portal for video raters to assess 30 behavioral features (e.g., eye contact, social smile) that are used by 8 independent machine learning models for identifying ASD, each with >94% accuracy in cross-validation testing and subsequent independent validation from previous work. We then collected 116 short home videos of children with autism (mean age = 4 years 10 months, SD = 2 years 3 months) and 46 videos of typically developing children (mean age = 2 years 11 months, SD = 1 year 2 months). Three raters blind to the diagnosis independently measured each of the 30 features from the 8 models, with a median time to completion of 4 minutes. Although several models (consisting of alternating decision trees, support vector machine [SVM], logistic regression (LR), radial kernel, and linear SVM) performed well, a sparse 5-feature LR classifier (LR5) yielded the highest accuracy (area under the curve [AUC]: 92% [95% CI 88%-97%]) across all ages tested. We used a prospectively collected independent validation set of 66 videos (33 ASD and 33 non-ASD) and 3 independent rater measurements to validate the outcome, achieving lower but comparable accuracy (AUC: 89% [95% CI 81%-95%]). Finally, we applied LR to the 162-video-feature matrix to construct an 8-feature model, which achieved 0.93 AUC (95% CI 0.90-0.97) on the held-out test set and 0.86 on the validation set of 66 videos. Validation on children with an existing diagnosis limited the ability to generalize the performance to undiagnosed populations.\n\n\nCONCLUSIONS\nThese results support the hypothesis that feature tagging of home videos for machine learning classification of autism can yield accurate outcomes in short time frames, using mobile devices. Further work will be needed to confirm that this approach can accelerate autism diagnosis at scale."
}
] |
Frontiers in Neurorobotics | 31798437 | PMC6861514 | 10.3389/fnbot.2019.00093 | Bootstrapping Knowledge Graphs From Images and Text | The problem of generating structured Knowledge Graphs (KGs) is difficult and open but relevant to a range of tasks related to decision making and information augmentation. A promising approach is to study generating KGs as a relational representation of inputs (e.g., textual paragraphs or natural images), where nodes represent the entities and edges represent the relations. This procedure is naturally a mixture of two phases: extracting primary relations from input, and completing the KG with reasoning. In this paper, we propose a hybrid KG builder that combines these two phases in a unified framework and generates KGs from scratch. Specifically, we employ a neural relation extractor resolving primary relations from input and a differentiable inductive logic programming (ILP) model that iteratively completes the KG. We evaluate our framework in both textual and visual domains and achieve comparable performance on relation extraction datasets based on Wikidata and the Visual Genome. The framework surpasses neural baselines by a noticeable gap in reasoning out dense KGs and overall performs particularly well for rare relations. | 2. Related Work: Relation extraction is an important task and necessary to obtain a detailed understanding of texts or images. In the following we first describe current approaches for relation extraction from textual data, before continuing to describe relation extraction from images. 2.1. Relation Extraction From Texts: Relation extraction has been widely used to obtain structured knowledge from plain text. The resulting structured relational facts are crucial to understanding large-scale corpora and can be utilized to automatically complete missing facts in KGs. Early neural relation extraction methods generally followed a supervised paradigm (Zeng et al., 2014; Nguyen and Grishman, 2015; Santos et al., 2015) and heavily relied on human-labeled datasets. However, the annotation of these datasets is labor-intensive and time-consuming. Recent relation extraction methods address the problem by creating large-scale training data via distant supervision. However, the assumption of distant supervision is very strong and often introduces noise. Much work has been invested in order to alleviate the wrong-labeling problem in distant supervision and to extract global relations between two entities from multiple supporting sentences (Riedel et al., 2010; Zeng et al., 2015; Lin et al., 2017; Feng et al., 2018; Qin et al., 2018). Recently, many approaches also explore the extraction of relations between entities on the sentence level in rich context (Sorokin and Gurevych, 2017; Zeng et al., 2017; Christopoulou et al., 2019; Zhu et al., 2019). Mintz et al. (2009) propose distant supervision to automatically generate a large-scale dataset for relation extraction by aligning plain text with knowledge graphs. The assumption of distant supervision is that all sentences containing an entity pair will express the corresponding relation in KGs. Zeng et al. (2015) further formulate distantly supervised relation extraction as a multi-instance learning problem, where instance bags consist of multiple sentences containing an entity pair, and take the uncertainty of instance labels into consideration by selecting the most confident supporting instance for relation prediction. Lin et al.
(2017) propose to obtain bag representations by semantic composition of instances, where instance weights are determined by selective attention. Feng et al. (2018) propose to filter false positive relation instances via reinforcement learning. Qin et al. (2018) propose an adversarial framework that jointly learns a generator and discriminator to distinguish false positive relation instances from distant supervision. Sorokin and Gurevych (2017) identify sentence-level relations between entity pairs in a rich context. They predict relations between each entity pair by considering all other possible entity pairs in the same sentence as context and modeling the correlation of relations via an attention mechanism. Christopoulou et al. (2019) model the context of an entity pair by iteratively aggregating walk paths between the target entity pair on the graph, and achieve comparable results without using external linguistic tools. Zhu et al. (2019) model implicit reasoning via message passing among context entity pairs. In this work, we also focus on extracting sentence-level relations. A crucial difference is that we extract relations within a sentence or paragraph sequentially to explicitly model the relation reasoning structure. Zeng et al. (2017) explicitly use a special first-order logic rule to model the dependencies of relations within a sentence. A crucial distinction of our model is that we are capable of modeling general and also long reasoning chains by recursively applying rules. 2.2. Relation Extraction From Images: In order to understand and reason about the context of an image we need not only information about objects within the scene, but also about relations between these objects. Therefore, extracting the relations between objects (e.g., in/on/under, support, etc.) yields a better scene understanding compared to just recognizing objects and their individual properties (Elliott and de Vries, 2015). While relations can be predicted pair-wise (Chao et al., 2015; Ramanathan et al., 2015), most current work focuses on the generation of a directed graph generally referred to as scene graph (Johnson et al., 2015; Xu et al., 2017; Zhang et al., 2017). Scene graphs are a way of representing the context of an image in a structured way to improve the performance of tasks such as visual question answering or image retrieval. Existing scene graph generators usually extend an object detection framework that first detects bounding boxes for objects, then extracts visual features and classifies objects inside bounding boxes, and finally predicts relations between objects in a parallel manner. One of the challenges is that the number of possible relations grows exponentially with the number of objects in an image. This makes it computationally challenging to evaluate all possible relations. Therefore, many approaches work on ways to prune unlikely relations from the graph or to only focus on the most probable relations from the beginning. Li et al. (2017) combine three tasks (object detection, scene graph generation, and region captioning) and show that learning all three tasks at once leads to an overall better performance since learned features can be shared across tasks. Xu et al. (2017) propose an end-to-end trainable approach for creating an image-grounded scene graph that consists of object categories, bounding boxes for the individual objects, and relationships between pairs of objects by iteratively refining its predictions. Liang et al.
(2017) perform prediction together with a traversal of the graph, essentially in a sequential manner. However, it takes only the last two prediction results into account and thus is unable to perform general logic inductions based on a partial inference result. Li and Gupta (2018) learn to transform 2D image representations into a graph representation where the nodes represent image regions and edges model similarity between these image regions, while Chen et al. (2018) introduce a graph structure specifically to facilitate reasoning between regions that are far apart in the image. Yang et al. (2018) make the scene graph generation more tractable and efficient by using a relation proposal network that identifies likely edges in the scene graph and a Graph Convolutional Network to update objects and their relationships based on the objects' neighbors. Woo et al. (2018) propose a relational embedding module to jointly represent connections among all objects instead of focusing on objects in isolation. Related to our approach, Wan et al. (2018) work on completing existing scene graphs given an image and a corresponding scene graph. However, they do not use logic reasoning, but instead, use a neural network to extract unidentified relations between existing nodes in the scene graph to obtain improved scene graphs with more accurate relations. The approach, however, is still completely data-driven and, as such, it is not clear how it handles the long tail of sparsely occurring relations and how it generalizes to novel object-relation triplets. Zellers et al. (2018) observe that object labels are highly predictive of relation labels (but not vice versa) and use this insight to develop both a new baseline and a network that takes this into consideration by staging bounding box predictions, object identities (in the bounding boxes), and relations in a hierarchical manner. Chen et al. (2019) show that knowledge about correlations between objects and associated relations can be explicitly represented in a KG. A novel routing network then facilitates scene graph generation by using prior statistical knowledge about the interplay of objects and relations. Gu et al. (2019) incorporate commonsense knowledge into the generation process of a scene graph by using an external KG, while Qi et al. (2019) use linguistic knowledge to improve the performance on detecting semantic relations by using a semantic transformation module to map visual features and word embeddings into a common semantic space. So far, most work on extracting scene graphs from images is based purely on data-driven learning with neural networks. This creates challenges in scalability (especially for images with many objects) and suffers from the long tail of relations in the training data, which is difficult to learn for neural network-based approaches. Additionally, it is not clear whether these approaches are able to generalize learned relations to novel settings. In contrast, our approach combines data-driven neural networks with a differentiable model that applies logic rules for relation extraction. This enables us to insert prior knowledge about certain relations (e.g., transitivity) into our model which can help with generalizability (since relations are now decoupled from the objects), scalability (we can efficiently evaluate the learned rules), and the long tail of relations in the training data (once a rule encodes one of these relations we can easily apply it to other objects, too). | [
"2354612",
"2450716",
"16112549",
"27411231",
"9377276",
"29951191",
"26017442"
] | [
{
"pmid": "2354612",
"title": "Connectionism and the problem of systematicity: why Smolensky's solution doesn't work.",
"abstract": "In two recent papers, Paul Smolensky responds to a challenge Jerry Fodor and Zenon Pylyshyn posed for connectionist theories of cognition: to explain the existence of systematic relations among cognitive capacities without assuming that mental processes are causally sensitive to the constituent structure of mental representations. Smolensky thinks connectionists can explain systematicity if they avail themselves of \"distributed\" mental representation. In facts, Smolensky offers two accounts of distributed mental representation, corresponding to his notions of \"weak\" and \"strong\" compositional structure. We argue that weak compositional structure is irrelevant to the systematicity problem and of dubious internal coherence. We then argue that strong compositional (tensor product) representations fail to explain systematicity because they fail to exhibit the sort of constituents that can provide domains for structure sensitive mental processes."
},
{
"pmid": "16112549",
"title": "Framewise phoneme classification with bidirectional LSTM and other neural network architectures.",
"abstract": "In this paper, we present bidirectional Long Short Term Memory (LSTM) networks, and a modified, full gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short Term Memory (LSTM) is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it."
},
{
"pmid": "27411231",
"title": "LSTM: A Search Space Odyssey.",
"abstract": "Several variants of the long short-term memory (LSTM) architecture for recurrent neural networks have been proposed since its inception in 1995. In recent years, these networks have become the state-of-the-art models for a variety of machine learning problems. This has led to a renewed interest in understanding the role and utility of various computational components of typical LSTM variants. In this paper, we present the first large-scale analysis of eight LSTM variants on three representative tasks: speech recognition, handwriting recognition, and polyphonic music modeling. The hyperparameters of all LSTM variants for each task were optimized separately using random search, and their importance was assessed using the powerful functional ANalysis Of VAriance framework. In total, we summarize the results of 5400 experimental runs ( ≈ 15 years of CPU time), which makes our study the largest of its kind on LSTM networks. Our results show that none of the variants can improve upon the standard LSTM architecture significantly, and demonstrate the forget gate and the output activation function to be its most critical components. We further observe that the studied hyperparameters are virtually independent and derive guidelines for their efficient adjustment."
},
{
"pmid": "9377276",
"title": "Long short-term memory.",
"abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."
},
{
"pmid": "29951191",
"title": "Not-So-CLEVR: learning same-different relations strains feedforward neural networks.",
"abstract": "The advent of deep learning has recently led to great successes in various engineering applications. As a prime example, convolutional neural networks, a type of feedforward neural network, now approach human accuracy on visual recognition tasks like image classification and face recognition. However, here we will show that feedforward neural networks struggle to learn abstract visual relations that are effortlessly recognized by non-human primates, birds, rodents and even insects. We systematically study the ability of feedforward neural networks to learn to recognize a variety of visual relations and demonstrate that same-different visual relations pose a particular strain on these networks. Networks fail to learn same-different visual relations when stimulus variability makes rote memorization difficult. Further, we show that learning same-different problems becomes trivial for a feedforward network that is fed with perceptually grouped stimuli. This demonstration and the comparative success of biological vision in learning visual relations suggests that feedback mechanisms such as attention, working memory and perceptual grouping may be the key components underlying human-level abstract visual reasoning."
},
{
"pmid": "26017442",
"title": "Deep learning.",
"abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech."
}
] |
Frontiers in Plant Science | 31798598 | PMC6868057 | 10.3389/fpls.2019.01321 | Deep Learning-Based Phenotyping System With Glocal Description of Plant Anomalies and Symptoms | Recent advances in Deep Neural Networks have allowed the development of efficient and automated diagnosis systems for plant anomalies recognition. Although existing methods have shown promising results, they present several limitations to provide an appropriate characterization of the problem, especially in real-field scenarios. To address this limitation, we propose an approach that, besides being able to efficiently detect and localize plant anomalies, can generate more detailed information about their symptoms and interactions with the scene, by combining visual object recognition and language generation. It uses an image as input and generates a diagnosis result that shows the location of anomalies and sentences describing the symptoms as output. Our framework is divided into two main parts: First, a detector obtains a set of region features that contain the anomalies using a Region-based Deep Neural Network. Second, a language generator takes the features of the detector as input and generates descriptive sentences with details of the symptoms using Long-Short Term Memory (LSTM). Our loss metric allows the system to be trained end-to-end from the object detector to the language generator. Finally, the system outputs a set of bounding boxes along with the sentences that describe their symptoms using glocal criteria in two different ways: a set of specific descriptions of the anomalies detected in the plant and an abstract description that provides general information about the scene. We demonstrate the efficiency of our approach in the challenging tomato diseases and pests recognition task. We further show that our approach achieves a mean Average Precision (mAP) of 92.5% in our newly created Tomato Plant Anomalies Description Dataset. Our objective evaluation allows users to understand the relationships between pathologies and their evolution throughout their stage of infection, location in the plant, symptoms, etc. Our work introduces a cost-efficient tool that provides farmers with a technology that facilitates proper handling of crops. | Related Works: In this section, we first introduce some related works based on deep neural networks for object detection and image description. Then, we review some recent techniques used for plant anomalies recognition. Deep Learning Methods for Object Detection and Image-Based Description: In vision systems, object detection has opened a wide range of opportunities with several applications in different fields. These systems involve not only recognizing and classifying objects in the image (Russakovsky et al., 2015) but also localizing them by drawing bounding boxes around their area (Ren et al., 2016). State-of-the-art methods based on deep learning for object detection can be categorized into two types: two-stage (Dai et al., 2016; Ren et al., 2016; He et al., 2017) and single-stage (Redmon et al., 2015; Liu et al., 2016; Redmon and Farhadi, 2017). Correspondingly, in recent years, much of the progress in deep learning has also been directed toward developing handy and efficient feature extractors (Krizhevsky et al., 2012; Simonyan and Zissermann, 2014; Szegedy et al., 2015; He et al., 2016; Huang et al., 2017; Hu et al., 2017; Xie et al., 2017).
Lately, Feature Pyramid Network (FPN) (Lin et al., 2017) has shown progress, especially in the recognition of objects at various scales. Basically, it exploits a pyramidal form of CNN feature hierarchy while creating a feature pyramid that has semantics at all scales. The result is a feature pyramid that has rich semantics at all levels and is built quickly from a single input image. In addition to recognizing patterns within images, methods based on deep learning have shown remarkable abilities to generate text as well (Bahdanau et al., 2015). In practice, to generate an automatic description from the images, it is necessary to understand how humans describe an image (Bernardi et al., 2016). Humans by nature have the ability to find relationships between objects and their possible interaction, their attributes and actions they perform. The problem of generating descriptions from visual data has been widely studied and recent interest has been put into solving the problem of image description in natural language (Kiros et al., 2014a; Kiros et al., 2014b; Donahue et al., 2017; Kaparthy and Li, 2017; Vinyals et al., 2017). For instance, Kiros et al. (2014a) used a CNN to learn representations of words and image characteristics together by jointly training a language model. Subsequently, Kiros et al. (2014b) proposed an encoder-decoder based method that learns a joint image-sentence embedding where sentences are encoded using LSTM recurrent neural networks. To that purpose, image features extracted by a CNN are projected into the space of the LSTM to generate language. Kaparthy et al. (Kaparthy and Li, 2017) developed a deep neural network that infers the alignment between segments of sentences and the area of the image that they describe. Specifically, they use a Region-Based Convolutional Neural Network (R-CNN) to find objects in the image, and an RNN to generate a description in the form of text. In addition, to make the combination of visual recognition and description end-to-end trainable, Donahue et al. (2017) proposed Long-term Recurrent Convolutional Networks (LRCNs). Further, Vinyals et al. (2017) introduced an end-to-end approach to generate a description of images. They combine vision (CNN) for image classification and language models (RNN) for language generation. Although the detection and recognition of objects are necessary, they are not sufficient to produce detailed information. The results are a list of labels corresponding to the objects in the image. In specific applications, an efficient image description should not only contain a list of objects but also possibly a clear and concise description of them. In that direction, several recent studies take advantage of image description on regions to describe images with natural language (Johnson et al., 2016; Kaparthy and Li, 2017; Yang et al., 2017). They are specifically based on an RNN language model that is conditioned on the image information. However, in those approaches, they tackle the problem from a subjective point of view, since they find the objects that are present in the image but not the relationships between them. In our work, we extend the idea of object-based description as an application for recognizing plant anomalies.
The system can provide more precise and clear information about the pathology that affects a plant. Plant Anomalies Recognition: The worldwide accessibility to mobile systems and the recent advances in software and hardware technologies have allowed the implementation of more efficient technologies in several areas. Recently, several works have demonstrated the potential and possibilities of utilizing deep neural network techniques for phenotyping in plants (Mutka and Bart, 2015; Singh et al., 2016; Araus et al., 2018; Singh et al., 2018). The rapid growth of sophistication and capabilities of deep neural networks has opened up a wide range of opportunities to extend their application towards the solution of common problems in the plant science research community, such as the case of plant diseases recognition. Following this trend, recent studies based on deep learning have addressed automated identification of plant diseases by non-destructive methods in different types of crops. These methods can be divided into two types: image-based diseases recognition and region-based diseases recognition. In approaches based on image classification, features of images containing a specific disease are extracted using CNN and subsequently classified into different categories. Some examples include the detection of plant anomalies in several crops such as apple (Liu et al., 2018), bananas (Amara et al., 2017), cucumber (Kawasaki et al., 2015), tomato (Fuentes et al., 2017a), etc. This application has been further extended to multiple crops (Mohanty et al., 2016; Sladojevic et al., 2016) to distinguish different types of pathologies out of healthy leaves. However, it is worth mentioning that, although these approaches show the use of CNN-based methods as a powerful tool to extract features and efficiently classify images that contain particular diseases in different types of crops, they are limited to performing experiments using images obtained in the laboratory, rather than a real scenario. Therefore, they do not cover all variations included in real-field conditions such as state of infections, presence of various anomalies in the same sample, surrounding objects, etc. Consequently, their results may be subjective to the scene instead of the diseases in particular. In contrast to the aforementioned works, Fuentes et al. (2017b) proposed a robust system that can recognize nine different types of anomalies in tomato plants. They show a satisfactory method that is able to provide the category (class) and location (bounding box) of pathologies using images collected in real-field scenarios. Recently, Fuentes et al. (2018) extended their work in (Fuentes et al., 2017b) and showed a significant improvement in the task of tomato plant anomalies recognition using a secondary diagnostic function based on CNN-filter banks to reduce the influence of the false positives generated by the detector. Compared to their previous approach (Fuentes et al., 2017b), they obtained a recognition rate of approximately 96%, which is a gain of 13%. This system has also demonstrated to be an effective technique to address the problem of class imbalance that appears especially in datasets with limited data. In general, although the works mentioned above have substantially allowed satisfactory detection and recognition of plant anomalies, they present limited capabilities to provide a better characterization of the problem, especially in real-field scenarios.
In other words, they lack the specific information that would allow users to better understand the state of the infection based on the symptoms of diseases. To address this limitation, we propose an approach that differs mainly from previous methods in that, besides being able to detect plant anomalies and their location in the image, it also provides more detailed information about their symptoms and interactions with the scene. It uses an image as input and produces a user-friendly diagnostic result that is shown in the form of sentences as output. | [
"29067037",
"30210509",
"9377276",
"27713752",
"25601871",
"27295650",
"26679045",
"28969999"
] | [
{
"pmid": "29067037",
"title": "X-FIDO: An Effective Application for Detecting Olive Quick Decline Syndrome with Deep Learning and Data Fusion.",
"abstract": "We have developed a vision-based program to detect symptoms of Olive Quick Decline Syndrome (OQDS) on leaves of Olea europaea L. infected by Xylella fastidiosa, named X-FIDO (Xylella FastIdiosa Detector for O. europaea L.). Previous work predicted disease from leaf images with deep learning but required a vast amount of data which was obtained via crowd sourcing such as the PlantVillage project. This approach has limited applicability when samples need to be tested with traditional methods (i.e., PCR) to avoid incorrect training input or for quarantine pests which manipulation is restricted. In this paper, we demonstrate that transfer learning can be leveraged when it is not possible to collect thousands of new leaf images. Transfer learning is the re-application of an already trained deep learner to a new problem. We present a novel algorithm for fusing data at different levels of abstraction to improve performance of the system. The algorithm discovers low-level features from raw data to automatically detect veins and colors that lead to symptomatic leaves. The experiment included images of 100 healthy leaves, 99 X. fastidiosa-positive leaves and 100 X. fastidiosa-negative leaves with symptoms related to other stress factors (i.e., abiotic factors such as water stress or others diseases). The program detects OQDS with a true positive rate of 98.60 ± 1.47% in testing, showing great potential for image analysis for this disease. Results were obtained with a convolutional neural network trained with the stochastic gradient descent method, and ten trials with a 75/25 split of training and testing data. This work shows potential for massive screening of plants with reduced diagnosis time and cost."
},
{
"pmid": "30210509",
"title": "High-Performance Deep Neural Network-Based Tomato Plant Diseases and Pests Diagnosis System With Refinement Filter Bank.",
"abstract": "A fundamental problem that confronts deep neural networks is the requirement of a large amount of data for a system to be efficient in complex applications. Promising results of this problem are made possible through the use of techniques such as data augmentation or transfer learning of pre-trained models in large datasets. But the problem still persists when the application provides limited or unbalanced data. In addition, the number of false positives resulting from training a deep model significantly cause a negative impact on the performance of the system. This study aims to address the problem of false positives and class unbalance by implementing a Refinement Filter Bank framework for Tomato Plant Diseases and Pests Recognition. The system consists of three main units: First, a Primary Diagnosis Unit (Bounding Box Generator) generates the bounding boxes that contain the location of the infected area and class. The promising boxes belonging to each class are then used as input to a Secondary Diagnosis Unit (CNN Filter Bank) for verification. In this second unit, misclassified samples are filtered through the training of independent CNN classifiers for each class. The result of the CNN Filter Bank is a decision of whether a target belongs to the category as it was detected (True) or not (False) otherwise. Finally, an integration unit combines the information from the primary and secondary units while keeping the True Positive samples and eliminating the False Positives that were misclassified in the first unit. By this implementation, the proposed approach is able to obtain a recognition rate of approximately 96%, which represents an improvement of 13% compared to our previous work in the complex task of tomato diseases and pest recognition. Furthermore, our system is able to deal with the false positives generated by the bounding box generator, and class unbalances that appear especially on datasets with limited data."
},
{
"pmid": "9377276",
"title": "Long short-term memory.",
"abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."
},
{
"pmid": "27713752",
"title": "Using Deep Learning for Image-Based Plant Disease Detection.",
"abstract": "Crop diseases are a major threat to food security, but their rapid identification remains difficult in many parts of the world due to the lack of the necessary infrastructure. The combination of increasing global smartphone penetration and recent advances in computer vision made possible by deep learning has paved the way for smartphone-assisted disease diagnosis. Using a public dataset of 54,306 images of diseased and healthy plant leaves collected under controlled conditions, we train a deep convolutional neural network to identify 14 crop species and 26 diseases (or absence thereof). The trained model achieves an accuracy of 99.35% on a held-out test set, demonstrating the feasibility of this approach. Overall, the approach of training deep learning models on increasingly large and publicly available image datasets presents a clear path toward smartphone-assisted crop disease diagnosis on a massive global scale."
},
{
"pmid": "25601871",
"title": "Image-based phenotyping of plant disease symptoms.",
"abstract": "Plant diseases cause significant reductions in agricultural productivity worldwide. Disease symptoms have deleterious effects on the growth and development of crop plants, limiting yields and making agricultural products unfit for consumption. For many plant-pathogen systems, we lack knowledge of the physiological mechanisms that link pathogen infection and the production of disease symptoms in the host. A variety of quantitative high-throughput image-based methods for phenotyping plant growth and development are currently being developed. These methods range from detailed analysis of a single plant over time to broad assessment of the crop canopy for thousands of plants in a field and employ a wide variety of imaging technologies. Application of these methods to the study of plant disease offers the ability to study quantitatively how host physiology is altered by pathogen infection. These approaches have the potential to provide insight into the physiological mechanisms underlying disease symptom development. Furthermore, imaging techniques that detect the electromagnetic spectrum outside of visible light allow us to quantify disease symptoms that are not visible by eye, increasing the range of symptoms we can observe and potentially allowing for earlier and more thorough symptom detection. In this review, we summarize current progress in plant disease phenotyping and suggest future directions that will accelerate the development of resistant crop varieties."
},
{
"pmid": "27295650",
"title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.",
"abstract": "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features-using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available."
},
{
"pmid": "26679045",
"title": "Relaying the Ethylene Signal: New Roles for EIN2.",
"abstract": "ETHYLENE INSENSITIVE 2 (EIN2), an endoplasmic reticulum (ER) localized protein, plays a central role in relaying the ethylene signal from ER perception to the nucleus. Two recent reports reveal the novel role for EIN2 in translational control, providing another layer of regulation for ethylene signal transduction."
},
{
"pmid": "28969999",
"title": "Are GM Crops for Yield and Resilience Possible?",
"abstract": "Crop yield improvements need to accelerate to avoid future food insecurity. Outside Europe, genetically modified (GM) crops for herbicide- and insect-resistance have been transformative in agriculture; other traits have also come to market. However, GM of yield potential and stress resilience has yet to impact on food security. Genes have been identified for yield such as grain number, size, leaf growth, resource allocation, and signaling for drought tolerance, but there is only one commercialized drought-tolerant GM variety. For GM and genome editing to impact on yield and resilience there is a need to understand yield-determining processes in a cell and developmental context combined with evaluation in the grower environment. We highlight a sugar signaling mechanism as a paradigm for this approach."
}
] |
Biology Direct | 31752974 | PMC6868770 | 10.1186/s13062-019-0249-6 | A novel framework for horizontal and vertical data integration in cancer studies with application to survival time prediction models | Background: Recently high-throughput technologies have been massively used alongside clinical tests to study various types of cancer. Data generated in such large-scale studies are heterogeneous, of different types and formats. With lack of effective integration strategies novel models are necessary for efficient and operative data integration, where both clinical and molecular information can be effectively joined for storage, access and ease of use. Such models, combined with machine learning methods for accurate prediction of survival time in cancer studies, can yield novel insights into disease development and lead to precise personalized therapies. Results: We developed an approach for intelligent data integration of two cancer datasets (breast cancer and neuroblastoma) − provided in the CAMDA 2018 'Cancer Data Integration Challenge', and compared models for prediction of survival time. We developed a novel semantic network-based data integration framework that utilizes NoSQL databases, where we combined clinical and expression profile data, using both raw data records and external knowledge sources. Utilizing the integrated data we introduced Tumor Integrated Clinical Feature (TICF) − a new feature for accurate prediction of patient survival time. Finally, we applied and validated several machine learning models for survival time prediction. Conclusions: We developed a framework for semantic integration of clinical and omics data that can borrow information across multiple cancer studies. By linking data with external domain knowledge sources our approach facilitates enrichment of the studied data by discovery of internal relations. The proposed and validated machine learning models for survival time prediction yielded accurate results. Reviewers: This article was reviewed by Eran Elhaik, Wenzhong Xiao and Carlos Loucera. | Related workIn this work horizontal integration is considered to be a management approach in which the raw data (patients, clinical records, expression profiles, etc.) can be "owned" and managed by one network. Usually, each type of raw data can define different semantics for common management purposes. In contrast, vertical integration semantically combines the attributes of each separate type of data that are related to one another. Additional information, in particular for the molecular data, can be found in external domain knowledge sources. With this newly added information the missing parts of the studied data can be filled in. In this way relations between attributes of the different records can be learnt. Currently, there are many established algorithms that address single-track data analysis [7, 8, 11, 12], and some recent successful approaches to integrative exploration [13]. These, however, usually only focus on one of the integration applications, either horizontal or vertical, underutilizing the entireness of the available information and the latent relations. We propose a novel framework that employs both these integration views. We show its value on a first example application to machine learning-based survival time prediction. | [
"16574494",
"12075666",
"12546870",
"11854055",
"15217521",
"27993167",
"26109056",
"23190475",
"29615097",
"29880025",
"10802651",
"1391987",
"3882734"
] | [
{
"pmid": "16574494",
"title": "Data integration and genomic medicine.",
"abstract": "Genomic medicine aims to revolutionize health care by applying our growing understanding of the molecular basis of disease. Research in this arena is data intensive, which means data sets are large and highly heterogeneous. To create knowledge from data, researchers must integrate these large and diverse data sets. This presents daunting informatic challenges such as representation of data that is suitable for computational inference (knowledge representation), and linking heterogeneous data sets (data integration). Fortunately, many of these challenges can be classified as data integration problems, and technologies exist in the area of data integration that may be applied to these challenges. In this paper, we discuss the opportunities of genomic medicine as well as identify the informatics challenges in this domain. We also review concepts and methodologies in the field of data integration. These data integration concepts and methodologies are then aligned with informatics challenges in genomic medicine and presented as potential solutions. We conclude this paper with challenges still not addressed in genomic medicine and gaps that remain in data integration research to facilitate genomic medicine."
},
{
"pmid": "12075666",
"title": "Biological data integration: wrapping data and tools.",
"abstract": "Nowadays scientific data is inevitably digital and stored in a wide variety of formats in heterogeneous systems. Scientists need to access an integrated view of remote or local heterogeneous data sources with advanced data accessing, analyzing, and visualization tools. Building a digital library for scientific data requires accessing and manipulating data extracted from flat files or databases, documents retrieved from the Web as well as data generated by software. We present an approach to wrapping web data sources, databases, flat files, or data generated by tools through a database view mechanism. Generally, a wrapper has two tasks: it first sends a query to the source to retrieve data and, second builds the expected output with respect to the virtual structure. Our wrappers are composed of a retrieval component based on an intermediate object view mechanism called search views mapping the source capabilities to attributes, and an eXtensible Markup Language (XML) engine, respectively, to perform these two tasks. The originality of the approach consists of: 1) a generic view mechanism to access seamlessly data sources with limited capabilities and 2) the ability to wrap data sources as well as the useful specific tools they may provide. Our approach has been developed and demonstrated as part of the multidatabase system supporting queries via uniform object protocol model (OPM) interfaces."
},
{
"pmid": "12546870",
"title": "Discovery informatics: its evolving role in drug discovery.",
"abstract": "Drug discovery and development is a highly complex process requiring the generation of very large amounts of data and information. Currently this is a largely unmet informatics challenge. The current approaches to building information and knowledge from large amounts of data has been addressed in cases where the types of data are largely homogeneous or at the very least well-defined. However, we are on the verge of an exciting new era of drug discovery informatics in which methods and approaches dealing with creating knowledge from information and information from data are undergoing a paradigm shift. The needs of this industry are clear: Large amounts of data are generated using a variety of innovative technologies and the limiting step is accessing, searching and integrating this data. Moreover, the tendency is to move crucial development decisions earlier in the discovery process. It is crucial to address these issues with all of the data at hand, not only from current projects but also from previous attempts at drug development. What is the future of drug discovery informatics? Inevitably, the integration of heterogeneous, distributed data are required. Mining and integration of domain specific information such as chemical and genomic data will continue to develop. Management and searching of textual, graphical and undefined data that are currently difficult, will become an integral part of data searching and an essential component of building information- and knowledge-bases."
},
{
"pmid": "11854055",
"title": "The evolving role of information technology in the drug discovery process.",
"abstract": "Information technologies for chemical structure prediction, heterogeneous database access, pattern discovery, and systems and molecular modeling have evolved to become core components of the modern drug discovery process. As this evolution continues, the balance between in silico modeling and 'wet' chemistry will continue to shift and it might eventually be possible to step through the discovery pipeline without the aid of traditional laboratory techniques. Rapid advances in the industrialization of gene sequencing combined with databases of protein sequence and structure have created a target-rich but lead-poor environment. During the next decade, newer information technologies that facilitate the molecular modeling of drug-target interactions are likely to shift this balance towards molecular-based personalized medicine -- the ultimate goal of the drug discovery process."
},
{
"pmid": "15217521",
"title": "Joint analysis of two microarray gene-expression data sets to select lung adenocarcinoma marker genes.",
"abstract": "BACKGROUND\nDue to the high cost and low reproducibility of many microarray experiments, it is not surprising to find a limited number of patient samples in each study, and very few common identified marker genes among different studies involving patients with the same disease. Therefore, it is of great interest and challenge to merge data sets from multiple studies to increase the sample size, which may in turn increase the power of statistical inferences. In this study, we combined two lung cancer studies using microarray GeneChip, employed two gene shaving methods and a two-step survival test to identify genes with expression patterns that can distinguish diseased from normal samples, and to indicate patient survival, respectively.\n\n\nRESULTS\nIn addition to common data transformation and normalization procedures, we applied a distribution transformation method to integrate the two data sets. Gene shaving (GS) methods based on Random Forests (RF) and Fisher's Linear Discrimination (FLD) were then applied separately to the joint data set for cancer gene selection. The two methods discovered 13 and 10 marker genes (5 in common), respectively, with expression patterns differentiating diseased from normal samples. Among these marker genes, 8 and 7 were found to be cancer-related in other published reports. Furthermore, based on these marker genes, the classifiers we built from one data set predicted the other data set with more than 98% accuracy. Using the univariate Cox proportional hazard regression model, the expression patterns of 36 genes were found to be significantly correlated with patient survival (p < 0.05). Twenty-six of these 36 genes were reported as survival-related genes from the literature, including 7 known tumor-suppressor genes and 9 oncogenes. Additional principal component regression analysis further reduced the gene list from 36 to 16.\n\n\nCONCLUSION\nThis study provided a valuable method of integrating microarray data sets with different origins, and new methods of selecting a minimum number of marker genes to aid in cancer diagnosis. After careful data integration, the classification method developed from one data set can be applied to the other with high prediction accuracy."
},
{
"pmid": "27993167",
"title": "Prognostic value of cross-omics screening for kidney clear cell renal cancer survival.",
"abstract": "BACKGROUND\nKidney renal clear cell carcinoma (KIRC) is a type of cancer that is resistant to chemotherapy and radiotherapy and has limited treatment possibilities. Large-scale molecular profiling of KIRC tumors offers a great potential to uncover the genetic and epigenetic changes underlying this disease and to improve the clinical management of KIRC patients. However, in practice the clinicians and researchers typically focus on single-platform molecular data or on a small set of genes. Using molecular and clinical data of over 500 patients, we have systematically studied which type of molecular data is the most informative in predicting the clinical outcome of KIRC patients, as a standalone platform and integrated with clinical data.\n\n\nRESULTS\nWe applied different computational approaches to preselect on survival-predictive genomic markers and evaluated the usability of mRNA/miRNA/protein expression data, copy number variation (CNV) data and DNA methylation data in predicting survival of KIRC patients. Our analyses show that expression and methylation data have statistically significant predictive powers compared to a random guess, but do not perform better than predictions on clinical data alone. However, the integration of molecular data with clinical variables resulted in improved predictions. We present a set of survival associated genomic loci that could potentially be employed as clinically useful biomarkers.\n\n\nCONCLUSIONS\nOur study evaluates the survival prediction of different large-scale molecular data of KIRC patients and describes the prognostic relevance of such data over clinical-variable-only models. It also demonstrates the survival prognostic importance of methylation alterations in KIRC tumors and points to the potential of epigenetic modulators in KIRC treatment.\n\n\nREVIEWERS\nAn extended abstract of this research paper was selected for the CAMDA Satellite Meeting to ISMB 2015 by the CAMDA Programme Committee. The full research paper then underwent one round of Open Peer Review under a responsible CAMDA Programme Committee member, Djork-Arné Clevert, PhD (Bayer AG, Germany). Open Peer Review was provided by Martin Otava, PhD (Janssen Pharmaceutica, Belgium) and Hendrik Luuk, PhD (The Centre for Disease Models and Biomedical Imaging, University of Tartu, Estonia). The Reviewer comments section shows the full reviews and author responses."
},
{
"pmid": "26109056",
"title": "Comparison of RNA-seq and microarray-based models for clinical endpoint prediction.",
"abstract": "BACKGROUND\nGene expression profiling is being widely applied in cancer research to identify biomarkers for clinical endpoint prediction. Since RNA-seq provides a powerful tool for transcriptome-based applications beyond the limitations of microarrays, we sought to systematically evaluate the performance of RNA-seq-based and microarray-based classifiers in this MAQC-III/SEQC study for clinical endpoint prediction using neuroblastoma as a model.\n\n\nRESULTS\nWe generate gene expression profiles from 498 primary neuroblastomas using both RNA-seq and 44 k microarrays. Characterization of the neuroblastoma transcriptome by RNA-seq reveals that more than 48,000 genes and 200,000 transcripts are being expressed in this malignancy. We also find that RNA-seq provides much more detailed information on specific transcript expression patterns in clinico-genetic neuroblastoma subgroups than microarrays. To systematically compare the power of RNA-seq and microarray-based models in predicting clinical endpoints, we divide the cohort randomly into training and validation sets and develop 360 predictive models on six clinical endpoints of varying predictability. Evaluation of factors potentially affecting model performances reveals that prediction accuracies are most strongly influenced by the nature of the clinical endpoint, whereas technological platforms (RNA-seq vs. microarrays), RNA-seq data analysis pipelines, and feature levels (gene vs. transcript vs. exon-junction level) do not significantly affect performances of the models.\n\n\nCONCLUSIONS\nWe demonstrate that RNA-seq outperforms microarrays in determining the transcriptomic characteristics of cancer, while RNA-seq and microarray-based models perform similarly in clinical endpoint prediction. Our findings may be valuable to guide future studies on the development of gene expression-based predictive models and their implementation in clinical practice."
},
{
"pmid": "23190475",
"title": "Bioinformatics clouds for big data manipulation.",
"abstract": "UNLABELLED\nAs advances in life sciences and information technology bring profound influences on bioinformatics due to its interdisciplinary nature, bioinformatics is experiencing a new leap-forward from in-house computing infrastructure into utility-supplied cloud computing delivered over the Internet, in order to handle the vast quantities of biological data generated by high-throughput experimental technologies. Albeit relatively new, cloud computing promises to address big data storage and analysis issues in the bioinformatics field. Here we review extant cloud-based services in bioinformatics, classify them into Data as a Service (DaaS), Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS), and present our perspectives on the adoption of cloud computing in bioinformatics.\n\n\nREVIEWERS\nThis article was reviewed by Frank Eisenhaber, Igor Zhulin, and Sandor Pongor."
},
{
"pmid": "29615097",
"title": "Multi-omics integration for neuroblastoma clinical endpoint prediction.",
"abstract": "BACKGROUND\nHigh-throughput methodologies such as microarrays and next-generation sequencing are routinely used in cancer research, generating complex data at different omics layers. The effective integration of omics data could provide a broader insight into the mechanisms of cancer biology, helping researchers and clinicians to develop personalized therapies.\n\n\nRESULTS\nIn the context of CAMDA 2017 Neuroblastoma Data Integration challenge, we explore the use of Integrative Network Fusion (INF), a bioinformatics framework combining a similarity network fusion with machine learning for the integration of multiple omics data. We apply the INF framework for the prediction of neuroblastoma patient outcome, integrating RNA-Seq, microarray and array comparative genomic hybridization data. We additionally explore the use of autoencoders as a method to integrate microarray expression and copy number data.\n\n\nCONCLUSIONS\nThe INF method is effective for the integration of multiple data sources providing compact feature signatures for patient classification with performances comparable to other methods. Latent space representation of the integrated data provided by the autoencoder approach gives promising results, both by improving classification on survival endpoints and by providing means to discover two groups of patients characterized by distinct overall survival (OS) curves.\n\n\nREVIEWERS\nThis article was reviewed by Djork-Arné Clevert and Tieliu Shi."
},
{
"pmid": "29880025",
"title": "Predicting clinical outcome of neuroblastoma patients using an integrative network-based approach.",
"abstract": "BACKGROUND\nOne of the main current challenges in computational biology is to make sense of the huge amounts of multidimensional experimental data that are being produced. For instance, large cohorts of patients are often screened using different high-throughput technologies, effectively producing multiple patient-specific molecular profiles for hundreds or thousands of patients.\n\n\nRESULTS\nWe propose and implement a network-based method that integrates such patient omics data into Patient Similarity Networks. Topological features derived from these networks were then used to predict relevant clinical features. As part of the 2017 CAMDA challenge, we have successfully applied this strategy to a neuroblastoma dataset, consisting of genomic and transcriptomic data. In particular, we observe that models built on our network-based approach perform at least as well as state of the art models. We furthermore explore the effectiveness of various topological features and observe, for instance, that redundant centrality metrics can be combined to build more powerful models.\n\n\nCONCLUSION\nWe demonstrate that the networks inferred from omics data contain clinically relevant information and that patient clinical outcomes can be predicted using only network topological data.\n\n\nREVIEWERS\nThis article was reviewed by Yang-Yu Liu, Tomislav Smuc and Isabel Nepomuceno."
},
{
"pmid": "1391987",
"title": "The Nottingham Prognostic Index in primary breast cancer.",
"abstract": "In 1982 we constructed a prognostic index for patients with primary, operable breast cancer. This index was based on a retrospective analysis of 9 factors in 387 patients. Only 3 of the factors (tumour size, stage of disease, and tumour grade) remained significant on multivariate analysis. The index was subsequently validated in a prospective study of 320 patients. We now present the results of applying this prognostic index to all of the first 1,629 patients in our series of operable breast cancer up to the age of 70. We have used the index to define three subsets of patients with different chances of dying from breast cancer: 1) good prognosis, comprising 29% of patients with 80% 15-year survival; 2) moderate prognosis, 54% of patients with 42% 15-year survival; 3) poor prognosis, 17% of patients with 13% 15-year survival. The 15-year survival of an age-matched female population was 83%."
},
{
"pmid": "3882734",
"title": "Treatment selection for cancer patients: application of statistical decision theory to the treatment of advanced ovarian cancer.",
"abstract": "Optimal treatment selection for patients with chronic disease, especially advanced cancer, requires careful consideration in weighing risks and benefits of each therapy. The application of statistical decision theory to such problems provides an explicit and systematic means of combining information on risks and benefits with individual patient preferences on quality-of-life issues. This paper evaluates the strengths and weaknesses of this methodology by using, as an example, treatment selection in advanced ovarian cancer. Possible treatment options and the major consequences of each are first outlined on a decision tree. The probability of various outcomes is estimated from the literature and methods for assessing the relative value or utility of each outcome are illustrated by interviews with 9 volunteers. Based on decision analysis, the recommended treatment for advanced ovarian cancer is found to be highly dependent on survival estimates but far less dependent on other probability estimates or the method of obtaining utilities. Individual preferences are also found to influence the treatment choice. The analysis illustrates that an important strength in using decision theory is its ability to identify key factors in the decision through sensitivity analysis. This may help both the physician selecting treatment and the investigator planning clinical trials which compare these therapies. In addition, this method can help in planning a trial's sample size by determining what survival difference between therapeutic strategies is worth detecting. Some problems identified with this methodology include the need for several simplifying assumptions and the difficulties in assessing individual preferences. On balance, we believe decision theory in this setting can play a useful role in complementing the physician's clinical judgement."
}
] |
Frontiers in Neurorobotics | 31803041 | PMC6873106 | 10.3389/fnbot.2019.00095 | Autonomous Sequence Generation for a Neural Dynamic Robot: Scene Perception, Serial Order, and Object-Oriented Movement | Neurally inspired robotics already has a long history that includes reactive systems emulating reflexes, neural oscillators to generate movement patterns, and neural networks as trainable filters for high-dimensional sensory information. Neural inspiration has been less successful at the level of cognition. Decision-making, planning, building and using memories, for instance, are more often addressed in terms of computational algorithms than through neural process models. To move neural process models beyond reactive behavior toward cognition, the capacity to autonomously generate sequences of processing steps is critical. We review a potential solution to this problem that is based on strongly recurrent neural networks described as neural dynamic systems. Their stable states perform elementary motor or cognitive functions while coupled to sensory inputs. The state of the neural dynamics transitions to a new motor or cognitive function when a previously stable neural state becomes unstable. Only when a neural robotic system is capable of acting autonomously does it become useful to a human user. We demonstrate how a neural dynamic architecture that supports autonomous sequence generation can engage in such interaction. A human user presents colored objects to the robot in a particular order, thus defining a serial order of color concepts. The user then exposes the system to a visual scene that contains the colored objects in a new spatial arrangement. The robot autonomously builds a scene representation by sequentially bringing objects into the attentional foreground. Scene memory updates if the scene changes. The robot performs visual search and then reaches for the objects in the instructed serial order. In doing so, the robot generalizes across time and space, is capable of waiting when an element is missing, and updates its action plans online when the scene changes. The entire flow of behavior emerges from a time-continuous neural dynamics without any controlling or supervisory algorithm. | 5.3. Related WorkA number of groups have addressed object-directed action and the requisite perception in a similar neural-dynamic framework (Fard et al., 2015; Strauss et al., 2015; Tan et al., 2016). Serial order and the specific neural mechanism for sequencing neural activation patterns were not yet part of these efforts, which otherwise overlap with ours. A number of neural dynamic models of serial order or sequencing have been proposed (e.g., Deco and Rolls, 2005), but have not been brought into robotic problems. One reason may be the lack of a control structure comparable to our condition of satisfaction, so that the sequences unfold in neural dynamics at a given rhythm that is not synchronized with perceptual events. Such systems would not remain tied to the actual performance of a sequence in the world. Related attempts to model in neural terms the entire chain from perception to action have been made for robotic vehicles. For instance, Alexander and Sporns (2002) enabled a vehicle to learn from reward a task directed at objects that a robot vehicle was able to pick up. Gurney et al. (2004) realized a neurally inspired system that organized the behavior of the organism (This paper is useful also for its careful discussion of different levels of description for neurally inspired approaches to robotics).
Both systems are conceptually in the fold of behavior-based robotics, in that the sequences of actions emerge from a neural architecture, modulated by adaptation. To our knowledge, systems of that kind have not yet been shown to be able to form serial order memories and acquire scene representations. A different style of neural robotic model for cognition is SPAUN (Eliasmith et al., 2012). This is an approach based on the Neural Engineering Framework (Eliasmith, 2005), which is able to implement any neural dynamic model in a spiking neural network. Thus, models based on DFT may, in principle, be implemented within this framework. On the other hand, SPAUN has also been applied to approaches to cognition that may not be compatible with the principles of DFT, in particular, the Vector Symbolic Architecture (VSA) framework that goes back to Smolensky, Kanerva, Plate, and Gayler (see Levy and Gayler, 2008 for a review). In VSA, concepts are mapped onto high-dimensional vectors that enable processing these concepts in the manner of symbol manipulation. Whether this approach is entirely free of non-neural algorithmic steps is not clear to us.
"911931",
"28878645",
"17831441",
"3281179",
"20889368",
"15811241",
"15901399",
"23197532",
"12088245",
"26559472",
"15271492",
"10074679",
"18555958",
"23148415",
"28303100",
"28532370",
"26017442",
"28503145",
"27853431",
"21227083",
"25719670",
"20800989",
"12689379",
"25462637",
"3281253",
"17224612",
"26667353",
"16942860",
"4767470"
] | [
{
"pmid": "28878645",
"title": "Adaptive Control Strategies for Interlimb Coordination in Legged Robots: A Review.",
"abstract": "Walking animals produce adaptive interlimb coordination during locomotion in accordance with their situation. Interlimb coordination is generated through the dynamic interactions of the neural system, the musculoskeletal system, and the environment, although the underlying mechanisms remain unclear. Recently, investigations of the adaptation mechanisms of living beings have attracted attention, and bio-inspired control systems based on neurophysiological findings regarding sensorimotor interactions are being developed for legged robots. In this review, we introduce adaptive interlimb coordination for legged robots induced by various factors (locomotion speed, environmental situation, body properties, and task). In addition, we show characteristic properties of adaptive interlimb coordination, such as gait hysteresis and different time-scale adaptations. We also discuss the underlying mechanisms and control strategies to achieve adaptive interlimb coordination and the design principle for the control system of legged robots."
},
{
"pmid": "17831441",
"title": "New approaches to robotics.",
"abstract": "In order to build autonomous robots that can carry out useful work in unstructured environments new approaches have been developed to building intelligent systems. The relationship to traditional academic robotics and traditional artificial intelligence is examined. In the new approaches a tight coupling of sensing to action produces architectures for intelligence that are networks of simple computational elements which are quite broad, but not very deep. Recent work within this approach has demonstrated the use of representations, expectations, plans, goals, and learning, but without resorting to the traditional uses of central, abstractly manipulable or symbolic representations. Perception within these systems is often an active process, and the dynamics of the interactions with the world are extremely important. The question of how to evaluate and compare the new to traditional work still provokes vigorous discussion."
},
{
"pmid": "20889368",
"title": "Population clocks: motor timing with neural dynamics.",
"abstract": "An understanding of sensory and motor processing will require elucidation of the mechanisms by which the brain tells time. Open questions relate to whether timing relies on dedicated or intrinsic mechanisms and whether distinct mechanisms underlie timing across scales and modalities. Although experimental and theoretical studies support the notion that neural circuits are intrinsically capable of sensory timing on short scales, few general models of motor timing have been proposed. For one class of models, population clocks, it is proposed that time is encoded in the time-varying patterns of activity of a population of neurons. We argue that population clocks emerge from the internal dynamics of recurrently connected networks, are biologically realistic and account for many aspects of motor timing."
},
{
"pmid": "15811241",
"title": "Sequential memory: a putative neural and synaptic dynamical mechanism.",
"abstract": "A key issue in the neurophysiology of cognition is the problem of sequential learning. Sequential learning refers to the ability to encode and represent the temporal order of discrete elements occurring in a sequence. We show that the short-term memory for a sequence of items can be implemented in an autoassociation neural network. Each item is one of the attractor states of the network. The autoassociation network is implemented at the level of integrate-and-fire neurons so that the contributions of different biophysical mechanisms to sequence learning can be investigated. It is shown that if it is a property of the synapses or neurons that support each attractor state that they adapt, then every time the network is made quiescent (e.g., by inhibition), then the attractor state that emerges next is the next item in the sequence. We show with numerical simulations implementations of the mechanisms using (1) a sodium inactivation-based spike-frequency-adaptation mechanism, (2) a Ca(2+)-activated K+ current, and (3) short-term synaptic depression, with sequences of up to three items. The network does not need repeated training on a particular sequence and will repeat the items in the order that they were last presented. The time between the items in a sequence is not fixed, allowing the items to be read out as required over a period of up to many seconds. The network thus uses adaptation rather than associative synaptic modification to recall the order of the items in a recently presented sequence."
},
{
"pmid": "15901399",
"title": "A unified approach to building and controlling spiking attractor networks.",
"abstract": "Extending work in Eliasmith and Anderson (2003), we employ a general framework to construct biologically plausible simulations of the three classes of attractor networks relevant for biological systems: static (point, line, ring, and plane) attractors, cyclic attractors, and chaotic attractors. We discuss these attractors in the context of the neural systems that they have been posited to help explain: eye control, working memory, and head direction; locomotion (specifically swimming); and olfaction, respectively. We then demonstrate how to introduce control into these models. The addition of control shows how attractor networks can be used as subsystems in larger neural systems, demonstrates how a much larger class of networks can be related to attractor networks, and makes it clear how attractor networks can be exploited for various information processing tasks in neurobiological systems."
},
{
"pmid": "23197532",
"title": "A large-scale model of the functioning brain.",
"abstract": "A central challenge for cognitive and systems neuroscience is to relate the incredibly complex behavior of animals to the equally complex activity of their brains. Recently described, large-scale neural models have not bridged this gap between neural activity and biological function. In this work, we present a 2.5-million-neuron model of the brain (called \"Spaun\") that bridges this gap by exhibiting many different behaviors. The model is presented only with visual image sequences, and it draws all of its responses with a physically modeled arm. Although simplified, the model captures many aspects of neuroanatomy, neurophysiology, and psychological behavior, which we demonstrate via eight diverse tasks."
},
{
"pmid": "12088245",
"title": "Dynamic field theory of movement preparation.",
"abstract": "A theoretical framework for understanding movement preparation is proposed. Movement parameters are represented by activation fields, distributions of activation defined over metric spaces. The fields evolve under the influence of various sources of localized input, representing information about upcoming movements. Localized patterns of activation self-stabilize through cooperative and competitive interactions within the fields. The task environment is represented by a 2nd class of fields, which preshape the movement parameter representation. The model accounts for a sizable body of empirical findings on movement initiation (continuous and graded nature of movement preparation, dependence on the metrics of the task, stimulus uncertainty effect, stimulus-response compatibility effects, Simon effect, precuing paradigm, and others) and suggests new ways of exploring the structure of motor representations."
},
{
"pmid": "26559472",
"title": "Modeling human target reaching with an adaptive observer implemented with dynamic neural fields.",
"abstract": "Humans can point fairly accurately to memorized states when closing their eyes despite slow or even missing sensory feedback. It is also common that the arm dynamics changes during development or from injuries. We propose a biologically motivated implementation of an arm controller that includes an adaptive observer. Our implementation is based on the neural field framework, and we show how a path integration mechanism can be trained from few examples. Our results illustrate successful generalization of path integration with a dynamic neural field by which the robotic arm can move in arbitrary directions and velocities. Also, by adapting the strength of the motor effect the observer implicitly learns to compensate an image acquisition delay in the sensory system. Our dynamic implementation of an observer successfully guides the arm toward the target in the dark, and the model produces movements with a bell-shaped velocity profile, consistent with human behavior data."
},
{
"pmid": "15271492",
"title": "Computational models of the basal ganglia: from robots to membranes.",
"abstract": "With the rapid accumulation of neuroscientific data comes a pressing need to develop models that can explain the computational processes performed by the basal ganglia. Relevant biological information spans a range of structural levels, from the activity of neuronal membranes to the role of the basal ganglia in overt behavioural control. This viewpoint presents a framework for understanding the aims, limitations and methods for testing of computational models across all structural levels. We identify distinct modelling strategies that can deliver important and complementary insights into the nature of problems the basal ganglia have evolved to solve, and describe methods that are used to solve them."
},
{
"pmid": "10074679",
"title": "High-level scene perception.",
"abstract": "Three areas of high-level scene perception research are reviewed. The first concerns the role of eye movements in scene perception, focusing on the influence of ongoing cognitive processing on the position and duration of fixations in a scene. The second concerns the nature of the scene representation that is retained across a saccade and other brief time intervals during ongoing scene perception. Finally, we review research on the relationship between scene and object identification, focusing particularly on whether the meaning of a scene influences the identification of constituent objects."
},
{
"pmid": "18555958",
"title": "Central pattern generators for locomotion control in animals and robots: a review.",
"abstract": "The problem of controlling locomotion is an area in which neuroscience and robotics can fruitfully interact. In this article, I will review research carried out on locomotor central pattern generators (CPGs), i.e. neural circuits capable of producing coordinated patterns of high-dimensional rhythmic output signals while receiving only simple, low-dimensional, input signals. The review will first cover neurobiological observations concerning locomotor CPGs and their numerical modelling, with a special focus on vertebrates. It will then cover how CPG models implemented as neural networks or systems of coupled oscillators can be used in robotics for controlling the locomotion of articulated robots. The review also presents how robots can be used as scientific tools to obtain a better understanding of the functioning of biological CPGs. Finally, various methods for designing CPGs to control specific modes of locomotion will be briefly reviewed. In this process, I will discuss different types of CPG models, the pros and cons of using CPGs with robots, and the pros and cons of using robots as scientific tools. Open research topics both in biology and in robotics will also be discussed."
},
{
"pmid": "23148415",
"title": "Dynamical movement primitives: learning attractor models for motor behaviors.",
"abstract": "Nonlinear dynamical systems have been used in many disciplines to model complex behaviors, including biological motor control, robotics, perception, economics, traffic prediction, and neuroscience. While often the unexpected emergent behavior of nonlinear systems is the focus of investigations, it is of equal importance to create goal-directed behavior (e.g., stable locomotion from a system of coupled oscillators under perceptual guidance). Modeling goal-directed behavior with nonlinear systems is, however, rather difficult due to the parameter sensitivity of these systems, their complex phase transitions in response to subtle parameter changes, and the difficulty of analyzing and predicting their long-term behavior; intuition and time-consuming parameter tuning play a major role. This letter presents and reviews dynamical movement primitives, a line of research for modeling attractor behaviors of autonomous nonlinear dynamical systems with the help of statistical learning techniques. The essence of our approach is to start with a simple dynamical system, such as a set of linear differential equations, and transform those into a weakly nonlinear system with prescribed attractor dynamics by means of a learnable autonomous forcing term. Both point attractors and limit cycle attractors of almost arbitrary complexity can be generated. We explain the design principle of our approach and evaluate its properties in several example applications in motor control and robotics."
},
{
"pmid": "28303100",
"title": "A Neural Dynamic Architecture for Reaching and Grasping Integrates Perception and Movement Generation and Enables On-Line Updating.",
"abstract": "Reaching for objects and grasping them is a fundamental skill for any autonomous robot that interacts with its environment. Although this skill seems trivial to adults, who effortlessly pick up even objects they have never seen before, it is hard for other animals, for human infants, and for most autonomous robots. Any time during movement preparation and execution, human reaching movement are updated if the visual scene changes (with a delay of about 100 ms). The capability for online updating highlights how tightly perception, movement planning, and movement generation are integrated in humans. Here, we report on an effort to reproduce this tight integration in a neural dynamic process model of reaching and grasping that covers the complete path from visual perception to movement generation within a unified modeling framework, Dynamic Field Theory. All requisite processes are realized as time-continuous dynamical systems that model the evolution in time of neural population activation. Population level neural processes bring about the attentional selection of objects, the estimation of object shape and pose, and the mapping of pose parameters to suitable movement parameters. Once a target object has been selected, its pose parameters couple into the neural dynamics of movement generation so that changes of pose are propagated through the architecture to update the performed movement online. Implementing the neural architecture on an anthropomorphic robot arm equipped with a Kinect sensor, we evaluate the model by grasping wooden objects. Their size, shape, and pose are estimated from a neural model of scene perception that is based on feature fields. The sequential organization of a reach and grasp act emerges from a sequence of dynamic instabilities within a neural dynamics of behavioral organization, that effectively switches the neural controllers from one phase of the action to the next. Trajectory formation itself is driven by a dynamical systems version of the potential field approach. We highlight the emergent capacity for online updating by showing that a shift or rotation of the object during the reaching phase leads to the online adaptation of the movement plan and successful completion of the grasp."
},
{
"pmid": "28532370",
"title": "Deep Neural Networks: A New Framework for Modeling Biological Vision and Brain Information Processing.",
"abstract": "Recent advances in neural network modeling have enabled major strides in computer vision and other artificial intelligence applications. Human-level visual recognition abilities are coming within reach of artificial systems. Artificial neural networks are inspired by the brain, and their computations could be implemented in biological neurons. Convolutional feedforward networks, which now dominate computer vision, take further inspiration from the architecture of the primate visual hierarchy. However, the current models are designed with engineering goals, not to model brain computations. Nevertheless, initial studies comparing internal representations between these models and primate brains find surprisingly similar representational spaces. With human-level performance no longer out of reach, we are entering an exciting new era, in which we will be able to build biologically faithful feedforward and recurrent computational models of how biological brains perform high-level feats of intelligence, including vision."
},
{
"pmid": "26017442",
"title": "Deep learning.",
"abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech."
},
{
"pmid": "28503145",
"title": "A Neural-Dynamic Architecture for Concurrent Estimation of Object Pose and Identity.",
"abstract": "Handling objects or interacting with a human user about objects on a shared tabletop requires that objects be identified after learning from a small number of views and that object pose be estimated. We present a neurally inspired architecture that learns object instances by storing features extracted from a single view of each object. Input features are color and edge histograms from a localized area that is updated during processing. The system finds the best-matching view for the object in a novel input image while concurrently estimating the object's pose, aligning the learned view with current input. The system is based on neural dynamics, computationally operating in real time, and can handle dynamic scenes directly off live video input. In a scenario with 30 everyday objects, the system achieves recognition rates of 87.2% from a single training view for each object, while also estimating pose quite precisely. We further demonstrate that the system can track moving objects, and that it can segment the visual array, selecting and recognizing one object while suppressing input from another known object in the immediate vicinity. Evaluation on the COIL-100 dataset, in which objects are depicted from different viewing angles, revealed recognition rates of 91.1% on the first 30 objects, each learned from four training views."
},
{
"pmid": "27853431",
"title": "Developing Dynamic Field Theory Architectures for Embodied Cognitive Systems with cedar.",
"abstract": "Embodied artificial cognitive systems, such as autonomous robots or intelligent observers, connect cognitive processes to sensory and effector systems in real time. Prime candidates for such embodied intelligence are neurally inspired architectures. While components such as forward neural networks are well established, designing pervasively autonomous neural architectures remains a challenge. This includes the problem of tuning the parameters of such architectures so that they deliver specified functionality under variable environmental conditions and retain these functions as the architectures are expanded. The scaling and autonomy problems are solved, in part, by dynamic field theory (DFT), a theoretical framework for the neural grounding of sensorimotor and cognitive processes. In this paper, we address how to efficiently build DFT architectures that control embodied agents and how to tune their parameters so that the desired cognitive functions emerge while such agents are situated in real environments. In DFT architectures, dynamic neural fields or nodes are assigned dynamic regimes, that is, attractor states and their instabilities, from which cognitive function emerges. Tuning thus amounts to determining values of the dynamic parameters for which the components of a DFT architecture are in the specified dynamic regime under the appropriate environmental conditions. The process of tuning is facilitated by the software framework cedar, which provides a graphical interface to build and execute DFT architectures. It enables to change dynamic parameters online and visualize the activation states of any component while the agent is receiving sensory inputs in real time. Using a simple example, we take the reader through the workflow of conceiving of DFT architectures, implementing them on embodied agents, tuning their parameters, and assessing performance while the system is coupled to real sensory inputs."
},
{
"pmid": "21227083",
"title": "Behavior-based robotics as a tool for synthesis of artificial behavior and analysis of natural behavior.",
"abstract": "Work in behavior-based systems focuses on functional modeling, that is, the synthesis of life-like and/or biologically inspired behavior that is robust, repeatable and adaptive. Inspiration from cognitive science, neuroscience and biology drives the development of new methods and models in behavior-based robotics, and the results tie together several related fields including artificial life, evolutionary computation, and multi-agent systems. Ideas from artificial intelligence and engineering continue to be explored actively and applied to behavior-based robots as their role in animal modeling and practical applications is being developed."
},
{
"pmid": "25719670",
"title": "Human-level control through deep reinforcement learning.",
"abstract": "The theory of reinforcement learning provides a normative account, deeply rooted in psychological and neuroscientific perspectives on animal behaviour, of how agents may optimize their control of an environment. To use reinforcement learning successfully in situations approaching real-world complexity, however, agents are confronted with a difficult task: they must derive efficient representations of the environment from high-dimensional sensory inputs, and use these to generalize past experience to new situations. Remarkably, humans and other animals seem to solve this problem through a harmonious combination of reinforcement learning and hierarchical sensory processing systems, the former evidenced by a wealth of neural data revealing notable parallels between the phasic signals emitted by dopaminergic neurons and temporal difference reinforcement learning algorithms. While reinforcement learning agents have achieved some successes in a variety of domains, their applicability has previously been limited to domains in which useful features can be handcrafted, or to domains with fully observed, low-dimensional state spaces. Here we use recent advances in training deep neural networks to develop a novel artificial agent, termed a deep Q-network, that can learn successful policies directly from high-dimensional sensory inputs using end-to-end reinforcement learning. We tested this agent on the challenging domain of classic Atari 2600 games. We demonstrate that the deep Q-network agent, receiving only the pixels and the game score as inputs, was able to surpass the performance of all previous algorithms and achieve a level comparable to that of a professional human games tester across a set of 49 games, using the same algorithm, network architecture and hyperparameters. This work bridges the divide between high-dimensional sensory inputs and actions, resulting in the first artificial agent that is capable of learning to excel at a diverse array of challenging tasks."
},
{
"pmid": "20800989",
"title": "An embodied account of serial order: how instabilities drive sequence generation.",
"abstract": "Learning and generating serially ordered sequences of actions is a core component of cognition both in organisms and in artificial cognitive systems. When these systems are embodied and situated in partially unknown environments, specific constraints arise for any neural mechanism of sequence generation. In particular, sequential action must resist fluctuating sensory information and be capable of generating sequences in which the individual actions may vary unpredictably in duration. We provide a solution to this problem within the framework of Dynamic Field Theory by proposing an architecture in which dynamic neural networks create stable states at each stage of a sequence. These neural attractors are destabilized in a cascade of bifurcations triggered by a neural representation of a condition of satisfaction for each action. We implement the architecture on a robotic vehicle in a color search task, demonstrating both sequence learning and sequence generation on the basis of low-level sensory information."
},
{
"pmid": "12689379",
"title": "Computational approaches to motor learning by imitation.",
"abstract": "Movement imitation requires a complex set of mechanisms that map an observed movement of a teacher onto one's own movement apparatus. Relevant problems include movement recognition, pose estimation, pose tracking, body correspondence, coordinate transformation from external to egocentric space, matching of observed against previously learned movement, resolution of redundant degrees-of-freedom that are unconstrained by the observation, suitable movement representations for imitation, modularization of motor control, etc. All of these topics by themselves are active research problems in computational and neurobiological sciences, such that their combination into a complete imitation system remains a daunting undertaking-indeed, one could argue that we need to understand the complete perception-action loop. As a strategy to untangle the complexity of imitation, this paper will examine imitation purely from a computational point of view, i.e. we will review statistical and mathematical approaches that have been suggested for tackling parts of the imitation problem, and discuss their merits, disadvantages and underlying principles. Given the focus on action recognition of other contributions in this special issue, this paper will primarily emphasize the motor side of imitation, assuming that a perceptual system has already identified important features of a demonstrated movement and created their corresponding spatial information. Based on the formalization of motor control in terms of control policies and their associated performance criteria, useful taxonomies of imitation learning can be generated that clarify different approaches and future research directions."
},
{
"pmid": "25462637",
"title": "Deep learning in neural networks: an overview.",
"abstract": "In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks."
},
{
"pmid": "3281253",
"title": "Dynamic pattern generation in behavioral and neural systems.",
"abstract": "In the search for principles of pattern generation in complex biological systems, an operational approach is presented that embraces both theory and experiment. The central mathematical concepts of self-organization in nonequilibrium systems (including order parameter dynamics, stability, fluctuations, and time scales) are used to show how a large number of empirically observed features of temporal patterns can be mapped onto simple low-dimensional (stochastic, nonlinear) dynamical laws that are derivable from lower levels of description. The theoretical framework provides a language and a strategy, accompanied by new observables, that may afford an understanding of dynamic patterns at several scales of analysis (including behavioral patterns, neural networks, and individual neurons) and the linkage among them."
},
{
"pmid": "17224612",
"title": "Robust object recognition with cortex-like mechanisms.",
"abstract": "We introduce a new general framework for the recognition of complex visual scenes, which is motivated by biology: We describe a hierarchical system that closely follows the organization of visual cortex and builds an increasingly complex and invariant feature representation by alternating between a template matching and a maximum pooling operation. We demonstrate the strength of the approach on a range of recognition tasks: From invariant single object recognition in clutter to multiclass categorization problems and complex scene understanding tasks that rely on the recognition of both shape-based as well as texture-based objects. Given the biological constraints that the system had to satisfy, the approach performs surprisingly well: It has the capability of learning from only a few training examples and competes with state-of-the-art systems. We also discuss the existence of a universal, redundant dictionary of features that could handle the recognition of most object categories. In addition to its relevance for computer vision, the success of this approach suggests a plausibility proof for a class of feedforward models of object recognition in cortex."
},
{
"pmid": "26667353",
"title": "Choice reaching with a LEGO arm robot (CoRLEGO): The motor system guides visual attention to movement-relevant information.",
"abstract": "We present an extension of a neurobiologically inspired robotics model, termed CoRLEGO (Choice reaching with a LEGO arm robot). CoRLEGO models experimental evidence from choice reaching tasks (CRT). In a CRT participants are asked to rapidly reach and touch an item presented on the screen. These experiments show that non-target items can divert the reaching movement away from the ideal trajectory to the target item. This is seen as evidence attentional selection of reaching targets can leak into the motor system. Using competitive target selection and topological representations of motor parameters (dynamic neural fields) CoRLEGO is able to mimic this leakage effect. Furthermore if the reaching target is determined by its colour oddity (i.e. a green square among red squares or vice versa), the reaching trajectories become straighter with repetitions of the target colour (colour streaks). This colour priming effect can also be modelled with CoRLEGO. The paper also presents an extension of CoRLEGO. This extension mimics findings that transcranial direct current stimulation (tDCS) over the motor cortex modulates the colour priming effect (Woodgate et al., 2015). The results with the new CoRLEGO suggest that feedback connections from the motor system to the brain's attentional system (parietal cortex) guide visual attention to extract movement-relevant information (i.e. colour) from visual stimuli. This paper adds to growing evidence that there is a close interaction between the motor system and the attention system. This evidence contradicts the traditional conceptualization of the motor system as the endpoint of a serial chain of processing stages. At the end of the paper we discuss CoRLEGO's predictions and also lessons for neurobiologically inspired robotics emerging from this work."
},
{
"pmid": "16942860",
"title": "The time course of saccadic decision making: dynamic field theory.",
"abstract": "Making a saccadic eye movement involves two decisions, the decision to initiate the saccade and the selection of the visual target of the saccade. Here we provide a theoretical account for the time-courses of these two processes, whose instabilities are the basis of decision making. We show how the cross-over from spatial averaging for fast saccades to selection for slow saccades arises from the balance between excitatory and inhibitory processes. Initiating a saccade involves overcoming fixation, as can be observed in the countermanding paradigm, which we model accounting both for the temporal evolution of the suppression probability and its dependence on fixation activity. The interaction between the two forms of decision making is demonstrated by predicting how the cross-over from averaging to selection depends on the fixation stimulus in gap-step-overlap paradigms. We discuss how the activation dynamics of our model may be mapped onto neuronal structures including the motor map and the fixation cells in superior colliculus."
}
] |
Frontiers in Bioengineering and Biotechnology | 31799243 | PMC6874164 | 10.3389/fbioe.2019.00316 | On the Visuomotor Behavior of Amputees and Able-Bodied People During Grasping | Visual attention is often predictive for future actions in humans. In manipulation tasks, the eyes tend to fixate an object of interest even before the reach-to-grasp is initiated. Some recent studies have proposed to exploit this anticipatory gaze behavior to improve the control of dexterous upper limb prostheses. This requires a detailed understanding of visuomotor coordination to determine in which temporal window gaze may provide helpful information. In this paper, we verify and quantify the gaze and motor behavior of 14 transradial amputees who were asked to grasp and manipulate common household objects with their missing limb. For comparison, we also include data from 30 able-bodied subjects who executed the same protocol with their right arm. The dataset contains gaze, first-person video, angular velocities of the head, and electromyography and accelerometry of the forearm. To analyze the large amount of video, we developed a procedure based on recent deep learning methods to automatically detect and segment all objects of interest. This allowed us to accurately determine the pixel distances between the gaze point, the target object, and the limb in each individual frame. Our analysis shows a clear coordination between the eyes and the limb in the reach-to-grasp phase, confirming that both intact and amputated subjects precede the grasp with their eyes by more than 500 ms. Furthermore, we note that the gaze behavior of amputees was remarkably similar to that of the able-bodied control group, despite their inability to physically manipulate the objects. | 4.1. Visuomotor Strategy and Comparison With Related Work. In section 3.2, we presented the results of eye, head, and limb coordination during reaching and grasping. The eyes are the first to react to the vocal stimulus by exhibiting increasing saccade-related activity, leading to a fixation on the target in about 150 ms. When the eyes start moving, the head also follows almost immediately. Such short delays between movement of the eyes and the head have been reported in the literature, ranging from 10 ms to 100 ms during a block-copying task (Smeets et al., 1996) or in reaction to visual stimuli (Goldring et al., 1996; Di Cesare et al., 2013). This behavior is, however, strongly dependent on the experimental setting, and even small variations therein can change the outcome. For instance, Pelz et al. (2001) found that, depending on the exercise's instruction, the head may either precede (by about 200 ms) or follow the eyes (by about 50 ms) in the same block-copying task. After the activation of the eyes and the head, we observe the movement onset of the arm 130 ms later. Similar values ranging from 170 ms to 300 ms were also reported by Smeets et al. (1996) and Pelz et al. (2001) in a block-copying task and by Belardinelli et al. (2016) in a pick and place task. Land et al. (1999) instead found a median delay of 0.56 s during a tea-making activity. Rather than movement onset, the time the hand takes to reach the target is more relevant for our purposes. For the intact subjects, the hand typically starts to occlude the target object around 500 ms after the first fixation. Although occlusion does not necessarily imply a completed grasp, especially given the first-person perspective, we do expect the grasp to follow not much later.
These results confirm that visual attention on objects anticipates manipulation. In previous studies concerning displacements (Johansson et al., 2001; Belardinelli et al., 2016; Lavoie et al., 2018) and grasping activities (Brouwer et al., 2009), a variable delay ranging from 0.53 s to 1.3 s was found between the eye and the hand. Also in these cases, the exact value of the delay depends on the characteristics of the experiment. In section 3.3, we concentrated on the visuomotor strategy adopted by amputated and able-bodied subjects to interact with the objects during three groups of functional tasks. We can characterize the strategies associated with these groups in terms of the types of fixations defined by Land et al. (1999) and Land and Hayhoe (2001), namely locating, directing, guiding, and checking. A fixation to locate is typically done at the beginning of an action, to mentally map the locations of the objects that are to be used. Instead, a fixation to direct is meant to detect an object that will be used immediately afterwards. Fixations to guide are usually multiple and occur when the gaze shifts among two or more objects that are approaching each other. Finally, there are long checking fixations to monitor the state of an action while waiting for its completion. The visual strategy of the in place actions is relatively straightforward. In these tasks, subjects initiate with a fixation to direct attention to the target object. Subsequently, their fixation remains on the manipulated object to check the correct execution of the task. Note that this visual attention seems focused on the target object rather than the subject's hand, as can be seen by comparing the gaze-target and gaze-limb distances in Figures 5A,B. Indeed, Land et al. (1999) noted that the hands themselves are rarely fixated. The lifting actions also start with a directing fixation to locate the object of interest. However, whereas the initial fixation is focused on the intended grasp location (cf. the left column in Figure 7), the gaze shifts upwards when the hand has grasped the object. This coincides with the transition from the directing fixation to visually checking the lifting action. This is in line with observations by Voudouris et al. (2018), who noted that people may fixate higher when grasping and lifting an object to direct their gaze to where the object will be in the future. Finally, displacement actions are the ones most investigated in the literature. Previous studies on pick and place tasks (Belardinelli et al., 2016; Lavoie et al., 2018) and on the block-copying task (Smeets et al., 1996; Pelz et al., 2001) fall in this category. In this case, we observe in Figure 5E that the gaze-target and gaze-limb distances have three minima for intact subjects, namely at the initial pick-up, at the destination, and at the release again at the initial position. All three minima indicate fixations that are meant to direct the approach of the hand, either for (1) grasping the object, (2) displacing it, or finally (3) releasing it. This behavior can clearly be seen for both intact and amputated subjects in the example in Figure 8. We also notice that the eyes did not wait for the completion of the pick-up action, moving instead toward the position of the destination around 200 ms in advance. This proactive role of the eyes was highlighted by Land et al. (1999), who measured the gaze moving on to the next object between 0 s and 1 s before the current object manipulation was terminated. Also Pelz et al.
(2001) observed the eyes departing from the target object 100 ms to 150 ms before the arrival of the hand. | [
"26818971",
"23408215",
"19271888",
"27597823",
"21600048",
"26556065",
"8891638",
"22183755",
"31517965",
"11517279",
"20667803",
"10755142",
"16516530",
"11718795",
"8008066",
"11100157",
"10605640",
"30029228",
"29540617",
"24891493",
"26529274",
"29580245",
"4633066",
"28925815",
"31029174",
"12478404",
"11545465",
"21397901",
"22345089",
"23206549",
"8817273",
"24758375",
"21622729",
"30167674"
] | [
{
"pmid": "26818971",
"title": "It's in the eyes: Planning precise manual actions before execution.",
"abstract": "It is well-known that our eyes typically fixate those objects in a scene, with which interactions are about to unfold. During manual interactions, our eyes usually anticipate the next subgoal and thus serve top-down, goal-driven information extraction requirements, probably driven by a schema-based task representation. On the other hand, motor control research concerning object manipulations has extensively demonstrated how grasping choices are often influenced by deeper considerations about the final goal of manual interactions. Here we show that also these deeper considerations are reflected in early eye fixation behavior, significantly before the hand makes contact with the object. In this study, subjects were asked to either pretend to drink out of the presented object or to hand it over to the experimenter. The objects were presented upright or upside down, thus affording a thumb-up (prone) or a thumb-down (supine) grasp. Eye fixation data show a clear anticipatory preference for the region where the index finger is going to be placed. Indeed, fixations highly correlate with the final index finger position, thus subserving the planning of the actual manual action. Moreover, eye fixations reveal several orders of manual planning: Fixation distributions do not only depend on the object orientation but also on the interaction task. These results suggest a fully embodied, bidirectional sensorimotor coupling of eye-hand coordination: The eyes help in planning and determining the actual manual object interaction, considering where to grasp the presented object in the light of the orientation and type of the presented object and the actual manual task to be accomplished with the object."
},
{
"pmid": "23408215",
"title": "Determining skill level in myoelectric prosthesis use with multiple outcome measures.",
"abstract": "To obtain more insight into how the skill level of an upper-limb myoelectric prosthesis user is composed, the current study aimed to (1) portray prosthetic handling at different levels of description, (2) relate results of the clinical level to kinematic measures, and (3) identify specific parameters in these measures that characterize the skill level of a prosthesis user. Six experienced transradial myoelectric prosthesis users performed a clinical test (Southampton Hand Assessment Procedure [SHAP]) and two grasping tasks. Kinematic measures were end point kinematics, joint angles, grasp force control, and gaze behavior. The results of the clinical and kinematic measures were in broad agreement with each other. Participants who scored higher on the SHAP showed overall better performance on the kinematic measures. They had smaller movement times, had better grip force control, and needed less visual attention on the hand. The results showed that time was a key parameter in prosthesis use and should be one of the main focus aspects of rehabilitation. The insights from this study are useful in rehabilitation practice because they allow therapists to specifically focus on certain parameters that may result in a higher level of skill for the prosthesis user."
},
{
"pmid": "19271888",
"title": "Differences in fixations between grasping and viewing objects.",
"abstract": "Where exactly do people look when they grasp an object? An object is usually contacted at two locations, whereas the gaze can only be at one location at the time. We investigated participants' fixation locations when they grasp objects with the contact positions of both index finger and thumb being visible and compared these to fixation locations when they only viewed the objects. Participants grasped with the index finger at the top and the thumb at the bottom of a flat shape. The main difference between grasping and viewing was that after a saccade roughly directed to the object's center of gravity, participants saccaded more upward and more into the direction of a region that was difficult to contact during grasping. A control experiment indicated that it was not the upper part of the shape that attracted fixation, while the results were consistent with an attraction by the index finger. Participants did not try to fixate both contact locations. Fixations were closer to the object's center of gravity in the viewing than in the grasping task. In conclusion, participants adapt their eye movements to the need of the task, such as acquiring information about regions with high required contact precision in grasping, even with small (graspable) objects. We suggest that in grasping, the main function of fixations is to acquire visual feedback of the approaching digits."
},
{
"pmid": "27597823",
"title": "The Reality of Myoelectric Prostheses: Understanding What Makes These Devices Difficult for Some Users to Control.",
"abstract": "Users of myoelectric prostheses can often find them difficult to control. This can lead to passive-use of the device or total rejection, which can have detrimental effects on the contralateral limb due to overuse. Current clinically available prostheses are \"open loop\" systems, and although considerable effort has been focused on developing biofeedback to \"close the loop,\" there is evidence from laboratory-based studies that other factors, notably improving predictability of response, may be as, if not more, important. Interestingly, despite a large volume of research aimed at improving myoelectric prostheses, it is not currently known which aspect of clinically available systems has the greatest impact on overall functionality and everyday usage. A protocol has, therefore, been designed to assess electromyographic (EMG) skill of the user and predictability of the prosthesis response as significant parts of the control chain, and to relate these to functionality and everyday usage. Here, we present the protocol and results from early pilot work. A set of experiments has been developed. First, to characterize user skill in generating the required level of EMG signal, as well as the speed with which users are able to make the decision to activate the appropriate muscles. Second, to measure unpredictability introduced at the skin-electrode interface, in order to understand the effects of the socket-mounted electrode fit under different loads on the variability of time taken for the prosthetic hand to respond. To evaluate prosthesis user functionality, four different outcome measures are assessed. Using a simple upper limb functional task prosthesis users are assessed for (1) success of task completion, (2) task duration, (3) quality of movement, and (4) gaze behavior. To evaluate everyday usage away from the clinic, the symmetricity of their real-world arm use is assessed using activity monitoring. These methods will later be used to assess a prosthesis user cohort to establish the relative contribution of each control factor to the individual measures of functionality and everyday usage (using multiple regression models). The results will support future researchers, designers, and clinicians in concentrating their efforts on the area that will have the greatest impact on improving prosthesis use."
},
{
"pmid": "21600048",
"title": "The SmartHand transradial prosthesis.",
"abstract": "BACKGROUND\nProsthetic components and control interfaces for upper limb amputees have barely changed in the past 40 years. Many transradial prostheses have been developed in the past, nonetheless most of them would be inappropriate if/when a large bandwidth human-machine interface for control and perception would be available, due to either their limited (or inexistent) sensorization or limited dexterity. SmartHand tackles this issue as is meant to be clinically experimented in amputees employing different neuro-interfaces, in order to investigate their effectiveness. This paper presents the design and on bench evaluation of the SmartHand.\n\n\nMETHODS\nSmartHand design was bio-inspired in terms of its physical appearance, kinematics, sensorization, and its multilevel control system. Underactuated fingers and differential mechanisms were designed and exploited in order to fit all mechatronic components in the size and weight of a natural human hand. Its sensory system was designed with the aim of delivering significant afferent information to the user through adequate interfaces.\n\n\nRESULTS\nSmartHand is a five fingered self-contained robotic hand, with 16 degrees of freedom, actuated by 4 motors. It integrates a bio-inspired sensory system composed of 40 proprioceptive and exteroceptive sensors and a customized embedded controller both employed for implementing automatic grasp control and for potentially delivering sensory feedback to the amputee. It is able to perform everyday grasps, count and independently point the index. The weight (530 g) and speed (closing time: 1.5 seconds) are comparable to actual commercial prostheses. It is able to lift a 10 kg suitcase; slippage tests showed that within particular friction and geometric conditions the hand is able to stably grasp up to 3.6 kg cylindrical objects.\n\n\nCONCLUSIONS\nDue to its unique embedded features and human-size, the SmartHand holds the promise to be experimentally fitted on transradial amputees and employed as a bi-directional instrument for investigating -during realistic experiments- different interfaces, control and feedback strategies in neuro-engineering studies."
},
{
"pmid": "26556065",
"title": "Phantom hand and wrist movements in upper limb amputees are slow but naturally controlled movements.",
"abstract": "After limb amputation, patients often wake up with a vivid perception of the presence of the missing limb, called \"phantom limb\". Phantom limbs have mostly been studied with respect to pain sensation. But patients can experience many other phantom sensations, including voluntary movements. The goal of the present study was to quantify phantom movement kinematics and relate these to intact limb kinematics and to the time elapsed since amputation. Six upper arm and two forearm amputees with various delays since amputation (6months to 32years) performed phantom finger, hand and wrist movements at self-chosen comfortable velocities. The kinematics of the phantom movements was indirectly obtained via the intact limb that synchronously mimicked the phantom limb movements, using a Cyberglove® for measuring finger movements and an inertial measurement unit for wrist movements. Results show that the execution of phantom movements is perceived as \"natural\" but effortful. The types of phantom movements that can be performed are variable between the patients but they could all perform thumb flexion/extension and global hand opening/closure. Finger extension movements appeared to be 24% faster than finger flexion movements. Neither the number of types of phantom movements that can be executed nor the kinematic characteristics were related to the elapsed time since amputation, highlighting the persistence of post-amputation neural adaptation. We hypothesize that the perceived slowness of phantom movements is related to altered proprioceptive feedback that cannot be recalibrated by lack of visual feedback during phantom movement execution."
},
{
"pmid": "8891638",
"title": "Combined eye-head gaze shifts to visual and auditory targets in humans.",
"abstract": "We studied the characteristics of combined eye-head gaze shifts in human subjects to determine whether they used similar strategies when looking at visual (V), auditory (A), and combined (V + A) targets located at several target eccentricities along the horizontal meridian. Subjects displayed considerable variability in the combinations of eye and head movement used to orient to the targets, ranging from those who always aligned their head close to the target, to those who relied predominantly on eye movements and only moved their head when the target was located beyond the limits of ocular motility. For a given subject, there was almost no variability in the amount of eye and head movement in the three target conditions (V, A, V + A). The time to initiate a gaze shift was influenced by stimulus modality and eccentricity. Auditory targets produced the longest latencies when located centrally (less than 20 degrees eccentricity), whereas visual targets evoked the longest latencies when located peripherally (greater than 40 degrees eccentricity). Combined targets (V + A) elicited the shortest latency reaction times at all eccentricities. The peak velocity of gaze shifts was also affected by target modality. At eccentricities between 10 and 30 degrees, peak gaze velocity was greater for movements to visual targets than for movements to auditory targets. Movements to the combined target were of comparable speed with movements to visual targets. Despite the modality-specific differences in reaction latency and peak gaze velocity, the consistency of combinations of eye and head movement within subjects suggests that visual and auditory signals are remapped into a common reference frame for controlling orienting gaze shifts. A likely candidate is the deeper layers of the superior colliculus, because visual and auditory signals converge directly onto the neurons projecting to the eye and head premotor centers."
},
{
"pmid": "22183755",
"title": "Predictive eye movements in natural vision.",
"abstract": "In the natural world, the brain must handle inherent delays in visual processing. This is a problem particularly during dynamic tasks. A possible solution to visuo-motor delays is prediction of a future state of the environment based on the current state and properties of the environment learned from experience. Prediction is well known to occur in both saccades and pursuit movements and is likely to depend on some kind of internal visual model as the basis for this prediction. However, most evidence comes from controlled laboratory studies using simple paradigms. In this study, we examine eye movements made in the context of demanding natural behavior, while playing squash. We show that prediction is a pervasive component of gaze behavior in this context. We show in addition that these predictive movements are extraordinarily precise and operate continuously in time across multiple trajectories and multiple movements. This suggests that prediction is based on complex dynamic visual models of the way that balls move, accumulated over extensive experience. Since eye, head, arm, and body movements all co-occur, it seems likely that a common internal model of predicted visual state is shared by different effectors to allow flexible coordination patterns. It is generally agreed that internal models are responsible for predicting future sensory state for control of body movements. The present work suggests that model-based prediction is likely to be a pervasive component in natural gaze control as well."
},
{
"pmid": "31517965",
"title": "Quantitative Eye Gaze and Movement Differences in Visuomotor Adaptations to Varying Task Demands Among Upper-Extremity Prosthesis Users.",
"abstract": "Importance\nNew treatments for upper-limb amputation aim to improve movement quality and reduce visual attention to the prosthesis. However, evaluation is limited by a lack of understanding of the essential features of human-prosthesis behavior and by an absence of consistent task protocols.\n\n\nObjective\nTo evaluate whether task selection is a factor in visuomotor adaptations by prosthesis users to accomplish 2 tasks easily performed by individuals with normal arm function.\n\n\nDesign, Setting, and Participants\nThis cross-sectional study was conducted in a single research center at the University of Alberta, Edmonton, Alberta, Canada. Upper-extremity prosthesis users were recruited from January 1, 2016, through December 31, 2016, and individuals with normal arm function were recruited from October 1, 2015, through November 30, 2015. Eight prosthesis users and 16 participants with normal arm function were asked to perform 2 goal-directed tasks with synchronized motion capture and eye tracking. Data analysis was performed from December 3, 2018, to April 15, 2019.\n\n\nMain Outcome and Measures\nMovement time, eye fixation, and range of motion of the upper body during 2 object transfer tasks (cup and box) were the main outcomes.\n\n\nResults\nA convenience sample comprised 8 male prosthesis users with acquired amputation (mean [range] age, 45 [30-64] years), along with 16 participants with normal arm function (8 [50%] of whom were men; mean [range] age, 26 [18-43] years; mean [range] height, 172.3 [158.0-186.0] cm; all right handed). Prosthesis users spent a disproportionately prolonged mean (SD) time in grasp and release phases when handling the cups (grasp: 2.0 [2.3] seconds vs 0.9 [0.8] seconds; P < .001; release: 1.1 [0.6] seconds vs 0.7 [0.4] seconds; P < .001). Prosthesis users also had increased mean (SD) visual fixations on the hand for the cup compared with the box task during reach (10.2% [12.1%] vs 2.2% [2.8%]) and transport (37.1% [9.7%] vs 22.3% [7.6%]). Fixations on the hand for both tasks were significantly greater for prosthesis users compared with normative values. Prosthesis users had significantly more trunk flexion and extension for the box task compared with the cup task (mean [SD] trunk range of motion, 32.1 [10.7] degrees vs 21.2 [3.7] degrees; P = .01), with all trunk motions greater than normative values. The box task required greater shoulder movements compared with the cup task for prosthesis users (mean [SD] flexion and extension; 51.3 [12.6] degrees vs 41.0 [9.4] degrees, P = .01; abduction and adduction: 40.5 [7.2] degrees vs 32.3 [5.1] degrees, P = .02; rotation: 50.6 [15.7] degrees vs 35.5 [10.0] degrees, P = .02). However, other than shoulder abduction and adduction for the box task, these values were less than those seen for participants with normal arm function.\n\n\nConclusions and Relevance\nThis study suggests that prosthesis users have an inherently different way of adapting to varying task demands, therefore suggesting that task selection is crucial in evaluating visuomotor performance. The cup task required greater compensatory visual fixations and prolonged grasp and release movements, and the box task required specific kinematic compensatory strategies as well as increased visual fixation. This is the first study to date to examine visuomotor differences in prosthesis users across varying task demands, and the findings appear to highlight the advantages of quantitative assessment in understanding human-prosthesis interaction."
},
{
"pmid": "11517279",
"title": "Eye-hand coordination in object manipulation.",
"abstract": "We analyzed the coordination between gaze behavior, fingertip movements, and movements of the manipulated object when subjects reached for and grasped a bar and moved it to press a target-switch. Subjects almost exclusively fixated certain landmarks critical for the control of the task. Landmarks at which contact events took place were obligatory gaze targets. These included the grasp site on the bar, the target, and the support surface where the bar was returned after target contact. Any obstacle in the direct movement path and the tip of the bar were optional landmarks. Subjects never fixated the hand or the moving bar. Gaze and hand/bar movements were linked concerning landmarks, with gaze leading. The instant that gaze exited a given landmark coincided with a kinematic event at that landmark in a manner suggesting that subjects monitored critical kinematic events for phasic verification of task progress and subgoal completion. For both the obstacle and target, subjects directed saccades and fixations to sites that were offset from the physical extension of the objects. Fixations related to an obstacle appeared to specify a location around which the extending tip of the bar should travel. We conclude that gaze supports hand movement planning by marking key positions to which the fingertips or grasped object are subsequently directed. The salience of gaze targets arises from the functional sensorimotor requirements of the task. We further suggest that gaze control contributes to the development and maintenance of sensorimotor correlation matrices that support predictive motor control in manipulation."
},
{
"pmid": "20667803",
"title": "Standardization of automated analyses of oculomotor fixation and saccadic behaviors.",
"abstract": "In an effort towards standardization, this paper evaluates the performance of five eye movement classification algorithms in terms of their assessment of oculomotor fixation and saccadic behavior. The results indicate that performance of these five commonly used algorithms vary dramatically even in the case of a simple stimulus evoked task using a single, common threshold value. The important contributions of this paper are: 1) evaluation and comparison of performance of five algorithms to classify specific oculomotor behavior 2) introduction and comparison of new standardized scores to provide more reliable classification performance 3) logic for a reasonable threshold value selection for any eye movement classification algorithm based on the standardized scores and 4) logic for establishing a criterion-based baseline for performance comparison between any eye movement classification algorithms. Proposed techniques enable efficient and objective clinical applications providing means to assure meaningful automated eye movement classification."
},
{
"pmid": "10755142",
"title": "The roles of vision and eye movements in the control of activities of daily living.",
"abstract": "The aim of this study was to determine the pattern of fixations during the performance of a well-learned task in a natural setting (making tea), and to classify the types of monitoring action that the eyes perform. We used a head-mounted eye-movement video camera, which provided a continuous view of the scene ahead, with a dot indicating foveal direction with an accuracy of about 1 deg. A second video camera recorded the subject's activities from across the room. The videos were linked and analysed frame by frame. Foveal direction was always close to the object being manipulated, and very few fixations were irrelevant to the task. The first object-related fixation typically led the first indication of manipulation by 0.56 s, and vision moved to the next object about 0.61 s before manipulation of the previous object was complete. Each object-related act that did not involve a waiting period lasted an average of 3.3 s and involved about 7 fixations. Roughly a third of all fixations on objects could be definitely identified with one of four monitoring functions: locating objects used later in the process, directing the hand or object in the hand to a new location, guiding the approach of one object to another (e.g. kettle and lid), and checking the state of some variable (e.g. water level). We conclude that although the actions of tea-making are 'automated' and proceed with little conscious involvement, the eyes closely monitor every step of the process. This type of unconscious attention must be a common phenomenon in everyday life."
},
{
"pmid": "16516530",
"title": "Eye movements and the control of actions in everyday life.",
"abstract": "The patterns of eye movement that accompany static activities such as reading have been studied since the early 1900s, but it is only since head-mounted eye trackers became available in the 1980s that it has been possible to study active tasks such as walking, driving, playing ball games and ordinary everyday activities like food preparation. This review examines the ways that vision contributes to the organization of such activities, and in particular how eye movements are used to locate the information needed by the motor system in the execution of each act. Major conclusions are that the eyes are proactive, typically seeking out the information required in the second before each act commences, although occasional 'look ahead' fixations are made to establish the locations of objects for use further into the future. Gaze often moves on before the last act is complete, indicating the presence of an information buffer. Each task has a characteristic but flexible pattern of eye movements that accompanies it, and this pattern is similar between individuals. The eyes rarely visit objects that are irrelevant to the action, and the conspicuity of objects (in terms of low-level image statistics) is much less important than their role in the task. Gaze control may involve movements of eyes, head and trunk, and these are coordinated in a way that allows for both flexibility of movement and stability of gaze. During the learning of a new activity, the eyes first provide feedback on the motor performance, but as this is perfected they provide feed-forward direction, seeking out the next object to be acted upon."
},
{
"pmid": "11718795",
"title": "In what ways do eye movements contribute to everyday activities?",
"abstract": "Two recent studies have investigated the relations of eye and hand movements in extended food preparation tasks, and here the results are compared. The tasks could be divided into a series of actions performed on objects. The eyes usually reached the next object in the sequence before any sign of manipulative action, indicating that eye movements are planned into the motor pattern and lead each action. The eyes usually fixated the same object throughout the action upon it, although they often moved on to the next object in the sequence before completion of the preceding action. The specific roles of individual fixations could be identified as locating (establishing the locations of objects for future use), directing (establishing target direction prior to contact), guiding (supervising the relative movements of two or three objects) and checking (establishing whether some particular condition is met, prior to the termination of an action). It is argued that, at the beginning of each action, the oculomotor system is supplied with the identity of the required object, information about its location, and instructions about the nature of the monitoring required during the action. The eye movements during this kind of task are nearly all to task-relevant objects, and thus their control is seen as primarily 'top-down', and influenced very little by the 'intrinsic salience' of objects."
},
{
"pmid": "8008066",
"title": "Where we look when we steer.",
"abstract": "Steering a car requires visual information from the changing pattern of the road ahead. There are many theories about what features a driver might use, and recent attempts to engineer self-steering vehicles have sharpened interest in the mechanisms involved. However, there is little direct information linking steering performance to the driver's direction of gaze. We have made simultaneous recordings of steering-wheel angle and drivers' gaze direction during a series of drives along a tortuous road. We found that drivers rely particularly on the 'tangent point' on the inside of each curve, seeking this point 1-2 s before each bend and returning to it throughout the bend. The direction of this point relative to the car's heading predicts the curvature of the road ahead, and we examine the way this information is used."
},
{
"pmid": "11100157",
"title": "From eye movements to actions: how batsmen hit the ball.",
"abstract": "In cricket, a batsman watches a fast bowler's ball come toward him at a high and unpredictable speed, bouncing off ground of uncertain hardness. Although he views the trajectory for little more than half a second, he can accurately judge where and when the ball will reach him. Batsmen's eye movements monitor the moment when the ball is released, make a predictive saccade to the place where they expect it to hit the ground, wait for it to bounce, and follow its trajectory for 100-200 ms after the bounce. We show how information provided by these fixations may allow precise prediction of the ball's timing and placement. Comparing players with different skill levels, we found that a short latency for the first saccade distinguished good from poor batsmen, and that a cricket player's eye movement strategy contributes to his skill in the game."
},
{
"pmid": "30029228",
"title": "Using synchronized eye and motion tracking to determine high-precision eye-movement patterns during object-interaction tasks.",
"abstract": "This study explores the role that vision plays in sequential object interactions. We used a head-mounted eye tracker and upper-limb motion capture to quantify visual behavior while participants performed two standardized functional tasks. By simultaneously recording eye and motion tracking, we precisely segmented participants' visual data using the movement data, yielding a consistent and highly functionally resolved data set of real-world object-interaction tasks. Our results show that participants spend nearly the full duration of a trial fixating on objects relevant to the task, little time fixating on their own hand when reaching toward an object, and slightly more time-although still very little-fixating on the object in their hand when transporting it. A consistent spatial and temporal pattern of fixations was found across participants. In brief, participants fixate an object to be picked up at least half a second before their hand arrives at the object and stay fixated on the object until they begin to transport it, at which point they shift their fixation directly to the drop-off location of the object, where they stay fixated until the object is successfully released. This pattern provides additional evidence of a common system for the integration of vision and object interaction in humans, and is consistent with theoretical frameworks hypothesizing the distribution of attention to future action targets as part of eye and hand-movement preparation. Our results thus aid the understanding of visual attention allocation during planning of object interactions both inside and outside the field of view."
},
{
"pmid": "29540617",
"title": "Illusory movement perception improves motor control for prosthetic hands.",
"abstract": "To effortlessly complete an intentional movement, the brain needs feedback from the body regarding the movement's progress. This largely nonconscious kinesthetic sense helps the brain to learn relationships between motor commands and outcomes to correct movement errors. Prosthetic systems for restoring function have predominantly focused on controlling motorized joint movement. Without the kinesthetic sense, however, these devices do not become intuitively controllable. We report a method for endowing human amputees with a kinesthetic perception of dexterous robotic hands. Vibrating the muscles used for prosthetic control via a neural-machine interface produced the illusory perception of complex grip movements. Within minutes, three amputees integrated this kinesthetic feedback and improved movement control. Combining intent, kinesthesia, and vision instilled participants with a sense of agency over the robotic movements. This feedback approach for closed-loop control opens a pathway to seamless integration of minds and machines."
},
{
"pmid": "24891493",
"title": "Stereovision and augmented reality for closed-loop control of grasping in hand prostheses.",
"abstract": "OBJECTIVE\nTechnologically advanced assistive devices are nowadays available to restore grasping, but effective and effortless control integrating both feed-forward (commands) and feedback (sensory information) is still missing. The goal of this work was to develop a user friendly interface for the semi-automatic and closed-loop control of grasping and to test its feasibility.\n\n\nAPPROACH\nWe developed a controller based on stereovision to automatically select grasp type and size and augmented reality (AR) to provide artificial proprioceptive feedback. The system was experimentally tested in healthy subjects using a dexterous hand prosthesis to grasp a set of daily objects. The subjects wore AR glasses with an integrated stereo-camera pair, and triggered the system via a simple myoelectric interface.\n\n\nMAIN RESULTS\nThe results demonstrated that the subjects got easily acquainted with the semi-autonomous control. The stereovision grasp decoder successfully estimated the grasp type and size in realistic, cluttered environments. When allowed (forced) to correct the automatic system decisions, the subjects successfully utilized the AR feedback and achieved close to ideal system performance.\n\n\nSIGNIFICANCE\nThe new method implements a high level, low effort control of complex functions in addition to the low level closed-loop control. The latter is achieved by providing rich visual feedback, which is integrated into the real life environment. The proposed system is an effective interface applicable with small alterations for many advanced prosthetic and orthotic/therapeutic rehabilitation devices."
},
{
"pmid": "26529274",
"title": "Sensor fusion and computer vision for context-aware control of a multi degree-of-freedom prosthesis.",
"abstract": "OBJECTIVE\nMyoelectric activity volitionally generated by the user is often used for controlling hand prostheses in order to replicate the synergistic actions of muscles in healthy humans during grasping. Muscle synergies in healthy humans are based on the integration of visual perception, heuristics and proprioception. Here, we demonstrate how sensor fusion that combines artificial vision and proprioceptive information with the high-level processing characteristics of biological systems can be effectively used in transradial prosthesis control.\n\n\nAPPROACH\nWe developed a novel context- and user-aware prosthesis (CASP) controller integrating computer vision and inertial sensing with myoelectric activity in order to achieve semi-autonomous and reactive control of a prosthetic hand. The presented method semi-automatically provides simultaneous and proportional control of multiple degrees-of-freedom (DOFs), thus decreasing overall physical effort while retaining full user control. The system was compared against the major commercial state-of-the art myoelectric control system in ten able-bodied and one amputee subject. All subjects used transradial prosthesis with an active wrist to grasp objects typically associated with activities of daily living.\n\n\nMAIN RESULTS\nThe CASP significantly outperformed the myoelectric interface when controlling all of the prosthesis DOF. However, when tested with less complex prosthetic system (smaller number of DOF), the CASP was slower but resulted with reaching motions that contained less compensatory movements. Another important finding is that the CASP system required minimal user adaptation and training.\n\n\nSIGNIFICANCE\nThe CASP constitutes a substantial improvement for the control of multi-DOF prostheses. The application of the CASP will have a significant impact when translated to real-life scenarious, particularly with respect to improving the usability and acceptance of highly complex systems (e.g., full prosthetic arms) by amputees."
},
{
"pmid": "29580245",
"title": "The clinical relevance of advanced artificial feedback in the control of a multi-functional myoelectric prosthesis.",
"abstract": "BACKGROUND\nTo effectively replace the human hand, a prosthesis should seamlessly respond to user intentions but also convey sensory information back to the user. Restoration of sensory feedback is rated highly by the prosthesis users, and feedback is critical for grasping in able-bodied subjects. Nonetheless, the benefits of feedback in prosthetics are still debated. The lack of consensus is likely due to the complex nature of sensory feedback during prosthesis control, so that its effectiveness depends on multiple factors (e.g., task complexity, user learning).\n\n\nMETHODS\nWe evaluated the impact of these factors with a longitudinal assessment in six amputee subjects, using a clinical setup (socket, embedded control) and a range of tasks (box and blocks, block turn, clothespin and cups relocation). To provide feedback, we have proposed a novel vibrotactile stimulation scheme capable of transmitting multiple variables from a multifunction prosthesis. The subjects wore a bracelet with four by two uniformly placed vibro-tactors providing information on contact, prosthesis state (active function), and grasping force. The subjects also completed a questionnaire for the subjective evaluation of the feedback.\n\n\nRESULTS\nThe tests demonstrated that feedback was beneficial only in the complex tasks (block turn, clothespin and cups relocation), and that the training had an important, task-dependent impact. In the clothespin relocation and block turn tasks, training allowed the subjects to establish successful feedforward control, and therefore, the feedback became redundant. In the cups relocation task, however, the subjects needed some training to learn how to properly exploit the feedback. The subjective evaluation of the feedback was consistently positive, regardless of the objective benefits. These results underline the multifaceted nature of closed-loop prosthesis control as, depending on the context, the same feedback interface can have different impact on performance. Finally, even if the closed-loop control does not improve the performance, it could be beneficial as it seems to improve the subjective experience.\n\n\nCONCLUSIONS\nTherefore, in this study we demonstrate, for the first time, the relevance of an advanced, multi-variable feedback interface for dexterous, multi-functional prosthesis control in a clinically relevant setting."
},
{
"pmid": "28925815",
"title": "Examining the Spatiotemporal Disruption to Gaze When Using a Myoelectric Prosthetic Hand.",
"abstract": "The aim of this study was to provide a detailed account of the spatial and temporal disruptions to eye-hand coordination when using a prosthetic hand during a sequential fine motor skill. Twenty-one able-bodied participants performed 15 trials of the picking up coins task derived from the Southampton Hand Assessment Procedure with their anatomic hand and with a prosthesis simulator while wearing eye-tracking equipment. Gaze behavior results revealed that when using the prosthesis, performance detriments were accompanied by significantly greater hand-focused gaze and a significantly longer time to disengage gaze from manipulations to plan upcoming movements. The study findings highlight key metrics that distinguish disruptions to eye-hand coordination that may have implications for the training of prosthesis use."
},
{
"pmid": "31029174",
"title": "Visual attention, EEG alpha power and T7-Fz connectivity are implicated in prosthetic hand control and can be optimized through gaze training.",
"abstract": "BACKGROUND\nProsthetic hands impose a high cognitive burden on the user that often results in fatigue, frustration and prosthesis rejection. However, efforts to directly measure this burden are sparse and little is known about the mechanisms behind it. There is also a lack of evidence-based training interventions designed to improve prosthesis hand control and reduce the mental effort required to use them. In two experiments, we provide the first direct evaluation of this cognitive burden using measurements of EEG and eye-tracking (Experiment 1), and then explore how a novel visuomotor intervention (gaze training; GT) might alleviate it (Experiment 2).\n\n\nMETHODS\nIn Experiment 1, able-bodied participants (n = 20) lifted and moved a jar, first using their anatomical hand and then using a myoelectric prosthetic hand simulator. In experiment 2, a GT group (n = 12) and a movement training (MT) group (n = 12) trained with the prosthetic hand simulator over three one hour sessions in a picking up coins task, before returning for retention, delayed retention and transfer tests. The GT group received instruction regarding how to use their eyes effectively, while the MT group received movement-related instruction typical in rehabilitation.\n\n\nRESULTS\nExperiment 1 revealed that when using the prosthetic hand, participants performed worse, exhibited spatial and temporal disruptions to visual attention, and exhibited a global decrease in EEG alpha power (8-12 Hz), suggesting increased cognitive effort. Experiment 2 showed that GT was the more effective method for expediting prosthesis learning, optimising visual attention, and lowering conscious control - as indexed by reduced T7-Fz connectivity. Whilst the MT group improved performance, they did not reduce hand-focused visual attention and showed increased conscious movement control. The superior benefits of GT transferred to a more complex tea-making task.\n\n\nCONCLUSIONS\nThese experiments quantify the visual and cortical mechanisms relating to the cognitive burden experienced during prosthetic hand control. They also evidence the efficacy of a GT intervention that alleviated this burden and promoted better learning and transfer, compared to typical rehabilitation instructions. These findings have theoretical and practical implications for prosthesis rehabilitation, the development of emerging prosthesis technologies and for the general understanding of human-tool interactions."
},
{
"pmid": "12478404",
"title": "How far ahead do we look when required to step on specific locations in the travel path during locomotion?",
"abstract": "Spatial-temporal gaze behaviour patterns were analysed as normal participants wearing a mobile eye tracker were required to step on 17 footprints, regularly or irregularly spaced over a 10-m distance, placed in their travel path. We examined the characteristics of two types of gaze fixation with respect to the participants' stepping patterns: footprint fixation; and travel fixation when the gaze is stable and travelling at the speed of whole body. The results showed that travel gaze fixation is a dominant gaze behaviour occupying over 50% of the travel time. It is hypothesised that this gaze behaviour would facilitate acquisition of environmental and self-motion information from the optic flow that is generated during locomotion: this in turn would guide movements of the lower limbs to the appropriate landing targets. When participants did fixate on the landing target they did so on average two steps ahead, about 800-1000 ms before the limb is placed on the target area. This would allow them sufficient time to successfully modify their gait patterns. None of the gaze behaviours was influenced by the placement (regularly versus irregularly spaced) of the footprints or repeated exposures to the travel path. Rather visual information acquired during each trial was used \"de novo\" to modulate gait patterns. This study provides a clear temporal link between gaze and stepping pattern and adds to our understanding of how vision is used to regulate locomotion."
},
{
"pmid": "11545465",
"title": "The coordination of eye, head, and hand movements in a natural task.",
"abstract": "Relatively little is known about movements of the eyes, head, and hands in natural tasks. Normal behavior requires spatial and temporal coordination of the movements in more complex circumstances than are typically studied, and usually provides the opportunity for motor planning. Previous studies of natural tasks have indicated that the parameters of eye and head movements are set by global task constraints. In this experiment, we explore the temporal coordination of eye, head, and hand movements while subjects performed a simple block-copying task. The task involved fixations to gather information about the pattern, as well as visually guided hand movements to pick up and place blocks. Subjects used rhythmic patterns of eye, head, and hand movements in a fixed temporal sequence or coordinative structure. However, the pattern varied according to the immediate task context. Coordination was maintained by delaying the hand movements until the eye was available for guiding the movement. This suggests that observers maintain coordination by setting up a temporary, task-specific synergy between the eye and hand. Head movements displayed considerable flexibility and frequently diverged from the gaze change, appearing instead to be linked to the hand trajectories. This indicates that the coordination of eye and head in gaze changes is usually the consequence of a synergistic linkage rather than an obligatory one. These temporary synergies simplify the coordination problem by reducing the number of control variables, and consequently the attentional demands, necessary for the task."
},
{
"pmid": "21397901",
"title": "The moving phantom: motor execution or motor imagery?",
"abstract": "Amputees who have a phantom limb often report the ability to move this phantom voluntarily. In the literature, phantom limb movements are generally considered to reflect motor imagery rather than motor execution. The aim of this study was to investigate whether amputees distinguish between executing a movement of the phantom limb and imagining moving the missing limb. We examined the capacity of 19 upper-limb amputees to execute and imagine movements of both their phantom and intact limbs. Their behaviour was compared with that of 18 age-matched normal controls. A global questionnaire-based assessment of imagery ability and timed tests showed that amputees can indeed distinguish between motor execution and motor imagery with the phantom limb, and that the former is associated with activity in stump muscles while the latter is not. Amputation reduced the speed of voluntary movements with the phantom limb but did not change the speed of imagined movements, suggesting that the absence of the limb specifically affects the ability to voluntarily move the phantom but does not change the ability to imagine moving the missing limb. These results suggest that under some conditions, for example amputation, the predicted sensory consequences of a motor command are sufficient to evoke the sensation of voluntary movement. They also suggest that the distinction between imagined and executed movements should be taken into consideration when designing research protocols to investigate the analgesic effects of sensorimotor feedback."
},
{
"pmid": "22345089",
"title": "Disentangling motor execution from motor imagery with the phantom limb.",
"abstract": "Amputees can move their phantom limb at will. These 'movements without movements' have generally been considered as motor imagery rather than motor execution, but amputees can in fact perform both executed and imagined movements with their phantom and they report distinct perceptions during each task. Behavioural evidence for this dual ability comes from the fact that executed movements are associated with stump muscle contractions whereas imagined movements are not, and that phantom executed movements are slower than intact hand executed movements whereas the speed of imagined movements is identical for both hands. Since neither execution nor imagination produces any visible movement, we hypothesized that the perceptual difference between these two motor tasks relies on the activation of distinct cerebral networks. Using functional magnetic resonance imaging and changes in functional connectivity (dynamic causal modelling), we examined the activity associated with imagined and executed movements of the intact and phantom hands of 14 upper-limb amputees. Distinct but partially overlapping cerebral networks were active during both executed and imagined phantom limb movements (both performed at the same speed). A region of interest analysis revealed a 'switch' between execution and imagination; during execution there was more activity in the primary somatosensory cortex, the primary motor cortex and the anterior lobe of the cerebellum, while during imagination there was more activity in the parietal and occipital lobes, and the posterior lobe of the cerebellum. In overlapping areas, task-related differences were detected in the location of activation peaks. The dynamic causal modelling analysis further confirmed the presence of a clear neurophysiological distinction between imagination and execution, as motor imagery and motor execution had opposite effects on the supplementary motor area-primary motor cortex network. This is the first imaging evidence that the neurophysiological network activated during phantom limb movements is similar to that of executed movements of intact limbs and differs from the phantom limb imagination network. The dual ability of amputees to execute and imagine movements of their phantom limb and the fact that these two tasks activate distinct cortical networks are important factors to consider when designing rehabilitation programmes for the treatment of phantom limb pain."
},
{
"pmid": "23206549",
"title": "Influence of postural constraints on eye and head latency during voluntary rotations.",
"abstract": "Redirecting gaze towards new targets often requires not only eye movements, but also synergistic rotations of the head, trunk and feet. This study investigates the influence of postural constraints on eye and head latency during voluntary refixations in the horizontal plane in 14 normal subjects. Three postural conditions were presented, (1) sitting in a chair using only eye and head movements, (2) standing without feet movements and (3) standing with feet movement. Head-eye reorientations towards eccentric un-predictable locations were performed towards ±45° and ±90° targets and back towards a central, spatially predictable target. Results showed that postural constraints affected eye latency but only when subjects knew the future location of the target (recentering \"return\" trials). Specifically, relatively longer eye latencies were observed when subjects had to turn their feet back towards the predictable central target. These findings suggest that the additional CNS processing required to reduce degrees of freedom during predictive motion introduces delays to the eye movement in order to efficiently assemble the components of a new motor synergy."
},
{
"pmid": "8817273",
"title": "Goal-directed arm movements change eye-head coordination.",
"abstract": "We compared the head movements accompanying gaze shifts while our subjects executed different manual operations, requiring gaze shifts of about 30 degrees. The different tasks yielded different latencies between gaze shifts and hand movements, and different maximum velocities of the hand. These changes in eye-hand coordination had a clear effect on eye-head coordination: the latencies and maximum velocities of head and hand were correlated. The same correlation between movements of the head and hand was also found within a task. Therefore, the changes in eye-head coordination are not caused by changes in the strategy of the subjects. We conclude that head movements and saccades during gaze shifts are not based on the same command: head movements depend both on the actual saccade and on possible future gaze shifts."
},
{
"pmid": "24758375",
"title": "Visuomotor behaviours when using a myoelectric prosthesis.",
"abstract": "BACKGROUND\nA recent study showed that the gaze patterns of amputee users of myoelectric prostheses differ markedly from those seen in anatomically intact subjects. Gaze behaviour is a promising outcome measures for prosthesis designers, as it appears to reflect the strategies adopted by amputees to compensate for the absence of proprioceptive feedback and uncertainty/delays in the control system, factors believed to be central to the difficulty in using prostheses. The primary aim of our study was to characterise visuomotor behaviours over learning to use a trans-radial myoelectric prosthesis. Secondly, as there are logistical advantages to using anatomically intact subjects in prosthesis evaluation studies, we investigated similarities in visuomotor behaviours between anatomically intact users of a trans-radial prosthesis simulator and experienced trans-radial myoelectric prosthesis users.\n\n\nMETHODS\nIn part 1 of the study, we investigated visuomotor behaviours during performance of a functional task (reaching, grasping and manipulating a carton) in a group of seven anatomically intact subjects over learning to use a trans-radial myoelectric prosthesis simulator (Dataset 1). Secondly, we compared their patterns of visuomotor behaviour with those of four experienced trans-radial myoelectric prosthesis users (Dataset 2). We recorded task movement time, performance on the SHAP test of hand function and gaze behaviour.\n\n\nRESULTS\nDataset 1 showed that while reaching and grasping the object, anatomically intact subjects using the prosthesis simulator devoted around 90% of their visual attention to either the hand or the area of the object to be grasped. This pattern of behaviour did not change with training, and similar patterns were seen in Dataset 2. Anatomically intact subjects exhibited significant increases in task duration at their first attempts to use the prosthesis simulator. At the end of training, the values had decreased and were similar to those seen in Dataset 2.\n\n\nCONCLUSIONS\nThe study provides the first functional description of the gaze behaviours seen during use of a myoelectric prosthesis. Gaze behaviours were found to be relatively insensitive to practice. In addition, encouraging similarities were seen between the amputee group and the prosthesis simulator group."
},
{
"pmid": "21622729",
"title": "Eye guidance in natural vision: reinterpreting salience.",
"abstract": "Models of gaze allocation in complex scenes are derived mainly from studies of static picture viewing. The dominant framework to emerge has been image salience, where properties of the stimulus play a crucial role in guiding the eyes. However, salience-based schemes are poor at accounting for many aspects of picture viewing and can fail dramatically in the context of natural task performance. These failures have led to the development of new models of gaze allocation in scene viewing that address a number of these issues. However, models based on the picture-viewing paradigm are unlikely to generalize to a broader range of experimental contexts, because the stimulus context is limited, and the dynamic, task-driven nature of vision is not represented. We argue that there is a need to move away from this class of model and find the principles that govern gaze allocation in a broader range of settings. We outline the major limitations of salience-based selection schemes and highlight what we have learned from studies of gaze allocation in natural vision. Clear principles of selection are found across many instances of natural vision and these are not the principles that might be expected from picture-viewing studies. We discuss the emerging theoretical framework for gaze allocation on the basis of reward maximization and uncertainty reduction."
},
{
"pmid": "30167674",
"title": "Gaze when reaching to grasp a glass.",
"abstract": "People have often been reported to look near their index finger's contact point when grasping. They have only been reported to look near the thumb's contact point when grasping an opaque object at eye height with a horizontal grip-thus when the region near the index finger's contact point is occluded. To examine to what extent being able to see the digits' final trajectories influences where people look, we compared gaze when reaching to grasp a glass of water or milk that was placed at eye or hip height. Participants grasped the glass and poured its contents into another glass on their left. Surprisingly, most participants looked nearer to their thumb's contact point. To examine whether this was because gaze was biased toward the position of the subsequent action, which was to the left, we asked participants in a second experiment to grasp a glass and either place it or pour its contents into another glass either to their left or right. Most participants' gaze was biased to some extent toward the position of the next action, but gaze was not influenced consistently across participants. Gaze was also not influenced consistently across the experiments for individual participants-even for those who participated in both experiments. We conclude that gaze is not simply determined by the identity of the digit or by details of the contact points, such as their visibility, but that gaze is just as sensitive to other factors, such as where one will manipulate the object after grasping."
}
] |
Frontiers in Genetics | 31824556 | PMC6882287 | 10.3389/fgene.2019.01054 | Sparse Graph Regularization Non-Negative Matrix Factorization Based on Huber Loss Model for Cancer Data Analysis | Non-negative matrix factorization (NMF) is a matrix decomposition method based on the square loss function. To exploit cancer information, the NMF method is often applied to cancer gene expression data to reduce dimensionality. Gene expression data usually contain some noise and outliers, while the original NMF loss function is very sensitive to non-Gaussian noise. To improve the robustness and clustering performance of the algorithm, we propose a sparse graph regularization NMF based on the Huber loss model for cancer data analysis (Huber-SGNMF). The Huber loss is a function between the L1-norm and the L2-norm that can effectively handle non-Gaussian noise and outliers. Taking matrix sparsity and the geometric information of the data into account, sparse penalty and graph regularization terms are introduced into the model to enhance matrix sparsity and capture the data manifold structure. Before the experiments, we first analyzed the robustness of Huber-SGNMF and other models. Experiments on The Cancer Genome Atlas (TCGA) data have shown that Huber-SGNMF performs better than other state-of-the-art methods in sample clustering and differentially expressed gene selection. | Related Work
Non-Negative Matrix Factorization
NMF is a dimensionality reduction method based on partial representation. For a given dataset X = [x_1, x_2, …, x_n] ∈ ℝ^{m×n}, NMF decomposes it into a basic matrix U ∈ ℝ^{m×k} and a coefficient matrix V ∈ ℝ^{k×n}, with the purpose of approximating the original matrix by the product of the two matrices. In general, the factorization rank k is selected according to the number of larger singular values.
For a gene expression data matrix X ∈ ℝ^{m×n}, each row represents a gene measured over the n samples, and each column represents a sample composed of m genes. Moreover, U contains m rows of metagenes and V contains n rows of metapatterns (Liu et al., 2018). Each column of V is the projection of the corresponding sample vector in X onto the basic matrix U (Li et al., 2017). NMF applied to gene expression data is visualized in Figure 1.
Figure 1. The gene expression data matrix X ∈ ℝ^{m×n} is decomposed into a low-dimensional basic matrix U ∈ ℝ^{m×k} and a low-dimensional coefficient matrix V ∈ ℝ^{k×n}. The product of the two low-dimensional matrices approximates the original matrix.
The NMF loss function is minimized as follows:
(1) $\min \|X - UV\|^{2}, \quad \text{s.t. } U \geq 0,\ V \geq 0,$
where ‖·‖ denotes the Frobenius norm of a matrix. Lee and Seung proposed multiplicative iterative update rules to obtain the optimal solution of NMF (Lee and Seung, 1999). The update formulas are as follows:
(2) $u_{ik} = u_{ik} \frac{(XV^{T})_{ik}}{(UVV^{T})_{ik}},$
(3) $v_{kj} = v_{kj} \frac{(U^{T}X)_{kj}}{(U^{T}UV)_{kj}},$
where u_ik and v_kj are elements of U and V, respectively. The non-negativity constraints on U and V only allow additive combinations between different elements, so NMF can learn part-based representations (Cai et al., 2011).
Huber Loss
Data usually contain a small number of outliers and some noise, which can have an adverse effect on model reconstruction. For the noise and outliers in a dataset, the Huber loss applies a weighted L1-norm, because the L1-norm is robust and can effectively handle outliers and noise (Guofa et al., 2011; Yu et al., 2016). For the other, well-behaved data in the dataset, the Huber loss still uses the L2-norm to fit the data. The Huber loss function δ(·) is defined as follows:
(4) $\delta(e) = \begin{cases} e^{2} & \text{if } |e| < c, \\ 2c|e| - c^{2} & \text{if } |e| \geq c, \end{cases}$
where c is the threshold parameter that determines whether the L1-norm or the L2-norm is applied to a data point. This function is bounded and convex, which minimizes the influence of any single anomalous point (Chreiky et al., 2016). The Huber loss is therefore insensitive to the outliers and noise contained in the data, which are often difficult to handle with the squared loss function (Du et al., 2012).
Manifold Regularization
Manifold learning theory (Belkin and Niyogi, 2001) shows that the intrinsic manifold structure of the data can be effectively approximated by the nearest neighbors of the data points. Each data point is connected by edges to its p nearest neighbors. There are many ways to define the edge weights; the most common is 0–1 weighting: W_ij = 1 if and only if nodes i and j are connected by an edge. The advantage of this weighting scheme is that it is easy to compute.
The weight matrix W_ij only measures the closeness between data points. For the low-dimensional representation s_j of the high-dimensional data point x_j, the Euclidean distance O(s_j, s_l) = ‖s_j − s_l‖² is typically used to measure the similarity between two low-dimensional data points. According to the closeness weights W, the smoothness of the low-dimensional representation can be measured as follows:
(5) $R = \frac{1}{2}\sum_{j,l=1}^{N} \|s_{j} - s_{l}\|^{2} W_{jl} = \sum_{j=1}^{N} s_{j}^{T} s_{j} D_{jj} - \sum_{j,l=1}^{N} s_{j}^{T} s_{l} W_{jl} = \mathrm{tr}(VDV^{T}) - \mathrm{tr}(VWV^{T}) = \mathrm{tr}(VLV^{T}),$
where tr(·) denotes the trace of a matrix, D is a diagonal matrix with diagonal elements D_jj = Σ_l W_jl, and the graph Laplacian (Liu et al., 2014) matrix is defined as L = D − W.
We hope that if the high-dimensional data points x_j and x_l are close, then s_j and s_l should also be close in the low-dimensional representation (Cai et al., 2011). Therefore, minimizing R is added to our model to encode the internal geometry of the data. |
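The three building blocks reviewed in this record's related work — the Lee–Seung multiplicative updates (Eqs. 1–3), the Huber loss (Eq. 4) and the kNN-graph Laplacian smoothness term (Eq. 5) — can be sketched in plain NumPy. The snippet below is only an illustrative sketch of those standard components, not an implementation of the Huber-SGNMF algorithm itself; the function names, the threshold c = 1.0, the neighborhood size p = 5 and the toy random data are assumptions introduced for the example.

```python
import numpy as np

def huber(e, c=1.0):
    """Huber loss delta(e) from Eq. (4): quadratic below the threshold c,
    linear (weighted L1-like) above it."""
    e = np.abs(e)
    return np.where(e < c, e ** 2, 2 * c * e - c ** 2)

def knn_graph_laplacian(X, p=5):
    """0-1 weighted p-nearest-neighbor graph over the columns (samples) of X
    and its graph Laplacian L = D - W; Eq. (5) uses tr(V L V^T) as the
    smoothness penalty on the coefficient matrix V."""
    n = X.shape[1]
    # pairwise squared Euclidean distances between sample columns
    d2 = np.sum((X[:, :, None] - X[:, None, :]) ** 2, axis=0)
    W = np.zeros((n, n))
    for j in range(n):
        nbrs = np.argsort(d2[j])[1:p + 1]   # skip the point itself
        W[j, nbrs] = 1.0
    W = np.maximum(W, W.T)                  # symmetrize
    D = np.diag(W.sum(axis=1))
    return D - W, W

def nmf_multiplicative(X, k, n_iter=200, eps=1e-10):
    """Plain Lee-Seung multiplicative updates for min ||X - UV||^2 with
    U, V >= 0 (Eqs. 1-3); no Huber loss, sparsity or graph term here."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    U = rng.random((m, k))
    V = rng.random((k, n))
    for _ in range(n_iter):
        U *= (X @ V.T) / (U @ V @ V.T + eps)
        V *= (U.T @ X) / (U.T @ U @ V + eps)
    return U, V

if __name__ == "__main__":
    # toy non-negative "gene x sample" matrix (100 genes, 30 samples)
    X = np.abs(np.random.default_rng(1).normal(size=(100, 30)))
    U, V = nmf_multiplicative(X, k=4)
    L, W = knn_graph_laplacian(X, p=5)
    print("reconstruction error:", np.linalg.norm(X - U @ V))
    print("graph smoothness tr(V L V^T):", np.trace(V @ L @ V.T))
    print("Huber loss of residuals:", huber(X - U @ V, c=1.0).sum())
```

In graph-regularized variants such as GNMF (Cai et al., 2011), a term of the form tr(VLVᵀ) is added to the reconstruction loss and the multiplicative updates are modified accordingly; the sketch only evaluates the three quantities separately.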
"30352910",
"21173440",
"22539978",
"28586052",
"21119225",
"17483501",
"24700283",
"10548103",
"27448379",
"30961642",
"28186906",
"24565791",
"30442682",
"30858165",
"30396569",
"17547139",
"11125150",
"26672047",
"21609933",
"27067410",
"30984238"
] | [
{
"pmid": "30352910",
"title": "DNA Sequencing of Small Bowel Adenocarcinomas Identifies Targetable Recurrent Mutations in the ERBB2 Signaling Pathway.",
"abstract": "PURPOSE\nLittle is known about the genetic alterations characteristic of small bowel adenocarcinoma (SBA). Our purpose was to identify targetable alterations and develop experimental models of this disease.Experimental Design: Whole-exome sequencing (WES) was completed on 17 SBA patient samples and targeted-exome sequencing (TES) on 27 samples to confirm relevant driver mutations. Two SBA models with ERBB2 kinase activating mutations were tested for sensitivity to anti-ERBB2 agents in vivo and in vitro. Biochemical changes were measured by reverse-phase protein arrays.\n\n\nRESULTS\nWES identified somatic mutations in 4 canonical pathways (WNT, ERBB2, STAT3, and chromatin remodeling), which were validated in the TES cohort. Although APC mutations were present in only 23% of samples, additional WNT-related alterations were seen in 12%. ERBB2 mutations and amplifications were present in 23% of samples. Patients with alterations in the ERBB2 signaling cascade (64%) demonstrated worse clinical outcomes (median survival 70.3 months vs. 109 months; log-rank HR = 2.4, P = 0.03). Two ERBB2-mutated (V842I and Y803H) cell lines were generated from SBA patient samples. Both demonstrated high sensitivity to ERBB2 inhibitor dacomitinib (IC50 < 2.5 nmol/L). In xenografts derived from these samples, treatment with dacomitinib reduced tumor growth by 39% and 59%, respectively, whereas it had no effect in an SBA wild-type ERBB2 model.\n\n\nCONCLUSIONS\nThe in vitro and in vivo models of SBA developed here provide a valuable resource for understanding targetable mutations in this disease. Our findings support clinical efforts to target activating ERBB2 mutations in patients with SBA that harbor these alterations."
},
{
"pmid": "21173440",
"title": "Graph Regularized Nonnegative Matrix Factorization for Data Representation.",
"abstract": "Matrix factorization techniques have been frequently applied in information retrieval, computer vision, and pattern recognition. Among them, Nonnegative Matrix Factorization (NMF) has received considerable attention due to its psychological and physiological interpretation of naturally occurring data whose representation may be parts based in the human brain. On the other hand, from the geometric perspective, the data is usually sampled from a low-dimensional manifold embedded in a high-dimensional ambient space. One then hopes to find a compact representation,which uncovers the hidden semantics and simultaneously respects the intrinsic geometric structure. In this paper, we propose a novel algorithm, called Graph Regularized Nonnegative Matrix Factorization (GNMF), for this purpose. In GNMF, an affinity graph is constructed to encode the geometrical information and we seek a matrix factorization, which respects the graph structure. Our empirical study shows encouraging results of the proposed algorithm in comparison to the state-of-the-art algorithms on real-world problems."
},
{
"pmid": "22539978",
"title": "The anaphase-promoting complex or cyclosome supports cell survival in response to endoplasmic reticulum stress.",
"abstract": "The anaphase-promoting complex or cyclosome (APC/C) is a multi-subunit ubiquitin ligase that regulates exit from mitosis and G1 phase of the cell cycle. Although the regulation and function of APC/C(Cdh1) in the unperturbed cell cycle is well studied, little is known of its role in non-genotoxic stress responses. Here, we demonstrate the role of APC/C(Cdh1) (APC/C activated by Cdh1 protein) in cellular protection from endoplasmic reticulum (ER) stress. Activation of APC/C(Cdh1) under ER stress conditions is evidenced by Cdh1-dependent degradation of its substrates. Importantly, the activity of APC/C(Cdh1) maintains the ER stress checkpoint, as depletion of Cdh1 by RNAi impairs cell cycle arrest and accelerates cell death following ER stress. Our findings identify APC/C(Cdh1) as a regulator of cell cycle checkpoint and cell survival in response to proteotoxic insults."
},
{
"pmid": "28586052",
"title": "A novel approach to select differential pathways associated with hypertrophic cardiomyopathy based on gene co‑expression analysis.",
"abstract": "The present study was designed to develop a novel method for identifying significant pathways associated with human hypertrophic cardiomyopathy (HCM), based on gene co‑expression analysis. The microarray dataset associated with HCM (E‑GEOD‑36961) was obtained from the European Molecular Biology Laboratory‑European Bioinformatics Institute database. Informative pathways were selected based on the Reactome pathway database and screening treatments. An empirical Bayes method was utilized to construct co‑expression networks for informative pathways, and a weight value was assigned to each pathway. Differential pathways were extracted based on weight threshold, which was calculated using a random model. In order to assess whether the co‑expression method was feasible, it was compared with traditional pathway enrichment analysis of differentially expressed genes, which were identified using the significance analysis of microarrays package. A total of 1,074 informative pathways were screened out for subsequent investigations and their weight values were also obtained. According to the threshold of weight value of 0.01057, 447 differential pathways, including folding of actin by chaperonin containing T‑complex protein 1 (CCT)/T‑complex protein 1 ring complex (TRiC), purine ribonucleoside monophosphate biosynthesis and ubiquinol biosynthesis, were obtained. Compared with traditional pathway enrichment analysis, the number of pathways obtained from the co‑expression approach was increased. The results of the present study demonstrated that this method may be useful to predict marker pathways for HCM. The pathways of folding of actin by CCT/TRiC and purine ribonucleoside monophosphate biosynthesis may provide evidence of the underlying molecular mechanisms of HCM, and offer novel therapeutic directions for HCM."
},
{
"pmid": "21119225",
"title": "On epicardial potential reconstruction using regularization schemes with the L1-norm data term.",
"abstract": "The electrocardiographic (ECG) inverse problem is ill-posed and usually solved by regularization schemes. These regularization methods, such as the Tikhonov method, are often based on the L2-norm data and constraint terms. However, L2-norm-based methods inherently provide smoothed inverse solutions that are sensitive to measurement errors, and also lack the capability of localizing and distinguishing multiple proximal cardiac electrical sources. This paper presents alternative regularization schemes employing the L1-norm data term for the reconstruction of epicardial potentials (EPs) from measured body surface potentials (BSPs). During numerical implementation, the iteratively reweighted norm algorithm was applied to solve the L1-norm-related schemes, and measurement noises were considered in the BSP data. The proposed L1-norm data term-based regularization schemes (with L1 and L2 penalty terms of the normal derivative constraint (labelled as L1TV and L1L2)) were compared with the L2-norm data terms (Tikhonov with zero-order and normal derivative constraints, labelled as ZOT and FOT, and the total variation method labelled as L2TV). The studies demonstrated that, with averaged measurement noise, the inverse solutions provided by the L1L2 and FOT algorithms have less relative error values. However, when larger noise occurred in some electrodes (for example, signal lost during measurement), the L1TV and L1L2 methods can obtain more accurate EPs in a robust manner. Therefore the L1-norm data term-based solutions are generally less perturbed by measurement noises, suggesting that the new regularization scheme is promising for providing practical ECG inverse solutions."
},
{
"pmid": "17483501",
"title": "Sparse non-negative matrix factorizations via alternating non-negativity-constrained least squares for microarray data analysis.",
"abstract": "MOTIVATION\nMany practical pattern recognition problems require non-negativity constraints. For example, pixels in digital images and chemical concentrations in bioinformatics are non-negative. Sparse non-negative matrix factorizations (NMFs) are useful when the degree of sparseness in the non-negative basis matrix or the non-negative coefficient matrix in an NMF needs to be controlled in approximating high-dimensional data in a lower dimensional space.\n\n\nRESULTS\nIn this article, we introduce a novel formulation of sparse NMF and show how the new formulation leads to a convergent sparse NMF algorithm via alternating non-negativity-constrained least squares. We apply our sparse NMF algorithm to cancer-class discovery and gene expression data analysis and offer biological analysis of the results obtained. Our experimental results illustrate that the proposed sparse NMF algorithm often achieves better clustering performance with shorter computing time compared to other existing NMF algorithms.\n\n\nAVAILABILITY\nThe software is available as supplementary material."
},
{
"pmid": "24700283",
"title": "CTNNB1 mutational analysis of solid-pseudopapillary neoplasms of the pancreas using endoscopic ultrasound-guided fine-needle aspiration and next-generation deep sequencing.",
"abstract": "BACKGROUND\nSolid-pseudopapillary neoplasm (SPN), a rare neoplasm of the pancreas, frequently harbors mutations in exon 3 of the cadherin-associated protein beta 1 (CTNNB1) gene. Here, we analyzed SPN tissue for CTNNB1 mutations by deep sequencing using next-generation sequencing (NGS).\n\n\nMETHODS\nTissue samples from 7 SPNs and 31 other pancreatic lesions (16 pancreatic ductal adenocarcinomas (PDAC), 11 pancreatic neuroendocrine tumors (PNET), 1 acinar cell carcinoma, 1 autoimmune pancreatitis lesion, and 2 focal pancreatitis lesions) were analyzed by NGS for mutations in exon 3 of CTNNB1.\n\n\nRESULTS\nA single-base-pair missense mutations in exon 3 of CTNNB1 was observed in all 7 SPNs and in 1 of 11 PNET samples. However, mutations were not observed in the tissue samples of any of the 16 PDAC or other four pancreatic disease cases. The variant frequency of CTNNB1 ranged from 5.4 to 48.8 %.\n\n\nCONCLUSIONS\nMutational analysis of CTNNB1 by NGS is feasible and was achieved using SPN samples obtained by endoscopic ultrasound-guided fine needle aspiration."
},
{
"pmid": "10548103",
"title": "Learning the parts of objects by non-negative matrix factorization.",
"abstract": "Is perception of the whole based on perception of its parts? There is psychological and physiological evidence for parts-based representations in the brain, and certain computational theories of object recognition rely on such representations. But little is known about how brains or computers might learn the parts of objects. Here we demonstrate an algorithm for non-negative matrix factorization that is able to learn parts of faces and semantic features of text. This is in contrast to other methods, such as principal components analysis and vector quantization, that learn holistic, not parts-based, representations. Non-negative matrix factorization is distinguished from the other methods by its use of non-negativity constraints. These constraints lead to a parts-based representation because they allow only additive, not subtractive, combinations. When non-negative matrix factorization is implemented as a neural network, parts-based representations emerge by virtue of two properties: the firing rates of neurons are never negative and synaptic strengths do not change sign."
},
{
"pmid": "27448379",
"title": "Graph Regularized Non-Negative Low-Rank Matrix Factorization for Image Clustering.",
"abstract": "Non-negative matrix factorization (NMF) has been one of the most popular methods for feature learning in the field of machine learning and computer vision. Most existing works directly apply NMF on high-dimensional image datasets for computing the effective representation of the raw images. However, in fact, the common essential information of a given class of images is hidden in their low rank parts. For obtaining an effective low-rank data representation, we in this paper propose a non-negative low-rank matrix factorization (NLMF) method for image clustering. For the purpose of improving its robustness for the data in a manifold structure, we further propose a graph regularized NLMF by incorporating the manifold structure information into our proposed objective function. Finally, we develop an efficient alternating iterative algorithm to learn the low-dimensional representation of low-rank parts of images for clustering. Alternatively, we also incorporate robust principal component analysis into our proposed scheme. Experimental results on four image datasets reveal that our proposed methods outperform four representative methods."
},
{
"pmid": "30961642",
"title": "Valproic acid exhibits anti-tumor activity selectively against EGFR/ErbB2/ErbB3-coexpressing pancreatic cancer via induction of ErbB family members-targeting microRNAs.",
"abstract": "BACKGROUND\nDeregulated ErbB signaling plays an important role in tumorigenesis of pancreatic cancer. However, patients with pancreatic cancer benefit little from current existed therapies targeting the ErbB signaling. Here, we explore the potential anti-tumor activity of Valproic acid against pancreatic cancer via targeting ErbB family members.\n\n\nMETHODS\nCell viability assay and apoptosis evaluation were carried out to determine the efficacy of VPA on pancreatic cancer cells. Western blot analyses were performed to determine the expression and activation of proteins. Apoptosis enzyme-linked immunosorbent assay was used to quantify cytoplasmic histone associated DNA fragments. Lentiviral expression system was used to introduce overexpression of exogeneous genes or gene-targeting short hairpin RNAs (shRNAs). qRT-PCR was carried out to analyze the mRNAs and miRNAs expression levels. Tumor xenograft model was established to evaluate the in vivo anti-pancreatic cancer activity of VPA.\n\n\nRESULTS\nVPA preferentially inhibited cell proliferation/survival of, and induced apoptosis in EGFR/ErbB2/ErbB3-coexpressing pancreatic cancer cells within its clinically achievable range [40~100 mg/L (0.24~0.6 mmol/L)]. Mechanistic investigations revealed that VPA treatment resulted in simultaneous significant down-regulation of EGFR, ErbB2, and ErbB3 in pancreatic cancer cells likely via induction of ErbB family members-targeting microRNAs. Moreover, the anti-pancreatic cancer activity of VPA was further validated in tumor xenograft model.\n\n\nCONCLUSIONS\nOur data strongly suggest that VPA may be added to the treatment regimens for pancreatic cancer patients with co-overexpression of the ErbB family members."
},
{
"pmid": "28186906",
"title": "Regularized Non-Negative Matrix Factorization for Identifying Differentially Expressed Genes and Clustering Samples: A Survey.",
"abstract": "Non-negative Matrix Factorization (NMF), a classical method for dimensionality reduction, has been applied in many fields. It is based on the idea that negative numbers are physically meaningless in various data-processing tasks. Apart from its contribution to conventional data analysis, the recent overwhelming interest in NMF is due to its newly discovered ability to solve challenging data mining and machine learning problems, especially in relation to gene expression data. This survey paper mainly focuses on research examining the application of NMF to identify differentially expressed genes and to cluster samples, and the main NMF models, properties, principles, and algorithms with its various generalizations, extensions, and modifications are summarized. The experimental results demonstrate the performance of the various NMF algorithms in identifying differentially expressed genes and clustering samples."
},
{
"pmid": "24565791",
"title": "Progressive image denoising through hybrid graph Laplacian regularization: a unified framework.",
"abstract": "Recovering images from corrupted observations is necessary for many real-world applications. In this paper, we propose a unified framework to perform progressive image recovery based on hybrid graph Laplacian regularized regression. We first construct a multiscale representation of the target image by Laplacian pyramid, then progressively recover the degraded image in the scale space from coarse to fine so that the sharp edges and texture can be eventually recovered. On one hand, within each scale, a graph Laplacian regularization model represented by implicit kernel is learned, which simultaneously minimizes the least square error on the measured samples and preserves the geometrical structure of the image data space. In this procedure, the intrinsic manifold structure is explicitly considered using both measured and unmeasured samples, and the nonlocal self-similarity property is utilized as a fruitful resource for abstracting a priori knowledge of the images. On the other hand, between two successive scales, the proposed model is extended to a projected high-dimensional feature space through explicit kernel mapping to describe the interscale correlation, in which the local structure regularity is learned and propagated from coarser to finer scales. In this way, the proposed algorithm gradually recovers more and more image details and edges, which could not been recovered in previous scale. We test our algorithm on one typical image recovery task: impulse noise removal. Experimental results on benchmark test images demonstrate that the proposed method achieves better performance than state-of-the-art algorithms."
},
{
"pmid": "30442682",
"title": "Advances in HER2-Targeted Therapy: Novel Agents and Opportunities Beyond Breast and Gastric Cancer.",
"abstract": "The introduction of HER2-targeted therapy for breast and gastric patients with ERBB2 (HER2) amplification/overexpression has led to dramatic improvements in oncologic outcomes. In the past 20 years, five HER2-targeted therapies have been FDA approved, with four approved in the past 8 years. HER2-targeted therapy similarly was found to improve outcomes in HER2-positive gastric cancer. Over the past decade, with the introduction of next-generation sequencing into clinical practice, our understanding of HER2 biology has dramatically improved. We have recognized that HER2 amplification is not limited to breast and gastric cancer but is also found in a variety of tumor types such as colon cancer, bladder cancer, and biliary cancer. Furthermore, HER2-targeted therapy has signal of activity in several tumor types. In addition to HER2 amplification and overexpression, there is also increased recognition of activating HER2 mutations and their potential therapeutic relevance. Furthermore, there is a rapidly growing number of new therapeutics targeting HER2 including small-molecule inhibitors, antibody-drug conjugates, and bispecific antibodies. Taken together, an increasing number of patients are likely to benefit from approved and emerging HER2-targeted therapies."
},
{
"pmid": "30858165",
"title": "Rationale for Using Irreversible Epidermal Growth Factor Receptor Inhibitors in Combination with Phosphatidylinositol 3-Kinase Inhibitors for Advanced Head and Neck Squamous Cell Carcinoma.",
"abstract": "Head and neck squamous cell carcinoma (HNSCC) is a common and debilitating form of cancer characterized by poor patient outcomes and low survival rates. In HNSCC, genetic aberrations in phosphatidylinositol 3-kinase (PI3K) and epidermal growth factor receptor (EGFR) pathway genes are common, and small molecules targeting these pathways have shown modest effects as monotherapies in patients. Whereas emerging preclinical data support the combined use of PI3K and EGFR inhibitors in HNSCC, in-human studies have displayed limited clinical success so far. Here, we examined the responses of a large panel of patient-derived HNSCC cell lines to various combinations of PI3K and EGFR inhibitors, including EGFR agents with varying specificity and mechanistic characteristics. We confirmed the efficacy of PI3K and EGFR combination therapies, observing synergy with α isoform-selective PI3K inhibitor HS-173 and irreversible EGFR/ERBB2 dual inhibitor afatinib in most models tested. Surprisingly, however, our results demonstrated only modest improvement in response to HS-173 with reversible EGFR inhibitor gefitinib. This difference in efficacy was not explained by differences in ERBB target selectivity between afatinib and gefitinib; despite effectively disrupting ERBB2 phosphorylation, the addition of ERBB2 inhibitor CP-724714 failed to enhance the effect of HS-173 gefitinib dual therapy. Accordingly, although irreversible ERBB inhibitors showed strong synergistic activity with HS-173 in our models, none of the reversible ERBB inhibitors were synergistic in our study. Therefore, our results suggest that the ERBB inhibitor mechanism of action may be critical for enhanced synergy with PI3K inhibitors in HNSCC patients and motivate further preclinical studies for ERBB and PI3K combination therapies."
},
{
"pmid": "30396569",
"title": "Cdh1 degradation is mediated by APC/C-Cdh1 and SCF-Cdc4 in budding yeast.",
"abstract": "Cdh1, a substrate-recognition subunit of anaphase-promoting complex/cyclosome (APC/C), is a tumor suppressor, and it is downregulated in various tumor cells in humans. APC/C-Cdh1 is activated from late M phase to G1 phase by antagonizing Cdk1-mediated inhibitory phosphorylation. However, how Cdh1 protein levels are properly regulated is ill-defined. Here we show that Cdh1 is degraded via APC/C-Cdh1 and Skp1-Cullin1-F-box (SCF)-Cdc4 in the budding yeast Saccharomyces cerevisiae. Cdh1 degradation was promoted by forced localization of Cdh1 into the nucleus, where APC/C and SCF are present. Cdk1 promoted APC/C-Cdh1-mediated Cdh1 degradation, whereas polo kinase Cdc5 elicited SCF-Cdc4-mediated degradation. Thus, Cdh1 degradation is controlled via multiple pathways."
},
{
"pmid": "17547139",
"title": "The equivalence of half-quadratic minimization and the gradient linearization iteration.",
"abstract": "A popular way to restore images comprising edges is to minimize a cost function combining a quadratic data-fidelity term and an edge-preserving (possibly nonconvex) regularizalion term. Mainly because of the latter term, the calculation of the solution is slow and cumbersome. Half-quadratic (HQ) minimization (multiplicative form) was pioneered by Geman and Reynolds (1992) in order to alleviate the computational task in the context of image reconstruction with nonconvex regularization. By promoting the idea of locally homogeneous image models with a continuous-valued line process, they reformulated the optimization problem in terms of an augmented cost function which is quadratic with respect to the image and separable with respect to the line process, hence the name \"half quadratic.\" Since then, a large amount of papers were dedicated to HQ minimization and important results--including edge-preservation along with convex regularization and convergence-have been obtained. In this paper, we show that HQ minimization (multiplicative form) is equivalent to the most simple and basic method where the gradient of the cost function is linearized at each iteration step. In fact, both methods give exactly the same iterations. Furthermore, connections of HQ minimization with other methods, such as the quasi-Newton method and the generalized Weiszfeld's method, are straightforward."
},
{
"pmid": "11125150",
"title": "Nonlinear dimensionality reduction by locally linear embedding.",
"abstract": "Many areas of science depend on exploratory data analysis and visualization. The need to analyze large amounts of multivariate data raises the fundamental problem of dimensionality reduction: how to discover compact representations of high-dimensional data. Here, we introduce locally linear embedding (LLE), an unsupervised learning algorithm that computes low-dimensional, neighborhood-preserving embeddings of high-dimensional inputs. Unlike clustering methods for local dimensionality reduction, LLE maps its inputs into a single global coordinate system of lower dimensionality, and its optimizations do not involve local minima. By exploiting the local symmetries of linear reconstructions, LLE is able to learn the global structure of nonlinear manifolds, such as those generated by images of faces or documents of text."
},
{
"pmid": "26672047",
"title": "Characteristic Gene Selection Based on Robust Graph Regularized Non-Negative Matrix Factorization.",
"abstract": "Many methods have been considered for gene selection and analysis of gene expression data. Nonetheless, there still exists the considerable space for improving the explicitness and reliability of gene selection. To this end, this paper proposes a novel method named robust graph regularized non-negative matrix factorization for characteristic gene selection using gene expression data, which mainly contains two aspects: Firstly, enforcing L21-norm minimization on error function which is robust to outliers and noises in data points. Secondly, it considers that the samples lie in low-dimensional manifold which embeds in a high-dimensional ambient space, and reveals the data geometric structure embedded in the original data. To demonstrate the validity of the proposed method, we apply it to gene expression data sets involving various human normal and tumor tissue samples and the results demonstrate that the method is effective and feasible."
},
{
"pmid": "21609933",
"title": "Upregulation of glycogen synthase kinase 3β in human colorectal adenocarcinomas correlates with accumulation of CTNNB1.",
"abstract": "INTRODUCTION\nMutations of the adenomatous polyposis coli (APC) tumor suppressor gene or the CTNNB1 protooncogene have been implicated in the initiation of most human colorectal epithelial neoplasms. Glycogen synthase kinase 3β (GSK3B) serves a critical role in regulating their functions by phosphorylating both APC and CTNNB1 to facilitate CTNNB1 degradation. The current studies were performed to investigate whether GSK3B itself is regulated during the process of colorectal tumorigenesis.\n\n\nPATIENTS AND METHODS\nWe examined the expression of GSK3B and CTNNB1 in tissue samples from 24 human colorectal adenocarcinomas by Western immunoblotting analysis, kinase activity assays and immunohistochemistry. Normal colonic mucosa from the same colectomy specimens were used as a reference for comparison.\n\n\nRESULTS\nWe demonstrated that GSK3B expression levels and kinase activities were markedly and significantly increased in colorectal adenocarcinomas in all 24 cases compared with paired adjacent normal-appearing colonic mucosa. These increases correlated with significantly increased expression of CTNNB1 in the same tumors. Similar results were obtained in several cultured human colon cancer cell lines, demonstrating GSK3B levels correlated with CTNNB1 expression.\n\n\nCONCLUSION\nThough APC and CTNNB1 regulation by GSK3B are frequently disrupted by mutations in colon cancers, our observations suggest that increased functional GSK3B might drive other growth-promoting signals in colorectal tumorigenesis."
},
{
"pmid": "27067410",
"title": "The association between CDH1 promoter methylation and patients with ovarian cancer: a systematic meta-analysis.",
"abstract": "BACKGROUND\nThe down-regulation of E-cadherin gene (CDH1) expression has been regarded as an important event in cancer invasion and metastasis. However, the association between CDH1 promoter methylation and ovarian cancer remains unclear. A meta-analysis was conducted to evaluate the potential role of CDH1 promoter methylation in ovarian cancer.\n\n\nMETHODS\nRelevant articles were identified by searches of PubMed, EMBASE, Cochrane Library, CNKI and Wanfang databases. The pooled odds ratio (OR) and corresponding 95 % confidence interval (CI) were calculated to assess the strength of association.\n\n\nRESULTS\nNine studies were performed using the fixed-effects model in this study, including 485 cancer tissues and 255 nonmalignant tissues. The findings showed that CDH1 promoter methylation had an increased risk of ovarian cancer in cancer tissues (OR = 8.71, P < 0.001) in comparison with nonmalignant tissues. Subgroup analysis of the ethnicity showed that the OR value of CDH1 methylation in Asian population subgroup (OR = 13.20, P < 0.001) was higher than that in Caucasian population subgroup (OR = 3.84, P = 0.005). No significant association was found between ovarian cancer and low malignant potential (LMP) tumor (P = 0.096) among 2 studies, and between CDH1 promoter methylation and tumor stage and tumor histology (all P > 0.05). There was not any evidence of publication bias by Egger's test (all P > 0.05).\n\n\nCONCLUSIONS\nCDH1 promoter methylation can be a potential biomarker in ovarian cancer risk prediction, especially Asians can be more susceptible to CDH1 methylation. However, more studies are still done in the future."
},
{
"pmid": "30984238",
"title": "Simultaneous Interrogation of Cancer Omics to Identify Subtypes With Significant Clinical Differences.",
"abstract": "Recent advances in high-throughput sequencing have accelerated the accumulation of omics data on the same tumor tissue from multiple sources. Intensive study of multi-omics integration on tumor samples can stimulate progress in precision medicine and is promising in detecting potential biomarkers. However, current methods are restricted owing to highly unbalanced dimensions of omics data or difficulty in assigning weights between different data sources. Therefore, the appropriate approximation and constraints of integrated targets remain a major challenge. In this paper, we proposed an omics data integration method, named high-order path elucidated similarity (HOPES). HOPES fuses the similarities derived from various omics data sources to solve the dimensional discrepancy, and progressively elucidate the similarities from each type of omics data into an integrated similarity with various high-order connected paths. Through a series of incremental constraints for commonality, HOPES can take both specificity of single data and consistency between different data types into consideration. The fused similarity matrix gives global insight into patients' correlation and efficiently distinguishes subgroups. We tested the performance of HOPES on both a simulated dataset and several empirical tumor datasets. The test datasets contain three omics types including gene expression, DNA methylation, and microRNA data for five different TCGA cancer projects. Our method was shown to achieve superior accuracy and high robustness compared with several benchmark methods on simulated data. Further experiments on five cancer datasets demonstrated that HOPES achieved superior performances in cancer classification. The stratified subgroups were shown to have statistically significant differences in survival. We further located and identified the key genes, methylation sites, and microRNAs within each subgroup. They were shown to achieve high potential prognostic value and were enriched in many cancer-related biological processes or pathways."
}
] |
Frontiers in Genetics | 31824573 | PMC6883002 | 10.3389/fgene.2019.01182 | Graph Embedding Deep Learning Guides Microbial Biomarkers' Identification | Microbiome-wide association studies aim to figure out the relationship between microorganisms and humans, with the goal of discovering relevant biomarkers to guide disease diagnosis. However, microbiome data are complex, with high noise and dimensionality. Traditional machine learning methods are limited by the models' representation ability and cannot learn complex patterns from the data. Recently, deep learning has been widely applied to fields ranging from text processing to image recognition due to its flexibility and high capacity. But deep learning models must be trained with enough data in order to achieve good performance, which is impractical in reality. In addition, deep learning is considered a black box and is hard to interpret. These factors have kept deep learning from being widely used in microbiome-wide association studies. In this work, we construct a sparse microbial interaction network and embed this graph into the deep model to alleviate the risk of overfitting and improve performance. Further, we explore a Graph Embedding Deep Feedforward Network (GEDFN) to conduct feature selection and guide the identification of meaningful microbial markers. Based on the experimental results, we verify the feasibility of combining the microbial graph model with the deep learning model, and demonstrate the feasibility of applying deep learning and feature selection to microbial data. Our main contributions are: firstly, we utilize different methods to construct a variety of microbial interaction networks and combine the networks via graph embedding deep learning. Secondly, we introduce a feature selection method based on graph embedding and validate the biological meaning of the microbial markers. The code is available at https://github.com/MicroAVA/GEDFN.git. | Related Work
Microbial Interaction Network
Because of the various relationships between microorganisms, such as symbiosis and competition, and because of the complex structure and function that microbial communities derive from their dynamic properties, a network is a good way to represent these complex relationships. Understanding microbial interactions can help us understand microbial functions. System-oriented graph theory can facilitate microbial analysis and enhance our understanding of complex ecosystems and evolutionary processes (Faust et al., 2012; Layeghifard et al., 2017). However, most microorganisms are uncultured, so we can only construct microbial interaction networks from high-throughput sequencing data. At present, there are many computational methods for constructing microbial interaction networks; in theory, any method that quantifies relationships between features can be used. For example, the Bray–Curtis dissimilarity can be used to measure species abundance similarity (Bray and Curtis, 1957). The Pearson correlation coefficient is used to evaluate linear relationships and the Spearman correlation coefficient measures rank relationships (Mukaka, 2012). CoNet uses an ensemble approach that combines different comparison metrics to detect different kinds of relationships (Faust and Raes, 2016). Maximum mutual information is designed to capture broader relationships, not limited to specific function families (Reshef et al., 2011). MENA applies random matrix theory to microbial analysis, and experiments show it is robust to noise and to the choice of threshold (Deng et al., 2012). Sparse Correlations for Compositional data (SparCC) is a tool based on Aitchison's log-ratio transformation for microbial composition analysis (Friedman and Alm, 2012). SParse InversE Covariance Estimation for Ecological Association Inference (SPIEC-EASI) combines a logarithmic data transformation with a graphical model inference framework to build an association network (Kurtz et al., 2015).
Feature Selection
Real biomedical data, especially various omics data with high dimensionality and noise, often suffer from feature redundancy. Feature selection is a data preprocessing step that selects relevant features from a large number of features to improve subsequent learning tasks (Li et al., 2017).
There are mainly three kinds of feature selection methods: filter, wrapper and embedded methods. A filter approach selects a subset of features and then trains the learner, so the feature selection process is independent of the subsequent learner; this is equivalent to filtering the initial features first and then training the model on the filtered features. However, filter methods often ignore features that are helpful for classification, and many of them are greedy, single-feature methods that assume each feature is independent, which is often not the case for microbial data. A wrapper approach directly uses the performance of the learner as the evaluation criterion for a feature subset; in other words, it selects the feature subset that yields the best performance for a given learner. Compared with filter methods, wrapper methods can evaluate the result of feature selection by classification performance to improve it, but the selection process requires training the learner iteratively and is computationally expensive (Li et al., 2017). Embedded feature selection integrates feature selection into the learning and training process, so that both are completed within the same optimization; in other words, features are selected automatically during training.
Feature selection is a traditional machine learning research field with many methods; for more information, please refer to the literature (Li et al., 2017). Previous work proposed a feature selection method based on Deep Forest (Zhu et al., 2018); however, there is little work on microbiome-wide association studies using deep neural networks, and little research has been done on feature selection from the perspective of the embedding approach. A further challenge for feature selection based on a microbial network is that no reference microbial network is currently available, and the commonly used statistics-based interaction network methods may lead to high false-positive rates due to compositional bias (Gloor et al., 2017). |
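As a companion to the survey in this record's related work, the sketch below shows (i) a naive thresholded Spearman co-occurrence network and (ii) a single-feature filter ranking, i.e. the simplest possible instances of the ideas discussed in the two subsections. It is an illustration only: the threshold, function names and toy data are assumptions, and — as the text itself warns — plain correlation networks on relative abundances suffer from compositional bias, so tools such as SparCC or SPIEC-EASI should be preferred on real data.

```python
import numpy as np
from scipy.stats import spearmanr

def cooccurrence_network(abundance, threshold=0.6):
    """Naive microbial co-occurrence network: Spearman rank correlation
    between taxa (columns), thresholded into a sparse 0-1 adjacency matrix.
    NOTE: this ignores compositional bias; SparCC or SPIEC-EASI are the
    appropriate tools for real relative-abundance data."""
    rho, _ = spearmanr(abundance)               # taxa x taxa correlation matrix
    A = (np.abs(rho) >= threshold).astype(float)
    np.fill_diagonal(A, 0.0)
    return A

def filter_rank_features(abundance, labels):
    """Simple filter-style feature selection: rank taxa by the absolute
    Pearson (point-biserial) correlation with a binary phenotype. Each taxon
    is scored independently, which is exactly the limitation of greedy
    single-feature filters noted in the text."""
    y = np.asarray(labels, dtype=float)
    scores = np.array([abs(np.corrcoef(abundance[:, j], y)[0, 1])
                       for j in range(abundance.shape[1])])
    return np.argsort(scores)[::-1], scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    abundance = rng.random((50, 200))           # toy data: 50 samples x 200 taxa
    labels = rng.integers(0, 2, size=50)        # binary disease status
    A = cooccurrence_network(abundance, threshold=0.6)
    ranked, scores = filter_rank_features(abundance, labels)
    print("network edges:", int(A.sum() / 2))
    print("top 10 taxa by filter score:", ranked[:10])
```

An adjacency matrix of this kind is what a graph-embedded model such as GEDFN would consume as the connectivity constraint on its first layer; here it is only built and inspected.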
"27474269",
"21840927",
"29887378",
"22646978",
"15852500",
"30971806",
"27853510",
"22807668",
"23028285",
"24629344",
"27383984",
"29187837",
"30154767",
"29850911",
"25950956",
"27916383",
"26017442",
"27095192",
"28953883",
"25420450",
"23638278",
"19460890",
"28179361",
"27400279",
"24076764",
"26718401",
"22174245",
"17943116",
"27396567",
"26905627",
"28253908"
] | [
{
"pmid": "27474269",
"title": "Deep learning for computational biology.",
"abstract": "Technological advances in genomics and imaging have led to an explosion of molecular and cellular profiling data from large numbers of samples. This rapid increase in biological data dimension and acquisition rate is challenging conventional analysis strategies. Modern machine learning methods, such as deep learning, promise to leverage very large data sets for finding hidden structure within them, and for making accurate predictions. In this review, we discuss applications of this new breed of analysis approaches in regulatory genomics and cellular imaging. We provide background of what deep learning is, and the settings in which it can be successfully applied to derive biological insights. In addition to presenting specific applications and providing tips for practical use, we also highlight possible pitfalls and limitations to guide computational biologists when and how to make the most use of this new technology."
},
{
"pmid": "21840927",
"title": "The human metagenome: our other genome?",
"abstract": "For about a decade, the human microbiota has been investigated using molecular procedures that are now systematized via metagenomics. Several large scale studies are underway with the goal of establishing a set of reference data, such as catalogues of genes, microbial species and complete genome sequences of strains colonizing the various body sites. A first series of conclusions can be drawn from this 'natural history' approach that will also lay the ground for further studies aiming at understanding--in an ecological perspective--the mechanisms ensuring stable operation of the microbiota in healthy individuals, and how changes in its composition (dysbiosis) may result in diseases."
},
{
"pmid": "29887378",
"title": "Next-Generation Machine Learning for Biological Networks.",
"abstract": "Machine learning, a collection of data-analytical techniques aimed at building predictive models from multi-dimensional datasets, is becoming integral to modern biological research. By enabling one to generate models that learn from large datasets and make predictions on likely outcomes, machine learning can be used to study complex cellular systems such as biological networks. Here, we provide a primer on machine learning for life scientists, including an introduction to deep learning. We discuss opportunities and challenges at the intersection of machine learning and network biology, which could impact disease biology, drug discovery, microbiome research, and synthetic biology."
},
{
"pmid": "22646978",
"title": "Molecular ecological network analyses.",
"abstract": "BACKGROUND\nUnderstanding the interaction among different species within a community and their responses to environmental changes is a central goal in ecology. However, defining the network structure in a microbial community is very challenging due to their extremely high diversity and as-yet uncultivated status. Although recent advance of metagenomic technologies, such as high throughout sequencing and functional gene arrays, provide revolutionary tools for analyzing microbial community structure, it is still difficult to examine network interactions in a microbial community based on high-throughput metagenomics data.\n\n\nRESULTS\nHere, we describe a novel mathematical and bioinformatics framework to construct ecological association networks named molecular ecological networks (MENs) through Random Matrix Theory (RMT)-based methods. Compared to other network construction methods, this approach is remarkable in that the network is automatically defined and robust to noise, thus providing excellent solutions to several common issues associated with high-throughput metagenomics data. We applied it to determine the network structure of microbial communities subjected to long-term experimental warming based on pyrosequencing data of 16 S rRNA genes. We showed that the constructed MENs under both warming and unwarming conditions exhibited topological features of scale free, small world and modularity, which were consistent with previously described molecular ecological networks. Eigengene analysis indicated that the eigengenes represented the module profiles relatively well. In consistency with many other studies, several major environmental traits including temperature and soil pH were found to be important in determining network interactions in the microbial communities examined. To facilitate its application by the scientific community, all these methods and statistical tools have been integrated into a comprehensive Molecular Ecological Network Analysis Pipeline (MENAP), which is open-accessible now (http://ieg2.ou.edu/MENA).\n\n\nCONCLUSIONS\nThe RMT-based molecular ecological network analysis provides powerful tools to elucidate network interactions in microbial communities and their responses to environmental changes, which are fundamentally important for research in microbial ecology and environmental microbiology."
},
{
"pmid": "15852500",
"title": "Minimum redundancy feature selection from microarray gene expression data.",
"abstract": "How to selecting a small subset out of the thousands of genes in microarray data is important for accurate classification of phenotypes. Widely used methods typically rank genes according to their differential expressions among phenotypes and pick the top-ranked genes. We observe that feature sets so obtained have certain redundancy and study methods to minimize it. We propose a minimum redundancy - maximum relevance (MRMR) feature selection framework. Genes selected via MRMR provide a more balanced coverage of the space and capture broader characteristics of phenotypes. They lead to significantly improved class predictions in extensive experiments on 6 gene expression data sets: NCI, Lymphoma, Lung, Child Leukemia, Leukemia, and Colon. Improvements are observed consistently among 4 classification methods: Naive Bayes, Linear discriminant analysis, Logistic regression, and Support vector machines. SUPPLIMENTARY: The top 60 MRMR genes for each of the datasets are listed in http://crd.lbl.gov/~cding/MRMR/. More information related to MRMR methods can be found at http://www.hpeng.net/."
},
{
"pmid": "30971806",
"title": "Deep learning: new computational modelling techniques for genomics.",
"abstract": "As a data-driven science, genomics largely utilizes machine learning to capture dependencies in data and derive novel biological hypotheses. However, the ability to extract new insights from the exponentially increasing volume of genomics data requires more expressive machine learning models. By effectively leveraging large data sets, deep learning has transformed fields such as computer vision and natural language processing. Now, it is becoming the method of choice for many genomics modelling tasks, including predicting the impact of genetic variation on gene regulatory mechanisms such as DNA accessibility and splicing."
},
{
"pmid": "27853510",
"title": "CoNet app: inference of biological association networks using Cytoscape.",
"abstract": "Here we present the Cytoscape app version of our association network inference tool CoNet. Though CoNet was developed with microbial community data from sequencing experiments in mind, it is designed to be generic and can detect associations in any data set where biological entities (such as genes, metabolites or species) have been observed repeatedly. The CoNet app supports Cytoscape 2.x and 3.x and offers a variety of network inference approaches, which can also be combined. Here we briefly describe its main features and illustrate its use on microbial count data obtained by 16S rDNA sequencing of arctic soil samples. The CoNet app is available at: http://apps.cytoscape.org/apps/conet."
},
{
"pmid": "22807668",
"title": "Microbial co-occurrence relationships in the human microbiome.",
"abstract": "The healthy microbiota show remarkable variability within and among individuals. In addition to external exposures, ecological relationships (both oppositional and symbiotic) between microbial inhabitants are important contributors to this variation. It is thus of interest to assess what relationships might exist among microbes and determine their underlying reasons. The initial Human Microbiome Project (HMP) cohort, comprising 239 individuals and 18 different microbial habitats, provides an unprecedented resource to detect, catalog, and analyze such relationships. Here, we applied an ensemble method based on multiple similarity measures in combination with generalized boosted linear models (GBLMs) to taxonomic marker (16S rRNA gene) profiles of this cohort, resulting in a global network of 3,005 significant co-occurrence and co-exclusion relationships between 197 clades occurring throughout the human microbiome. This network revealed strong niche specialization, with most microbial associations occurring within body sites and a number of accompanying inter-body site relationships. Microbial communities within the oropharynx grouped into three distinct habitats, which themselves showed no direct influence on the composition of the gut microbiota. Conversely, niches such as the vagina demonstrated little to no decomposition into region-specific interactions. Diverse mechanisms underlay individual interactions, with some such as the co-exclusion of Porphyromonaceae family members and Streptococcus in the subgingival plaque supported by known biochemical dependencies. These differences varied among broad phylogenetic groups as well, with the Bacilli and Fusobacteria, for example, both enriched for exclusion of taxa from other clades. Comparing phylogenetic versus functional similarities among bacteria, we show that dominant commensal taxa (such as Prevotellaceae and Bacteroides in the gut) often compete, while potential pathogens (e.g. Treponema and Prevotella in the dental plaque) are more likely to co-occur in complementary niches. This approach thus serves to open new opportunities for future targeted mechanistic studies of the microbial ecology of the human microbiome."
},
{
"pmid": "23028285",
"title": "Inferring correlation networks from genomic survey data.",
"abstract": "High-throughput sequencing based techniques, such as 16S rRNA gene profiling, have the potential to elucidate the complex inner workings of natural microbial communities - be they from the world's oceans or the human gut. A key step in exploring such data is the identification of dependencies between members of these communities, which is commonly achieved by correlation analysis. However, it has been known since the days of Karl Pearson that the analysis of the type of data generated by such techniques (referred to as compositional data) can produce unreliable results since the observed data take the form of relative fractions of genes or species, rather than their absolute abundances. Using simulated and real data from the Human Microbiome Project, we show that such compositional effects can be widespread and severe: in some real data sets many of the correlations among taxa can be artifactual, and true correlations may even appear with opposite sign. Additionally, we show that community diversity is the key factor that modulates the acuteness of such compositional effects, and develop a new approach, called SparCC (available at https://bitbucket.org/yonatanf/sparcc), which is capable of estimating correlation values from compositional data. To illustrate a potential application of SparCC, we infer a rich ecological network connecting hundreds of interacting species across 18 sites on the human body. Using the SparCC network as a reference, we estimated that the standard approach yields 3 spurious species-species interactions for each true interaction and misses 60% of the true interactions in the human microbiome data, and, as predicted, most of the erroneous links are found in the samples with the lowest diversity."
},
{
"pmid": "24629344",
"title": "The treatment-naive microbiome in new-onset Crohn's disease.",
"abstract": "Inflammatory bowel diseases (IBDs), including Crohn's disease (CD), are genetically linked to host pathways that implicate an underlying role for aberrant immune responses to intestinal microbiota. However, patterns of gut microbiome dysbiosis in IBD patients are inconsistent among published studies. Using samples from multiple gastrointestinal locations collected prior to treatment in new-onset cases, we studied the microbiome in the largest pediatric CD cohort to date. An axis defined by an increased abundance in bacteria which include Enterobacteriaceae, Pasteurellacaea, Veillonellaceae, and Fusobacteriaceae, and decreased abundance in Erysipelotrichales, Bacteroidales, and Clostridiales, correlates strongly with disease status. Microbiome comparison between CD patients with and without antibiotic exposure indicates that antibiotic use amplifies the microbial dysbiosis associated with CD. Comparing the microbial signatures between the ileum, the rectum, and fecal samples indicates that at this early stage of disease, assessing the rectal mucosal-associated microbiome offers unique potential for convenient and early diagnosis of CD."
},
{
"pmid": "27383984",
"title": "Microbiome-wide association studies link dynamic microbial consortia to disease.",
"abstract": "Rapid advances in DNA sequencing, metabolomics, proteomics and computational tools are dramatically increasing access to the microbiome and identification of its links with disease. In particular, time-series studies and multiple molecular perspectives are facilitating microbiome-wide association studies, which are analogous to genome-wide association studies. Early findings point to actionable outcomes of microbiome-wide association studies, although their clinical application has yet to be approved. An appreciation of the complexity of interactions among the microbiome and the host's diet, chemistry and health, as well as determining the frequency of observations that are needed to capture and integrate this dynamic interface, is paramount for developing precision diagnostics and therapies that are based on the microbiome."
},
{
"pmid": "29187837",
"title": "Microbiome Datasets Are Compositional: And This Is Not Optional.",
"abstract": "Datasets collected by high-throughput sequencing (HTS) of 16S rRNA gene amplimers, metagenomes or metatranscriptomes are commonplace and being used to study human disease states, ecological differences between sites, and the built environment. There is increasing awareness that microbiome datasets generated by HTS are compositional because they have an arbitrary total imposed by the instrument. However, many investigators are either unaware of this or assume specific properties of the compositional data. The purpose of this review is to alert investigators to the dangers inherent in ignoring the compositional nature of the data, and point out that HTS datasets derived from microbiome studies can and should be treated as compositions at all stages of analysis. We briefly introduce compositional data, illustrate the pathologies that occur when compositional data are analyzed inappropriately, and finally give guidance and point to resources and examples for the analysis of microbiome datasets using compositional data analysis."
},
{
"pmid": "30154767",
"title": "The Human Gut Microbiome - A Potential Controller of Wellness and Disease.",
"abstract": "Interest toward the human microbiome, particularly gut microbiome has flourished in recent decades owing to the rapidly advancing sequence-based screening and humanized gnotobiotic model in interrogating the dynamic operations of commensal microbiota. Although this field is still at a very preliminary stage, whereby the functional properties of the complex gut microbiome remain less understood, several promising findings have been documented and exhibit great potential toward revolutionizing disease etiology and medical treatments. In this review, the interactions between gut microbiota and the host have been focused on, to provide an overview of the role of gut microbiota and their unique metabolites in conferring host protection against invading pathogen, regulation of diverse host physiological functions including metabolism, development and homeostasis of immunity and the nervous system. We elaborate on how gut microbial imbalance (dysbiosis) may lead to dysfunction of host machineries, thereby contributing to pathogenesis and/or progression toward a broad spectrum of diseases. Some of the most notable diseases namely Clostridium difficile infection (infectious disease), inflammatory bowel disease (intestinal immune-mediated disease), celiac disease (multisystemic autoimmune disorder), obesity (metabolic disease), colorectal cancer, and autism spectrum disorder (neuropsychiatric disorder) have been discussed and delineated along with recent findings. Novel therapies derived from microbiome studies such as fecal microbiota transplantation, probiotic and prebiotics to target associated diseases have been reviewed to introduce the idea of how certain disease symptoms can be ameliorated through dysbiosis correction, thus revealing a new scientific approach toward disease treatment. Toward the end of this review, several research gaps and limitations have been described along with suggested future studies to overcome the current research lacunae. Despite the ongoing debate on whether gut microbiome plays a role in the above-mentioned diseases, we have in this review, gathered evidence showing a potentially far more complex link beyond the unidirectional cause-and-effect relationship between them."
},
{
"pmid": "29850911",
"title": "A graph-embedded deep feedforward network for disease outcome classification and feature selection using gene expression data.",
"abstract": "Motivation\nGene expression data represents a unique challenge in predictive model building, because of the small number of samples (n) compared with the huge amount of features (p). This 'n≪p' property has hampered application of deep learning techniques for disease outcome classification. Sparse learning by incorporating external gene network information could be a potential solution to this issue. Still, the problem is very challenging because (i) there are tens of thousands of features and only hundreds of training samples, (ii) the scale-free structure of the gene network is unfriendly to the setup of convolutional neural networks.\n\n\nResults\nTo address these issues and build a robust classification model, we propose the Graph-Embedded Deep Feedforward Networks (GEDFN), to integrate external relational information of features into the deep neural network architecture. The method is able to achieve sparse connection between network layers to prevent overfitting. To validate the method's capability, we conducted both simulation experiments and real data analysis using a breast invasive carcinoma RNA-seq dataset and a kidney renal clear cell carcinoma RNA-seq dataset from The Cancer Genome Atlas. The resulting high classification accuracy and easily interpretable feature selection results suggest the method is a useful addition to the current graph-guided classification models and feature selection procedures.\n\n\nAvailability and implementation\nThe method is available at https://github.com/yunchuankong/GEDFN.\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "25950956",
"title": "Sparse and compositionally robust inference of microbial ecological networks.",
"abstract": "16S ribosomal RNA (rRNA) gene and other environmental sequencing techniques provide snapshots of microbial communities, revealing phylogeny and the abundances of microbial populations across diverse ecosystems. While changes in microbial community structure are demonstrably associated with certain environmental conditions (from metabolic and immunological health in mammals to ecological stability in soils and oceans), identification of underlying mechanisms requires new statistical tools, as these datasets present several technical challenges. First, the abundances of microbial operational taxonomic units (OTUs) from amplicon-based datasets are compositional. Counts are normalized to the total number of counts in the sample. Thus, microbial abundances are not independent, and traditional statistical metrics (e.g., correlation) for the detection of OTU-OTU relationships can lead to spurious results. Secondly, microbial sequencing-based studies typically measure hundreds of OTUs on only tens to hundreds of samples; thus, inference of OTU-OTU association networks is severely under-powered, and additional information (or assumptions) are required for accurate inference. Here, we present SPIEC-EASI (SParse InversE Covariance Estimation for Ecological Association Inference), a statistical method for the inference of microbial ecological networks from amplicon sequencing datasets that addresses both of these issues. SPIEC-EASI combines data transformations developed for compositional data analysis with a graphical model inference framework that assumes the underlying ecological association network is sparse. To reconstruct the network, SPIEC-EASI relies on algorithms for sparse neighborhood and inverse covariance selection. To provide a synthetic benchmark in the absence of an experimentally validated gold-standard network, SPIEC-EASI is accompanied by a set of computational tools to generate OTU count data from a set of diverse underlying network topologies. SPIEC-EASI outperforms state-of-the-art methods to recover edges and network properties on synthetic data under a variety of scenarios. SPIEC-EASI also reproducibly predicts previously unknown microbial associations using data from the American Gut project."
},
{
"pmid": "27916383",
"title": "Disentangling Interactions in the Microbiome: A Network Perspective.",
"abstract": "Microbiota are now widely recognized as being central players in the health of all organisms and ecosystems, and subsequently have been the subject of intense study. However, analyzing and converting microbiome data into meaningful biological insights remain very challenging. In this review, we highlight recent advances in network theory and their applicability to microbiome research. We discuss emerging graph theoretical concepts and approaches used in other research disciplines and demonstrate how they are well suited for enhancing our understanding of the higher-order interactions that occur within microbiomes. Network-based analytical approaches have the potential to help disentangle complex polymicrobial and microbe-host interactions, and thereby further the applicability of microbiome research to personalized medicine, public health, environmental and industrial applications, and agriculture."
},
{
"pmid": "26017442",
"title": "Deep learning.",
"abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech."
},
{
"pmid": "27095192",
"title": "Interactive tree of life (iTOL) v3: an online tool for the display and annotation of phylogenetic and other trees.",
"abstract": "Interactive Tree Of Life (http://itol.embl.de) is a web-based tool for the display, manipulation and annotation of phylogenetic trees. It is freely available and open to everyone. The current version was completely redesigned and rewritten, utilizing current web technologies for speedy and streamlined processing. Numerous new features were introduced and several new data types are now supported. Trees with up to 100,000 leaves can now be efficiently displayed. Full interactive control over precise positioning of various annotation features and an unlimited number of datasets allow the easy creation of complex tree visualizations. iTOL 3 is the first tool which supports direct visualization of the recently proposed phylogenetic placements format. Finally, iTOL's account system has been redesigned to simplify the management of trees in user-defined workspaces and projects, as it is heavily used and currently handles already more than 500,000 trees from more than 10,000 individual users."
},
{
"pmid": "28953883",
"title": "Strains, functions and dynamics in the expanded Human Microbiome Project.",
"abstract": "The characterization of baseline microbial and functional diversity in the human microbiome has enabled studies of microbiome-related disease, diversity, biogeography, and molecular function. The National Institutes of Health Human Microbiome Project has provided one of the broadest such characterizations so far. Here we introduce a second wave of data from the study, comprising 1,631 new metagenomes (2,355 total) targeting diverse body sites with multiple time points in 265 individuals. We applied updated profiling and assembly methods to provide new characterizations of microbiome personalization. Strain identification revealed subspecies clades specific to body sites; it also quantified species with phylogenetic diversity under-represented in isolate genomes. Body-wide functional profiling classified pathways into universal, human-enriched, and body site-enriched subsets. Finally, temporal analysis decomposed microbial variation into rapidly variable, moderately variable, and stable subsets. This study furthers our knowledge of baseline human microbial diversity and enables an understanding of personalized microbiome function and dynamics."
},
{
"pmid": "25420450",
"title": "The gut microbiota and inflammatory bowel disease.",
"abstract": "Inflammatory bowel disease (IBD) is a chronic and relapsing inflammatory disorder of the gut. Although the precise cause of IBD remains unknown, the most accepted hypothesis of IBD pathogenesis to date is that an aberrant immune response against the gut microbiota is triggered by environmental factors in a genetically susceptible host. The advancement of next-generation sequencing technology has enabled identification of various alterations of the gut microbiota composition in IBD. While some results related to dysbiosis in IBD are different between studies owing to variations of sample type, method of investigation, patient profiles, and medication, the most consistent observation in IBD is reduced bacterial diversity, a decrease of Firmicutes, and an increase of Proteobacteria. It has not yet been established how dysbiosis contributes to intestinal inflammation. Many of the known IBD susceptibility genes are associated with recognition and processing of bacteria, which is consistent with a role of the gut microbiota in the pathogenesis of IBD. A number of trials have shown that therapies correcting dysbiosis, including fecal microbiota transplantation and probiotics, are promising in IBD."
},
{
"pmid": "23638278",
"title": "Statistics corner: A guide to appropriate use of correlation coefficient in medical research.",
"abstract": "Correlation is a statistical method used to assess a possible linear association between two continuous variables. It is simple both to calculate and to interpret. However, misuse of correlation is so common among researchers that some statisticians have wished that the method had never been devised at all. The aim of this article is to provide a guide to appropriate use of correlation in medical research and to highlight some misuse. Examples of the applications of the correlation coefficient have been provided using data from statistical simulations as well as real data. Rule of thumb for interpreting size of a correlation coefficient has been provided."
},
{
"pmid": "19460890",
"title": "Predictor correlation impacts machine learning algorithms: implications for genomic studies.",
"abstract": "MOTIVATION\nThe advent of high-throughput genomics has produced studies with large numbers of predictors (e.g. genome-wide association, microarray studies). Machine learning algorithms (MLAs) are a computationally efficient way to identify phenotype-associated variables in high-dimensional data. There are important results from mathematical theory and numerous practical results documenting their value. One attractive feature of MLAs is that many operate in a fully multivariate environment, allowing for small-importance variables to be included when they act cooperatively. However, certain properties of MLAs under conditions common in genomic-related data have not been well-studied--in particular, correlations among predictors pose a problem.\n\n\nRESULTS\nUsing extensive simulation, we showed considering correlation within predictors is crucial in making valid inferences using variable importance measures (VIMs) from three MLAs: random forest (RF), conditional inference forest (CIF) and Monte Carlo logic regression (MCLR). Using a case-control illustration, we showed that the RF VIMs--even permutation-based--were less able to detect association than other algorithms at effect sizes encountered in complex disease studies. This reduction occurred when 'causal' predictors were correlated with other predictors, and was sharpest when RF tree building used the Gini index. Indeed, RF Gini VIMs are biased under correlation, dependent on predictor correlation strength/number and over-trained to random fluctuations in data when tree terminal node size was small. Permutation-based VIM distributions were less variable for correlated predictors and are unbiased, thus may be preferred when predictors are correlated. MLAs are a powerful tool for high-dimensional data analysis, but well-considered use of algorithms is necessary to draw valid conclusions.\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "28179361",
"title": "A microbial signature for Crohn's disease.",
"abstract": "OBJECTIVE\nA decade of microbiome studies has linked IBD to an alteration in the gut microbial community of genetically predisposed subjects. However, existing profiles of gut microbiome dysbiosis in adult IBD patients are inconsistent among published studies, and did not allow the identification of microbial signatures for CD and UC. Here, we aimed to compare the faecal microbiome of CD with patients having UC and with non-IBD subjects in a longitudinal study.\n\n\nDESIGN\nWe analysed a cohort of 2045 non-IBD and IBD faecal samples from four countries (Spain, Belgium, the UK and Germany), applied a 16S rRNA sequencing approach and analysed a total dataset of 115 million sequences.\n\n\nRESULTS\nIn the Spanish cohort, dysbiosis was found significantly greater in patients with CD than with UC, as shown by a more reduced diversity, a less stable microbial community and eight microbial groups were proposed as a specific microbial signature for CD. Tested against the whole cohort, the signature achieved an overall sensitivity of 80% and a specificity of 94%, 94%, 89% and 91% for the detection of CD versus healthy controls, patients with anorexia, IBS and UC, respectively.\n\n\nCONCLUSIONS\nAlthough UC and CD share many epidemiologic, immunologic, therapeutic and clinical features, our results showed that they are two distinct subtypes of IBD at the microbiome level. For the first time, we are proposing microbiomarkers to discriminate between CD and non-CD independently of geographical regions."
},
{
"pmid": "27400279",
"title": "Machine Learning Meta-analysis of Large Metagenomic Datasets: Tools and Biological Insights.",
"abstract": "Shotgun metagenomic analysis of the human associated microbiome provides a rich set of microbial features for prediction and biomarker discovery in the context of human diseases and health conditions. However, the use of such high-resolution microbial features presents new challenges, and validated computational tools for learning tasks are lacking. Moreover, classification rules have scarcely been validated in independent studies, posing questions about the generality and generalization of disease-predictive models across cohorts. In this paper, we comprehensively assess approaches to metagenomics-based prediction tasks and for quantitative assessment of the strength of potential microbiome-phenotype associations. We develop a computational framework for prediction tasks using quantitative microbiome profiles, including species-level relative abundances and presence of strain-specific markers. A comprehensive meta-analysis, with particular emphasis on generalization across cohorts, was performed in a collection of 2424 publicly available metagenomic samples from eight large-scale studies. Cross-validation revealed good disease-prediction capabilities, which were in general improved by feature selection and use of strain-specific markers instead of species-level taxonomic abundance. In cross-study analysis, models transferred between studies were in some cases less accurate than models tested by within-study cross-validation. Interestingly, the addition of healthy (control) samples from other studies to training sets improved disease prediction capabilities. Some microbial species (most notably Streptococcus anginosus) seem to characterize general dysbiotic states of the microbiome rather than connections with a specific disease. Our results in modelling features of the \"healthy\" microbiome can be considered a first step toward defining general microbial dysbiosis. The software framework, microbiome profiles, and metadata for thousands of samples are publicly available at http://segatalab.cibio.unitn.it/tools/metaml."
},
{
"pmid": "24076764",
"title": "Differential abundance analysis for microbial marker-gene surveys.",
"abstract": "We introduce a methodology to assess differential abundance in sparse high-throughput microbial marker-gene survey data. Our approach, implemented in the metagenomeSeq Bioconductor package, relies on a novel normalization technique and a statistical model that accounts for undersampling-a common feature of large-scale marker-gene studies. Using simulated data and several published microbiota data sets, we show that metagenomeSeq outperforms the tools currently used in this field."
},
{
"pmid": "26718401",
"title": "Analysis of the microbiome: Advantages of whole genome shotgun versus 16S amplicon sequencing.",
"abstract": "The human microbiome has emerged as a major player in regulating human health and disease. Translational studies of the microbiome have the potential to indicate clinical applications such as fecal transplants and probiotics. However, one major issue is accurate identification of microbes constituting the microbiota. Studies of the microbiome have frequently utilized sequencing of the conserved 16S ribosomal RNA (rRNA) gene. We present a comparative study of an alternative approach using whole genome shotgun sequencing (WGS). In the present study, we analyzed the human fecal microbiome compiling a total of 194.1 × 10(6) reads from a single sample using multiple sequencing methods and platforms. Specifically, after establishing the reproducibility of our methods with extensive multiplexing, we compared: 1) The 16S rRNA amplicon versus the WGS method, 2) the Illumina HiSeq versus MiSeq platforms, 3) the analysis of reads versus de novo assembled contigs, and 4) the effect of shorter versus longer reads. Our study demonstrates that whole genome shotgun sequencing has multiple advantages compared with the 16S amplicon method including enhanced detection of bacterial species, increased detection of diversity and increased prediction of genes. In addition, increased length, either due to longer reads or the assembly of contigs, improved the accuracy of species detection."
},
{
"pmid": "22174245",
"title": "Detecting novel associations in large data sets.",
"abstract": "Identifying interesting relationships between pairs of variables in large data sets is increasingly important. Here, we present a measure of dependence for two-variable relationships: the maximal information coefficient (MIC). MIC captures a wide range of associations both functional and not, and for functional relationships provides a score that roughly equals the coefficient of determination (R(2)) of the data relative to the regression function. MIC belongs to a larger class of maximal information-based nonparametric exploration (MINE) statistics for identifying and classifying relationships. We apply MIC and MINE to data sets in global health, gene expression, major-league baseball, and the human gut microbiota and identify known and novel relationships."
},
{
"pmid": "17943116",
"title": "The human microbiome project.",
"abstract": "A strategy to understand the microbial components of the human genetic and metabolic landscape and how they contribute to normal physiology and predisposition to disease."
},
{
"pmid": "27396567",
"title": "Metagenome-wide association studies: fine-mining the microbiome.",
"abstract": "Metagenome-wide association studies (MWAS) have enabled the high-resolution investigation of associations between the human microbiome and several complex diseases, including type 2 diabetes, obesity, liver cirrhosis, colorectal cancer and rheumatoid arthritis. The associations that can be identified by MWAS are not limited to the identification of taxa that are more or less abundant, as is the case with taxonomic approaches, but additionally include the identification of microbial functions that are enriched or depleted. In this Review, we summarize recent findings from MWAS and discuss how these findings might inform the prevention, diagnosis and treatment of human disease in the future. Furthermore, we highlight the need to better characterize the biology of many of the bacteria that are found in the human microbiota as an essential step in understanding how bacterial strains that have been identified by MWAS are associated with disease."
},
{
"pmid": "26905627",
"title": "Correlation detection strategies in microbial data sets vary widely in sensitivity and precision.",
"abstract": "Disruption of healthy microbial communities has been linked to numerous diseases, yet microbial interactions are little understood. This is due in part to the large number of bacteria, and the much larger number of interactions (easily in the millions), making experimental investigation very difficult at best and necessitating the nascent field of computational exploration through microbial correlation networks. We benchmark the performance of eight correlation techniques on simulated and real data in response to challenges specific to microbiome studies: fractional sampling of ribosomal RNA sequences, uneven sampling depths, rare microbes and a high proportion of zero counts. Also tested is the ability to distinguish signals from noise, and detect a range of ecological and time-series relationships. Finally, we provide specific recommendations for correlation technique usage. Although some methods perform better than others, there is still considerable need for improvement in current techniques."
},
{
"pmid": "28253908",
"title": "Normalization and microbial differential abundance strategies depend upon data characteristics.",
"abstract": "BACKGROUND\nData from 16S ribosomal RNA (rRNA) amplicon sequencing present challenges to ecological and statistical interpretation. In particular, library sizes often vary over several ranges of magnitude, and the data contains many zeros. Although we are typically interested in comparing relative abundance of taxa in the ecosystem of two or more groups, we can only measure the taxon relative abundance in specimens obtained from the ecosystems. Because the comparison of taxon relative abundance in the specimen is not equivalent to the comparison of taxon relative abundance in the ecosystems, this presents a special challenge. Second, because the relative abundance of taxa in the specimen (as well as in the ecosystem) sum to 1, these are compositional data. Because the compositional data are constrained by the simplex (sum to 1) and are not unconstrained in the Euclidean space, many standard methods of analysis are not applicable. Here, we evaluate how these challenges impact the performance of existing normalization methods and differential abundance analyses.\n\n\nRESULTS\nEffects on normalization: Most normalization methods enable successful clustering of samples according to biological origin when the groups differ substantially in their overall microbial composition. Rarefying more clearly clusters samples according to biological origin than other normalization techniques do for ordination metrics based on presence or absence. Alternate normalization measures are potentially vulnerable to artifacts due to library size. Effects on differential abundance testing: We build on a previous work to evaluate seven proposed statistical methods using rarefied as well as raw data. Our simulation studies suggest that the false discovery rates of many differential abundance-testing methods are not increased by rarefying itself, although of course rarefying results in a loss of sensitivity due to elimination of a portion of available data. For groups with large (~10×) differences in the average library size, rarefying lowers the false discovery rate. DESeq2, without addition of a constant, increased sensitivity on smaller datasets (<20 samples per group) but tends towards a higher false discovery rate with more samples, very uneven (~10×) library sizes, and/or compositional effects. For drawing inferences regarding taxon abundance in the ecosystem, analysis of composition of microbiomes (ANCOM) is not only very sensitive (for >20 samples per group) but also critically the only method tested that has a good control of false discovery rate.\n\n\nCONCLUSIONS\nThese findings guide which normalization and differential abundance techniques to use based on the data characteristics of a given study."
}
] |
Frontiers in Neurorobotics | 31824277 | PMC6883290 | 10.3389/fnbot.2019.00096 | Open-Environment Robotic Acoustic Perception for Object Recognition | Object recognition inside containers is extremely difficult for robots. Dynamic audio signals are more responsive to an object's internal properties, so we adopt a dynamic contact method to collect acoustic signals from the container and recognize the objects it holds. Traditional machine learning recognizes objects in a closed environment, which does not match practical applications: in real life, the set of objects a robot explores changes dynamically, so methods are needed that can recognize all classes of objects in an open environment. We propose a framework for recognizing objects in containers from acoustic signals in an open environment and then design a kernel k nearest neighbor algorithm for the open environment (OSKKNN). We collect an acoustic dataset and verify the feasibility of the method on it, which greatly advances the recognition of objects in open environments and also shows that acoustic signals are of real value for recognizing objects in containers. | 2. Related Work
Recognizing objects contained in containers, and objects of different weights in containers, is very challenging, and studies of this problem are few. When visual and tactile sensing is constrained, the perceptual information generated by simple static contact is also insufficient for recognizing objects in containers, so it is natural to use dynamic contact methods to obtain information about the objects. Berthouze et al. (2007) and Takamuku et al. (2008) pointed out that dynamic contact (shaking) is more conducive to recognizing objects than static contact (grasping), since it is not easily affected by the shape, size and color of objects. Shaking an object produces a vibration sound signal that can be collected by a microphone, and several studies have used shaking to collect sound signals (Nakamura et al., 2009, 2013; Araki et al., 2011; Taniguchi et al., 2018).
Different ways of interacting with objects and acquiring sound information for object recognition have been studied as follows. Clarke et al. (2018) used shaking and pouring actions to obtain the sounds of granular objects and combined them with deep learning to recognize five different types of granular objects. Luo et al. (2017) used a pen to strike objects and collect sound information, used Mel-Frequency Cepstral Coefficients (MFCCs) and their first- and second-order differences as features, and applied stacked denoising autoencoders to train a deep learning model for object recognition. Sinapov et al. (2009) and Sinapov and Stoytchev (2009) used humanoid robots to perform five interactive behaviors (grasp, shake, put, push, knock) on 36 common household objects (such as cups, balls, boxes and cans) and applied the k nearest neighbor algorithm (KNN), the support vector machine (SVM) and unsupervised hierarchical clustering to recognize the objects. Sinapov et al. (2011) collected the robot's joint torques and sound signals and combined them with KNN to recognize 50 common household objects. Sinapov et al. (2014) and Schenck et al. (2014) used ten kinds of interactions (such as grasp, shake, push and lift) to detect four classes of large-particle objects of three colors and three weights.
They learned categories describing not only individual objects but also pairs and groups of objects, used the C4.5 decision tree algorithm for classification, and let the robot learn new classes through a similarity-based measure. Chen et al. (2016) tested four kinds of containers (glass, plastic, cardboard and soft paper) with 12 kinds of objects, collected sound signals through shaking, and used the Gaussian naive Bayes algorithm (GNB), the support vector machine (SVM) and K-means clustering to classify and recognize the objects, showing that shaking sounds can be used for object recognition in many settings such as shopping malls, workshops and homes. Eppe et al. (2018) used a humanoid robot to perform auditory exploration of a group of visually indistinguishable plastic containers filled with different amounts of different materials, showing that deep recursive neural architectures can learn to distinguish individual materials and estimate their weight.
The studies above focus on object recognition in various settings under a closed-environment assumption and do not address recognizing objects in specific applications in open environments. Some studies do address recognition in open environments (Bendale and Boult, 2016; Bapst et al., 2017; Gunther et al., 2017; Moeini et al., 2017; Bao et al., 2018); they recognize known classes and detect unknown classes, but they do not go on to recognize all classes. In the real world, the objects a robot touches are constantly changing. How can a robot system behave like a human, separating unknown objects from known ones when it encounters them and then relearning the relevant knowledge of those unknown objects? It is therefore important to develop a systematic framework in which a robot can detect objects of unknown classes and eventually recognize all objects by continuously learning the properties of those unknown classes. This paper studies the use of sound to recognize household food objects in containers and focuses on recognizing objects of all classes in containers using sound in an open environment.
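As a concrete, hedged illustration of the open-environment setting discussed above, the sketch below summarizes each shaking sound with MFCC statistics and applies a nearest-neighbor rule that rejects a clip as unknown when it is far from every training example. It is not the OSKKNN algorithm proposed in this paper; the file names, labels, distance threshold, and the assumption that librosa is available are all ours.

```python
# Illustrative open-set acoustic recognition sketch (not the paper's OSKKNN).
# File names, labels and the rejection threshold are hypothetical.
import numpy as np
import librosa

def mfcc_features(wav_path, n_mfcc=13):
    """Summarize a recording by the mean and std of its MFCCs."""
    signal, sr = librosa.load(wav_path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def open_set_nearest_neighbor(query, train_feats, train_labels, reject_dist=25.0):
    """Return the nearest training label, or 'unknown' if everything is too far."""
    dists = np.linalg.norm(train_feats - query, axis=1)
    nearest = int(np.argmin(dists))
    if dists[nearest] > reject_dist:
        return "unknown"        # candidate new class: store it and relearn later
    return train_labels[nearest]

# Hypothetical usage with two known classes and one query clip.
train_feats = np.stack([mfcc_features(p) for p in ["rice.wav", "beans.wav"]])
train_labels = ["rice", "beans"]
query = mfcc_features("mystery.wav")
print(open_set_nearest_neighbor(query, train_feats, train_labels))
```

In practice the rejection threshold would be tuned on held-out data, and samples flagged as unknown would be accumulated and relabeled so that the classifier can eventually recognize all classes, which is the behavior the proposed framework targets. | [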
"21787100",
"29495409",
"15971691",
"21460910",
"21339529",
"29872389"
] | [
{
"pmid": "21787100",
"title": "Quantity judgments of auditory and visual stimuli by chimpanzees (Pan troglodytes).",
"abstract": "Many species can choose between two visual sets of stimuli on the basis of quantity. This is true when sets are both visible, or are presented one set at a time or even one item at a time. However, we know comparatively little about how well nonhuman animals can compare auditory quantities. Here, three chimpanzees (Pan troglodytes) chose between two sets of food items when they only heard each item fall into different containers rather than seeing those items. This method prevented the chimpanzees from summing the amount of visible food they saw because there were no visual cues. Chimpanzees performed well, and their performance matched that of previous experiments with regard to obeying Weber's law. They also performed well with comparisons between a sequentially presented auditory set and a fully visible set, demonstrating that duration of presentation was not being used as a cue. In addition, they accommodated empty sets into these judgments, although not perfectly. Thus, chimpanzees can judge auditory quantities in flexible ways that show many similarities to how they compare visual quantities."
},
{
"pmid": "29495409",
"title": "Enhancing Perception with Tactile Object Recognition in Adaptive Grippers for Human-Robot Interaction.",
"abstract": "The use of tactile perception can help first response robotic teams in disaster scenarios, where visibility conditions are often reduced due to the presence of dust, mud, or smoke, distinguishing human limbs from other objects with similar shapes. Here, the integration of the tactile sensor in adaptive grippers is evaluated, measuring the performance of an object recognition task based on deep convolutional neural networks (DCNNs) using a flexible sensor mounted in adaptive grippers. A total of 15 classes with 50 tactile images each were trained, including human body parts and common environment objects, in semi-rigid and flexible adaptive grippers based on the fin ray effect. The classifier was compared against the rigid configuration and a support vector machine classifier (SVM). Finally, a two-level output network has been proposed to provide both object-type recognition and human/non-human classification. Sensors in adaptive grippers have a higher number of non-null tactels (up to 37% more), with a lower mean of pressure values (up to 72% less) than when using a rigid sensor, with a softer grip, which is needed in physical human-robot interaction (pHRI). A semi-rigid implementation with 95.13% object recognition rate was chosen, even though the human/non-human classification had better results (98.78%) with a rigid sensor."
},
{
"pmid": "15971691",
"title": "Do we hear size or sound? Balls dropped on plates.",
"abstract": "The aim of this study is to examine whether it is possible to recover directly the size of an object from the sound of an impact. Specifically, the study is designed to investigate whether listeners can tell the size of a ball from the sound when it is dropped on plates of different diameters (on one, two, or three plates in Experiments 1, 2, and 3, respectively). In this paradigm, most of the sound produced is from the plate rather than the ball. Listeners were told neither how many different balls or plates were used nor the materials of the balls and plates. Although listeners provided reasonable ball size estimates, their judgments were influenced by the size of the plate: Balls were judged to be larger when dropped on larger plates. Moreover, listeners were generally unable to recognize either ball and plate materials or the number of plates used in Experiments 2 and 3. Finally, various acoustic properties of the sounds are shown to be correlated with listeners' judgments."
},
{
"pmid": "21460910",
"title": "Sparsity-motivated automatic target recognition.",
"abstract": "We present an automatic target recognition algorithm using the recently developed theory of sparse representations and compressive sensing. We show how sparsity can be helpful for efficient utilization of data for target recognition. We verify the efficacy of the proposed algorithm in terms of the recognition rate and confusion matrices on the well known Comanche (Boeing-Sikorsky, USA) forward-looking IR data set consisting of ten different military targets at different orientations."
},
{
"pmid": "21339529",
"title": "Secure and Robust Iris Recognition Using Random Projections and Sparse Representations.",
"abstract": "Noncontact biometrics such as face and iris have additional benefits over contact-based biometrics such as fingerprint and hand geometry. However, three important challenges need to be addressed in a noncontact biometrics-based authentication system: ability to handle unconstrained acquisition, robust and accurate matching, and privacy enhancement without compromising security. In this paper, we propose a unified framework based on random projections and sparse representations, that can simultaneously address all three issues mentioned above in relation to iris biometrics. Our proposed quality measure can handle segmentation errors and a wide variety of possible artifacts during iris acquisition. We demonstrate how the proposed approach can be easily extended to handle alignment variations and recognition from iris videos, resulting in a robust and accurate system. The proposed approach includes enhancements to privacy and security by providing ways to create cancelable iris templates. Results on public data sets show significant benefits of the proposed approach."
},
{
"pmid": "29872389",
"title": "Multimodal Hierarchical Dirichlet Process-Based Active Perception by a Robot.",
"abstract": "In this paper, we propose an active perception method for recognizing object categories based on the multimodal hierarchical Dirichlet process (MHDP). The MHDP enables a robot to form object categories using multimodal information, e.g., visual, auditory, and haptic information, which can be observed by performing actions on an object. However, performing many actions on a target object requires a long time. In a real-time scenario, i.e., when the time is limited, the robot has to determine the set of actions that is most effective for recognizing a target object. We propose an active perception for MHDP method that uses the information gain (IG) maximization criterion and lazy greedy algorithm. We show that the IG maximization criterion is optimal in the sense that the criterion is equivalent to a minimization of the expected Kullback-Leibler divergence between a final recognition state and the recognition state after the next set of actions. However, a straightforward calculation of IG is practically impossible. Therefore, we derive a Monte Carlo approximation method for IG by making use of a property of the MHDP. We also show that the IG has submodular and non-decreasing properties as a set function because of the structure of the graphical model of the MHDP. Therefore, the IG maximization problem is reduced to a submodular maximization problem. This means that greedy and lazy greedy algorithms are effective and have a theoretical justification for their performance. We conducted an experiment using an upper-torso humanoid robot and a second one using synthetic data. The experimental results show that the method enables the robot to select a set of actions that allow it to recognize target objects quickly and accurately. The numerical experiment using the synthetic data shows that the proposed method can work appropriately even when the number of actions is large and a set of target objects involves objects categorized into multiple classes. The results support our theoretical outcomes."
}
] |
Frontiers in Genetics | 31824574 | PMC6886371 | 10.3389/fgene.2019.01184 | CircSLNN: Identifying RBP-Binding Sites on circRNAs via Sequence Labeling Neural Networks | The interactions between RNAs and RNA-binding proteins (RBPs) are crucial for understanding post-transcriptional regulation mechanisms. Many computational tools have been developed to automatically predict the binding relationship between RNAs and RBPs. However, most of these methods can only predict the presence or absence of binding sites for a sequence fragment, without providing specific information on the position or length of the binding sites. Moreover, the existing tools focus on the interaction between RBPs and linear RNAs, while binding sites on circular RNAs (circRNAs) have rarely been studied. In this study, we model the prediction of binding sites on RNAs as a sequence labeling problem and propose a new model, circSLNN, to identify the specific locations of RBP-binding sites on circRNAs. CircSLNN is driven by pretrained RNA embedding vectors and a composite labeling model. On our constructed circRNA datasets, our model achieves an average F1 score of 0.790. When we assess the performance on full-length RNA sequences, the proposed model outperforms previous classification-based models by a large margin. | Related Work
Prediction Based on Traditional Machine Learning Methods
The prediction of molecular interactions has been a hot topic in bioinformatics over the past decades. In particular, protein–protein interactions (PPIs) have been well studied because abundant information can be utilized in the prediction, e.g. amino acid sequences, functional domains and gene ontology annotation (Ashburner et al., 2000). Machine learning-based predictors usually consist of two parts, i.e. feature extraction and classification. Similar to PPI prediction, the prediction of RNA–RBP interactions is a typical machine learning problem. However, because RNAs lack rich functional annotation, feature extraction relies mainly on RNA sequences or secondary structures. For some types of RNAs, such as circRNAs, whose structures are constrained to covalently closed continuous loops, effective feature extraction from sequences is even more important. Traditional feature representations of RNA sequences include k-tuple composition, pseudo k-tuple composition (PseKNC) (Chen et al., 2013), and so on. These features are discrete vectors that work with shallow learning models; for instance, Muppirala et al. (2011) used SVMs and random forests to predict RNA–RBP interactions. With the rise of deep learning, sequence encoding schemes and deep neural networks have emerged and achieved better prediction performance.
Prediction Based on Deep Neural Networks
DeepBind (Alipanahi et al., 2015) is a pioneering work in developing deep learning models for RNA–RBP interactions. The model is based on a convolutional neural network, which not only improves prediction accuracy but also reveals new sequence patterns in the binding regions. Later, Pan et al. released a series of computational tools, including iDeep (Pan and Shen, 2017), iDeepS (Pan et al., 2018) and iDeepE (Pan and Shen, 2018), which differ in their feature representations and model architectures. iDeep utilizes five information sources, i.e. secondary structure, motifs describing conserved sequence regions, CLIP co-binding, region type and sequence information, to extract high-level abstract features via deep learning models. In particular, the sequence information is processed by a CNN (Krizhevsky et al., 2012), while the other four data sources are processed by deep belief networks (Zou and Conzen, 2004). Compared with iDeep, iDeepS reduces the number of data sources and retains only sequence and secondary structure information; the authors added a bi-directional long short-term memory network (BiLSTM) (Schuster and Paliwal, 1997) to integrate the data, which better preserves contextual information based on the relative positions of nucleotides.
Generally, the performance of deep learning-based methods depends on informative feature representations and powerful model architectures. In this study, we explore both parts to improve prediction accuracy.
"26213851",
"10802651",
"17853436",
"23303794",
"26669964",
"18197166",
"21333748",
"17932917",
"26017442",
"16731699",
"20483814",
"23446348",
"29722865",
"29970003",
"17360525",
"26873924",
"15308537"
] | [
{
"pmid": "26213851",
"title": "Predicting the sequence specificities of DNA- and RNA-binding proteins by deep learning.",
"abstract": "Knowing the sequence specificities of DNA- and RNA-binding proteins is essential for developing models of the regulatory processes in biological systems and for identifying causal disease variants. Here we show that sequence specificities can be ascertained from experimental data with 'deep learning' techniques, which offer a scalable, flexible and unified computational approach for pattern discovery. Using a diverse array of experimental data and evaluation metrics, we find that deep learning outperforms other state-of-the-art methods, even when training on in vitro data and testing on in vivo data. We call this approach DeepBind and have built a stand-alone software tool that is fully automatic and handles millions of sequences per experiment. Specificities determined by DeepBind are readily visualized as a weighted ensemble of position weight matrices or as a 'mutation map' that indicates how variations affect binding within a specific sequence."
},
{
"pmid": "17853436",
"title": "RNA-protein interactions and control of mRNA stability in neurons.",
"abstract": "In addition to transcription, posttranscriptional mechanisms play a vital role in the control of gene expression. There are multiple levels of posttranscriptional regulation, including mRNA processing, splicing, editing, transport, stability, and translation. Among these, mRNA stability is estimated to control about 5-10% of all human genes. The rate of mRNA decay is regulated by the interaction of cis-acting elements in the transcripts and sequence-specific RNA-binding proteins. One of the most studied cis-acting elements is the AU-rich element (ARE) present in the 3' untranslated region (3'UTR) of several unstable mRNAs. These sequences are targets of many ARE-binding proteins; some of which induce degradation whereas others promote stabilization of the mRNA. Recently, these mechanisms were uncovered in neurons, where they have been associated with different physiological phenomena, from early development and nerve regeneration to learning and memory processes. In this Mini-Review, we briefly discuss the general mechanisms of control of mRNA turnover and present evidence supporting the importance of these mechanisms in the expression of an increasing number of neuronal genes."
},
{
"pmid": "23303794",
"title": "iRSpot-PseDNC: identify recombination spots with pseudo dinucleotide composition.",
"abstract": "Meiotic recombination is an important biological process. As a main driving force of evolution, recombination provides natural new combinations of genetic variations. Rather than randomly occurring across a genome, meiotic recombination takes place in some genomic regions (the so-called 'hotspots') with higher frequencies, and in the other regions (the so-called 'coldspots') with lower frequencies. Therefore, the information of the hotspots and coldspots would provide useful insights for in-depth studying of the mechanism of recombination and the genome evolution process as well. So far, the recombination regions have been mainly determined by experiments, which are both expensive and time-consuming. With the avalanche of genome sequences generated in the postgenomic age, it is highly desired to develop automated methods for rapidly and effectively identifying the recombination regions. In this study, a predictor, called 'iRSpot-PseDNC', was developed for identifying the recombination hotspots and coldspots. In the new predictor, the samples of DNA sequences are formulated by a novel feature vector, the so-called 'pseudo dinucleotide composition' (PseDNC), into which six local DNA structural properties, i.e. three angular parameters (twist, tilt and roll) and three translational parameters (shift, slide and rise), are incorporated. It was observed by the rigorous jackknife test that the overall success rate achieved by iRSpot-PseDNC was >82% in identifying recombination spots in Saccharomyces cerevisiae, indicating the new predictor is promising or at least may become a complementary tool to the existing methods in this area. Although the benchmark data set used to train and test the current method was from S. cerevisiae, the basic approaches can also be extended to deal with all the other genomes. Particularly, it has not escaped our notice that the PseDNC approach can be also used to study many other DNA-related problems. As a user-friendly web-server, iRSpot-PseDNC is freely accessible at http://lin.uestc.edu.cn/server/iRSpot-PseDNC."
},
{
"pmid": "26669964",
"title": "CircInteractome: A web tool for exploring circular RNAs and their interacting proteins and microRNAs.",
"abstract": "Circular RNAs (circRNAs) are widely expressed in animal cells, but their biogenesis and functions are poorly understood. CircRNAs have been shown to act as sponges for miRNAs and may also potentially sponge RNA-binding proteins (RBPs) and are thus predicted to function as robust posttranscriptional regulators of gene expression. The joint analysis of large-scale transcriptome data coupled with computational analyses represents a powerful approach to elucidate possible biological roles of ribonucleoprotein (RNP) complexes. Here, we present a new web tool, CircInteractome (circRNA interactome), for mapping RBP- and miRNA-binding sites on human circRNAs. CircInteractome searches public circRNA, miRNA, and RBP databases to provide bioinformatic analyses of binding sites on circRNAs and additionally analyzes miRNA and RBP sites on junction and junction-flanking sequences. CircInteractome also allows the user the ability to (1) identify potential circRNAs which can act as RBP sponges, (2) design junction-spanning primers for specific detection of circRNAs of interest, (3) design siRNAs for circRNA silencing, and (4) identify potential internal ribosomal entry sites (IRES). In sum, the web tool CircInteractome, freely accessible at http://circinteractome.nia.nih.gov, facilitates the analysis of circRNAs and circRNP biology."
},
{
"pmid": "18197166",
"title": "Mechanisms of post-transcriptional regulation by microRNAs: are the answers in sight?",
"abstract": "MicroRNAs constitute a large family of small, approximately 21-nucleotide-long, non-coding RNAs that have emerged as key post-transcriptional regulators of gene expression in metazoans and plants. In mammals, microRNAs are predicted to control the activity of approximately 30% of all protein-coding genes, and have been shown to participate in the regulation of almost every cellular process investigated so far. By base pairing to mRNAs, microRNAs mediate translational repression or mRNA degradation. This Review summarizes the current understanding of the mechanistic aspects of microRNA-induced repression of translation and discusses some of the controversies regarding different modes of microRNA function."
},
{
"pmid": "21333748",
"title": "RNA-protein interactions in human health and disease.",
"abstract": "It is now clear that the genomes of many organisms encode thousands of large and small non-coding (nc)RNAs. However, relative to the discovery of ncRNAs the functions and mechanisms of ncRNAs remain disproportionately understood. One intriguing observation is that many ncRNAs are found to be associated with protein complexes including those involved in transcription regulation, post-transcriptional silencing, and epigentic regulation. These observations suggest that the functions and mechanisms of many of these ncRNAs may depend on their interactions with various protein complexes within the cell. In this review we discuss well known examples as well as newly emerging evidence of a widespread RNA-protein interactions in distinct biological processes in a wide range of organisms, and highlight the importance of developing new technologies to dissect these interactions. Finally, we propose that mis-regulation of ncRNAs interactions with their protein partners may contribute to human disease, and open up a novel approach to therapeutic interventions."
},
{
"pmid": "17932917",
"title": "Prediction of RNA binding sites in a protein using SVM and PSSM profile.",
"abstract": "RNA-binding proteins (RBPs) play key roles in post-transcriptional control of gene expression, which, along with transcriptional regulation, is a major way to regulate patterns of gene expression during development. Thus, the identification and prediction of RNA binding sites is an important step in comprehensive understanding of how RBPs control organism development. Combining evolutionary information and support vector machine (SVM), we have developed an improved method for predicting RNA binding sites or RNA interacting residues in a protein sequence. The prediction models developed in this study have been trained and tested on 86 RNA binding protein chains and evaluated using fivefold cross validation technique. First, a SVM model was developed that achieved a maximum Matthew's correlation coefficient (MCC) of 0.31. The performance of this SVM model further improved the MCC from 0.31 to 0.45, when multiple sequence alignment in the form of PSSM profiles was used as input to the SVM, which is far better than the maximum MCC achieved by previous methods (0.41) on the same dataset. In addition, SVM models were also developed on an alternative dataset that contained 107 RBP chains. Utilizing PSSM as input information to the SVM, the training/testing on this alternate dataset achieved a maximum MCC of 0.32. Conclusively, the prediction performance of SVM models developed in this study is better than the existing methods on the same datasets. A web server 'Pprint' was also developed for predicting RNA binding residues in a protein sequence which is freely available at http://www.imtech.res.in/raghava/pprint/."
},
{
"pmid": "26017442",
"title": "Deep learning.",
"abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech."
},
{
"pmid": "16731699",
"title": "Cd-hit: a fast program for clustering and comparing large sets of protein or nucleotide sequences.",
"abstract": "In 2001 and 2002, we published two papers (Bioinformatics, 17, 282-283, Bioinformatics, 18, 77-82) describing an ultrafast protein sequence clustering program called cd-hit. This program can efficiently cluster a huge protein database with millions of sequences. However, the applications of the underlying algorithm are not limited to only protein sequences clustering, here we present several new programs using the same algorithm including cd-hit-2d, cd-hit-est and cd-hit-est-2d. Cd-hit-2d compares two protein datasets and reports similar matches between them; cd-hit-est clusters a DNA/RNA sequence database and cd-hit-est-2d compares two nucleotide datasets. All these programs can handle huge datasets with millions of sequences and can be hundreds of times faster than methods based on the popular sequence comparison and database search tools, such as BLAST."
},
{
"pmid": "20483814",
"title": "Prediction of protein-RNA binding sites by a random forest method with combined features.",
"abstract": "MOTIVATION\nProtein-RNA interactions play a key role in a number of biological processes, such as protein synthesis, mRNA processing, mRNA assembly, ribosome function and eukaryotic spliceosomes. As a result, a reliable identification of RNA binding site of a protein is important for functional annotation and site-directed mutagenesis. Accumulated data of experimental protein-RNA interactions reveal that a RNA binding residue with different neighbor amino acids often exhibits different preferences for its RNA partners, which in turn can be assessed by the interacting interdependence of the amino acid fragment and RNA nucleotide.\n\n\nRESULTS\nIn this work, we propose a novel classification method to identify the RNA binding sites in proteins by combining a new interacting feature (interaction propensity) with other sequence- and structure-based features. Specifically, the interaction propensity represents a binding specificity of a protein residue to the interacting RNA nucleotide by considering its two-side neighborhood in a protein residue triplet. The sequence as well as the structure-based features of the residues are combined together to discriminate the interaction propensity of amino acids with RNA. We predict RNA interacting residues in proteins by implementing a well-built random forest classifier. The experiments show that our method is able to detect the annotated protein-RNA interaction sites in a high accuracy. Our method achieves an accuracy of 84.5%, F-measure of 0.85 and AUC of 0.92 prediction of the RNA binding residues for a dataset containing 205 non-homologous RNA binding proteins, and also outperforms several existing RNA binding residue predictors, such as RNABindR, BindN, RNAProB and PPRint, and some alternative machine learning methods, such as support vector machine, naive Bayes and neural network in the comparison study. Furthermore, we provide some biological insights into the roles of sequences and structures in protein-RNA interactions by both evaluating the importance of features for their contributions in predictive accuracy and analyzing the binding patterns of interacting residues.\n\n\nAVAILABILITY\nAll the source data and code are available at http://www.aporc.org/doc/wiki/PRNA or http://www.sysbio.ac.cn/datatools.asp\n\n\nCONTACT\[email protected]\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "23446348",
"title": "Circular RNAs are a large class of animal RNAs with regulatory potency.",
"abstract": "Circular RNAs (circRNAs) in animals are an enigmatic class of RNA with unknown function. To explore circRNAs systematically, we sequenced and computationally analysed human, mouse and nematode RNA. We detected thousands of well-expressed, stable circRNAs, often showing tissue/developmental-stage-specific expression. Sequence analysis indicated important regulatory functions for circRNAs. We found that a human circRNA, antisense to the cerebellar degeneration-related protein 1 transcript (CDR1as), is densely bound by microRNA (miRNA) effector complexes and harbours 63 conserved binding sites for the ancient miRNA miR-7. Further analyses indicated that CDR1as functions to bind miR-7 in neuronal tissues. Human CDR1as expression in zebrafish impaired midbrain development, similar to knocking down miR-7, suggesting that CDR1as is a miRNA antagonist with a miRNA-binding capacity ten times higher than any other known transcript. Together, our data provide evidence that circRNAs form a large class of post-transcriptional regulators. Numerous circRNAs form by head-to-tail splicing of exons, suggesting previously unrecognized regulatory potential of coding sequences."
},
{
"pmid": "29722865",
"title": "Predicting RNA-protein binding sites and motifs through combining local and global deep convolutional neural networks.",
"abstract": "Motivation\nRNA-binding proteins (RBPs) take over 5-10% of the eukaryotic proteome and play key roles in many biological processes, e.g. gene regulation. Experimental detection of RBP binding sites is still time-intensive and high-costly. Instead, computational prediction of the RBP binding sites using patterns learned from existing annotation knowledge is a fast approach. From the biological point of view, the local structure context derived from local sequences will be recognized by specific RBPs. However, in computational modeling using deep learning, to our best knowledge, only global representations of entire RNA sequences are employed. So far, the local sequence information is ignored in the deep model construction process.\n\n\nResults\nIn this study, we present a computational method iDeepE to predict RNA-protein binding sites from RNA sequences by combining global and local convolutional neural networks (CNNs). For the global CNN, we pad the RNA sequences into the same length. For the local CNN, we split a RNA sequence into multiple overlapping fixed-length subsequences, where each subsequence is a signal channel of the whole sequence. Next, we train deep CNNs for multiple subsequences and the padded sequences to learn high-level features, respectively. Finally, the outputs from local and global CNNs are combined to improve the prediction. iDeepE demonstrates a better performance over state-of-the-art methods on two large-scale datasets derived from CLIP-seq. We also find that the local CNN runs 1.8 times faster than the global CNN with comparable performance when using GPUs. Our results show that iDeepE has captured experimentally verified binding motifs.\n\n\nAvailability and implementation\nhttps://github.com/xypan1232/iDeepE.\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "29970003",
"title": "Prediction of RNA-protein sequence and structure binding preferences using deep convolutional and recurrent neural networks.",
"abstract": "BACKGROUND\nRNA regulation is significantly dependent on its binding protein partner, known as the RNA-binding proteins (RBPs). Unfortunately, the binding preferences for most RBPs are still not well characterized. Interdependencies between sequence and secondary structure specificities is challenging for both predicting RBP binding sites and accurate sequence and structure motifs detection.\n\n\nRESULTS\nIn this study, we propose a deep learning-based method, iDeepS, to simultaneously identify the binding sequence and structure motifs from RNA sequences using convolutional neural networks (CNNs) and a bidirectional long short term memory network (BLSTM). We first perform one-hot encoding for both the sequence and predicted secondary structure, to enable subsequent convolution operations. To reveal the hidden binding knowledge from the observed sequences, the CNNs are applied to learn the abstract features. Considering the close relationship between sequence and predicted structures, we use the BLSTM to capture possible long range dependencies between binding sequence and structure motifs identified by the CNNs. Finally, the learned weighted representations are fed into a classification layer to predict the RBP binding sites. We evaluated iDeepS on verified RBP binding sites derived from large-scale representative CLIP-seq datasets. The results demonstrate that iDeepS can reliably predict the RBP binding sites on RNAs, and outperforms the state-of-the-art methods. An important advantage compared to other methods is that iDeepS can automatically extract both binding sequence and structure motifs, which will improve our understanding of the mechanisms of binding specificities of RBPs.\n\n\nCONCLUSION\nOur study shows that the iDeepS method identifies the sequence and structure motifs to accurately predict RBP binding sites. iDeepS is available at https://github.com/xypan1232/iDeepS ."
},
{
"pmid": "17360525",
"title": "Predicting protein-protein interactions based only on sequences information.",
"abstract": "Protein-protein interactions (PPIs) are central to most biological processes. Although efforts have been devoted to the development of methodology for predicting PPIs and protein interaction networks, the application of most existing methods is limited because they need information about protein homology or the interaction marks of the protein partners. In the present work, we propose a method for PPI prediction using only the information of protein sequences. This method was developed based on a learning algorithm-support vector machine combined with a kernel function and a conjoint triad feature for describing amino acids. More than 16,000 diverse PPI pairs were used to construct the universal model. The prediction ability of our approach is better than that of other sequence-based PPI prediction methods because it is able to predict PPI networks. Different types of PPI networks have been effectively mapped with our method, suggesting that, even with only sequence information, this method could be applied to the exploration of networks for any newly discovered protein with unknown biological relativity. In addition, such supplementary experimental information can enhance the prediction ability of the method."
},
{
"pmid": "26873924",
"title": "Circular RNA profile in gliomas revealed by identification tool UROBORUS.",
"abstract": "Recent evidence suggests that many endogenous circular RNAs (circRNAs) may play roles in biological processes. However, the expression patterns and functions of circRNAs in human diseases are not well understood. Computationally identifying circRNAs from total RNA-seq data is a primary step in studying their expression pattern and biological roles. In this work, we have developed a computational pipeline named UROBORUS to detect circRNAs in total RNA-seq data. By applying UROBORUS to RNA-seq data from 46 gliomas and normal brain samples, we detected thousands of circRNAs supported by at least two read counts, followed by successful experimental validation on 24 circRNAs from the randomly selected 27 circRNAs. UROBORUS is an efficient tool that can detect circRNAs with low expression levels in total RNA-seq without RNase R treatment. The circRNAs expression profiling revealed more than 476 circular RNAs differentially expressed in control brain tissues and gliomas. Together with parental gene expression, we found that circRNA and its parental gene have diversified expression patterns in gliomas and control brain tissues. This study establishes an efficient and sensitive approach for predicting circRNAs using total RNA-seq data. The UROBORUS pipeline can be accessed freely for non-commercial purposes at http://uroborus.openbioinformatics.org/."
},
{
"pmid": "15308537",
"title": "A new dynamic Bayesian network (DBN) approach for identifying gene regulatory networks from time course microarray data.",
"abstract": "MOTIVATION\nSignaling pathways are dynamic events that take place over a given period of time. In order to identify these pathways, expression data over time are required. Dynamic Bayesian network (DBN) is an important approach for predicting the gene regulatory networks from time course expression data. However, two fundamental problems greatly reduce the effectiveness of current DBN methods. The first problem is the relatively low accuracy of prediction, and the second is the excessive computational time.\n\n\nRESULTS\nIn this paper, we present a DBN-based approach with increased accuracy and reduced computational time compared with existing DBN methods. Unlike previous methods, our approach limits potential regulators to those genes with either earlier or simultaneous expression changes (up- or down-regulation) in relation to their target genes. This allows us to limit the number of potential regulators and consequently reduce the search space. Furthermore, we use the time difference between the initial change in the expression of a given regulator gene and its potential target gene to estimate the transcriptional time lag between these two genes. This method of time lag estimation increases the accuracy of predicting gene regulatory networks. Our approach is evaluated using time-series expression data measured during the yeast cell cycle. The results demonstrate that this approach can predict regulatory networks with significantly improved accuracy and reduced computational time compared with existing DBN approaches."
}
] |
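The related-work field of the CircSLNN record above describes CNN plus BiLSTM architectures (DeepBind, iDeep, iDeepS, iDeepE) for locating RBP-binding sites along RNA sequences. As a rough illustration of that family of models, here is a minimal PyTorch sketch of a per-nucleotide sequence tagger; the class name, layer sizes, and 5-symbol integer encoding are assumptions made for the example, not the published circSLNN or iDeepS implementation.

```python
import torch
import torch.nn as nn

class CnnBiLstmTagger(nn.Module):
    """Toy CNN + BiLSTM per-nucleotide tagger in the spirit of iDeepS-style models.
    Layer sizes and the 5-symbol alphabet (A, C, G, U, padding) are assumptions."""
    def __init__(self, vocab_size=5, embed_dim=16, conv_channels=32,
                 lstm_hidden=32, num_tags=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, conv_channels, kernel_size=7, padding=3)
        self.lstm = nn.LSTM(conv_channels, lstm_hidden,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * lstm_hidden, num_tags)     # per-position tag scores

    def forward(self, tokens):                              # tokens: (batch, seq_len) integer codes
        x = self.embed(tokens)                              # (batch, seq_len, embed_dim)
        x = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)
        x, _ = self.lstm(x)                                 # (batch, seq_len, 2 * lstm_hidden)
        return self.out(x)                                  # (batch, seq_len, num_tags)

# Example: per-nucleotide binding/non-binding scores for two random length-101 sequences.
model = CnnBiLstmTagger()
fake_batch = torch.randint(0, 5, (2, 101))
print(model(fake_batch).shape)                              # torch.Size([2, 101, 2])
```

A practical tagger would additionally need padding masks, a per-position loss (softmax cross-entropy or a CRF layer), and the pretrained RNA embeddings mentioned in the record.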
Frontiers in Neuroscience | 31849588 | PMC6888095 | 10.3389/fnins.2019.01277 | A Data-Driven Measure of Effective Connectivity Based on Renyi's α-Entropy | Transfer entropy (TE) is a model-free effective connectivity measure based on information theory. It has been increasingly used in neuroscience because of its ability to detect unknown non-linear interactions, which makes it well suited for exploratory brain effective connectivity analyses. Like all information theoretic quantities, TE is defined in terms of the probability distributions of the system under study, which in practice are unknown and must be estimated from data. Commonly used methods for TE estimation rely on a local approximation of the probability distributions from nearest neighbor distances, or on symbolization schemes that then allow the probabilities to be estimated from the symbols' relative frequencies. However, probability estimation is a challenging problem, and avoiding this intermediate step in TE computation is desirable. In this work, we propose a novel TE estimator using functionals defined on positive definite and infinitely divisible kernel matrices that approximate Renyi's entropy measures of order α. Our data-driven approach estimates TE directly from data, sidestepping the need for probability distribution estimation. Also, the proposed estimator encompasses the well-known definition of TE as a sum of Shannon entropies in the limiting case when α → 1. We tested our proposal on a simulation framework consisting of two linear models, based on autoregressive approaches and a linear coupling function, respectively, and on the public electroencephalogram (EEG) database BCI Competition IV, obtained under a motor imagery paradigm. For the synthetic data, the proposed kernel-based TE estimation method satisfactorily identifies the causal interactions present in the data. Also, it displays robustness to varying noise levels and data sizes, and to the presence of multiple interaction delays in the same connected network. Obtained results for the motor imagery task show that our approach codes discriminant spatiotemporal patterns for the left and right-hand motor imagination tasks, with classification performances that compare favorably to the state-of-the-art. | 2. Related Work. 2.1. Transfer Entropy. Transfer entropy (TE) is an information theoretic quantity that estimates the directed interaction, or information flow, between two dynamical systems (Zhu et al., 2015). It was introduced by Schreiber (2000) as a Wiener-causal measure within the framework of information theory. Therefore, TE is based on the assumption that a time series A causes a time series B if the information of the past of A, alongside the past of B, is better at predicting the future of B than the past of B alone. It is also based on the information theoretic concept of Shannon entropy: (1) $H_S(X) = \mathbb{E}\{-\log(p(x))\} \approx -\sum_x p(x)\log(p(x))$, where X is a discrete random variable, $p(\cdot)$ is the probability mass function of X, and $\mathbb{E}\{\cdot\}$ stands for the expected value operator. $H_S(X)$ quantifies the average reduction in uncertainty attained after measuring the values of X. By associating the improvement in prediction power of Wiener's definition of causality with the reduction of uncertainty measured by entropy, Schreiber arrived at the concept of TE (Vicente et al., 2011).
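To make Equation (1) concrete, the following is a small numpy sketch of the plug-in (relative-frequency) estimate of Shannon entropy that underlies the estimators discussed below; the function name, the use of the natural logarithm, and the test data are illustrative assumptions rather than code from the article.

```python
import numpy as np

def shannon_entropy(samples):
    """Plug-in Shannon entropy (natural log) of a discrete sample, as in Eq. (1):
    the probability mass function is replaced by relative frequencies."""
    _, counts = np.unique(samples, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

# A fair four-sided die should approach log(4) ≈ 1.386 nats.
rng = np.random.default_rng(0)
print(shannon_entropy(rng.integers(0, 4, size=10_000)))
```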
Formally, TE measures the deviation from the following generalized Markov condition: (2) $p(y_{t+1} \mid y_t^m, x_t^n) = p(y_{t+1} \mid y_t^m)$, where $x_t^n \in \mathbb{R}^n$ and $y_t^m \in \mathbb{R}^m$ are Markov processes, of orders n and m, that approximate two time series $x = \{x_t\}_{t=1}^{l}$ and $y = \{y_t\}_{t=1}^{l}$, respectively, and $t \in \mathbb{N}$ is a discrete time index. This deviation is quantified through the Kullback-Leibler divergence ($D_{KL}(p\|q) = \sum_x p(x)\log(p(x)/q(x))$) of the probability functions $p(y_{t+1} \mid y_t^m, x_t^n)$ and $p(y_{t+1} \mid y_t^m)$: (3) $TE(x \to y) = \sum_{y_{t+1}, y_t^m, x_t^n} p(y_{t+1}, y_t^m, x_t^n) \log\big(p(y_{t+1} \mid y_t^m, x_t^n) / p(y_{t+1} \mid y_t^m)\big)$. Therefore, TE measures whether the probability of a future value of y increases given the past values of x and y, as compared to the probability of that same future value of y given only the past of y. In an attempt to better capture the underlying dynamics of the system that generates the observed data, i.e., the measured values of the random variables contained in the time series, TE is not usually defined directly on the raw data, but on its state space (Vicente et al., 2011). We can reconstruct such a state space from the observations through time embedding. The most commonly used embedding procedure in the literature is Takens delay embedding (Takens, 1981), so that for a time series x its state space is approximated as: (4) $x_t^d = (x(t), x(t-\tau), x(t-2\tau), \ldots, x(t-(d-1)\tau))$, where $d, \tau \in \mathbb{N}$ are the embedding dimension and delay, respectively. We can now express the TE in terms of the embedded data as: (5) $TE(x \to y) = \sum_{y_{t+1}, y_t^{d_y}, x_t^{d_x}} p(y_{t+1}, y_t^{d_y}, x_t^{d_x}) \log\big(p(y_{t+1} \mid y_t^{d_y}, x_t^{d_x}) / p(y_{t+1} \mid y_t^{d_y})\big)$, where $d_x, d_y \in \mathbb{N}$. To generalize TE to interaction times other than 1, we rewrite Equation (5) as: (6) $TE(x \to y) = \sum_{y_t, y_{t-1}^{d_y}, x_{t-u}^{d_x}} p(y_t, y_{t-1}^{d_y}, x_{t-u}^{d_x}) \log\big(p(y_t \mid y_{t-1}^{d_y}, x_{t-u}^{d_x}) / p(y_t \mid y_{t-1}^{d_y})\big)$, where $u \in \mathbb{N}$ represents the interaction delay between the driving and the driven systems. The changes in the time indexing are necessary to guarantee that Wiener's definition of causality is respected (Wibral et al., 2013). Using the definition in Equation (1), we can also express Equation (6) as a sum of Shannon entropies: (7) $TE(x \to y) = H_S(y_{t-1}^{d_y}, x_{t-u}^{d_x}) - H_S(y_t, y_{t-1}^{d_y}, x_{t-u}^{d_x}) + H_S(y_t, y_{t-1}^{d_y}) - H_S(y_{t-1}^{d_y})$. In practice, we must estimate the sum of Shannon entropies in Equation (7) from data. The most popular approach to do so, in neuroscience studies, is an adaptation for TE of the Kraskov-Stögbauer-Grassberger method for estimating mutual information (Kraskov et al., 2004; Dimitriadis et al., 2016). The method relies on a local approximation of the probability distributions needed to estimate the entropies from the distances of every data point to its neighbors, within a predefined neighborhood diameter. Also, it deals with the dimensionality differences in the data spaces in Equation (7) by fixing the number of neighbors in the highest dimensional space, the one spanned by $(y_t, y_{t-1}^{d_y}, x_{t-u}^{d_x})$, and projecting the distances obtained there to the marginal (and lower dimensional) spaces so that they serve as neighborhood diameters in those. The Kraskov-Stögbauer-Grassberger estimator for TE is expressed as: (8) $TE_{KSG}(x \to y) = \psi(K) + \mathbb{E}\{\psi(n_{y_{t-1}^{d_y}} + 1) - \psi(n_{y_t y_{t-1}^{d_y}} + 1) - \psi(n_{y_{t-1}^{d_y} x_{t-u}^{d_x}})\}_t$, where $\psi(\cdot)$ stands for the digamma function, $K \in \mathbb{N}$ is the selected number of neighbors in the highest dimensional space in Equation (7), $\mathbb{E}\{\cdot\}_t$ represents averaging over different time points, and $n \in \mathbb{N}$ is the number of points in the marginal spaces (Lindner et al., 2011).
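Before turning to the symbolic alternative, a toy example may help fix ideas about Equations (6) and (7). The sketch below estimates TE with scalar embeddings (d_x = d_y = 1) and a naive histogram plug-in for the Shannon entropies; it is deliberately simplistic, and is neither the KSG estimator just described nor the kernel-based estimator proposed in the article. All names, the bin count, and the toy coupling are assumptions.

```python
import numpy as np
from collections import Counter

def plugin_te(x, y, bins=4, u=1):
    """Naive plug-in estimate of Eq. (7) with scalar embeddings (d_x = d_y = 1):
    both series are discretized into equal-width bins and the four Shannon
    entropies are computed from relative frequencies (natural log)."""
    def discretize(z):
        z = np.asarray(z, dtype=float)
        edges = np.linspace(z.min(), z.max(), bins + 1)
        return np.digitize(z, edges[1:-1])                 # integer symbols 0 .. bins-1

    def H(*cols):                                          # joint Shannon entropy of the columns
        counts = np.array(list(Counter(zip(*cols)).values()), dtype=float)
        p = counts / counts.sum()
        return -np.sum(p * np.log(p))

    xs, ys = discretize(x), discretize(y)
    yt, ypast, xpast = ys[u:], ys[u - 1:-1], xs[:-u]       # y_t, y_{t-1}, x_{t-u}
    return H(ypast, xpast) - H(yt, ypast, xpast) + H(yt, ypast) - H(ypast)

# Coupled toy pair: y follows x with a one-sample delay, so TE(x -> y) > TE(y -> x).
rng = np.random.default_rng(2)
x = rng.standard_normal(5000)
y = 0.7 * np.roll(x, 1) + 0.3 * rng.standard_normal(5000)
print(plugin_te(x, y, u=1), plugin_te(y, x, u=1))
```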
An alternative approach for TE estimation relies on symbolic dynamics, a powerful tool for studying complex dynamical systems (Dimitriadis et al., 2012). The infinite number of values that can be attained by a given time series is replaced by a set of symbols through a symbolization scheme. We can then use the relative frequency of the symbols to estimate the joint and conditional probability distributions needed to compute TE (Dimitriadis et al., 2016). Given the state space reconstruction of a time series x (see Equation 4), we can arrange the elements in $x_t^d$ according to their amplitude, in ascending order, as follows: (9) $x(t - r_1\tau) \le x(t - r_2\tau) \le \cdots \le x(t - r_d\tau)$, where $r_1, r_2, \ldots, r_d \in \{0, 1, \ldots, d-1\}$, in order to obtain a symbolic sequence $s_t^x$: (10) $x_t^d \to s_t^x \equiv (r_1, r_2, \ldots, r_d)$, in what is known as ordinal pattern symbolization. Finally, we define the symbolic version of TE as: (11) $TE_{Sym}(x \to y) = \sum_{s_{t+1}^y, s_t^y, s_{t+1-u}^x} p(s_{t+1}^y, s_t^y, s_{t+1-u}^x) \log\big(p(s_{t+1}^y \mid s_t^y, s_{t+1-u}^x) / p(s_{t+1}^y \mid s_t^y)\big)$. We can rewrite Equation (11) in terms of Shannon entropies, as in Equation (7), and estimate the probability functions by counting the occurrences of the symbols (Dimitriadis et al., 2016). The two methods described above rely on the use of plug-in estimators to approximate the probability distributions in the joint and marginal entropies involved in the definition of TE. Therefore, the so obtained TE depends on the quality of the estimated distributions and, consequently, on the performance of the plug-in estimator, be it based on a nearest neighbor distances approximation or a frequentist approach. Since the estimation of probability distributions can by itself be challenging, it would be desirable to be able to compute TE directly from the data, avoiding the intermediate stage of probability density estimation, as has been proposed for other information theoretic quantities (Giraldo et al., 2015). 2.2. Granger Causality. Granger Causality (GC), like TE, is a mathematical formalization of the concept of Wiener's causality, one that is widely used in neuroscience to assess effective connectivity (Seth et al., 2015). However, unlike TE, GC is not based on a probabilistic approach. The basic idea behind it is that for two stationary time series $x = \{x_i\}_{i=1}^{n}$ and $y = \{y_i\}_{i=1}^{n}$, if x causes y, then the linear autoregressive model: (12) $y_i = \sum_{k=1}^{o} a_k y_{i-k} + e_i$, where $o \in \mathbb{N}$ is the model's order and $a_k \in \mathbb{R}$ stands for the model's coefficients, will exhibit larger prediction errors $e_i$ than a model that also includes past observations of x; that is, a linear bivariate autoregressive model of the form: (13) $y_i = \sum_{k=1}^{o} a_k' y_{i-k} + \sum_{k=1}^{o} b_k x_{i-k} + e_i'$, where the coefficients $b_k \in \mathbb{R}$. The magnitude of the causal relation from x to y can then be quantified by the log ratio of the variances of the residuals or prediction errors (Seth, 2010): (14) $GC(x \to y) = \log\big(\mathrm{var}(e) / \mathrm{var}(e')\big)$, where $e, e' \in \mathbb{R}^{n-o}$ are vectors holding the prediction errors, and $\mathrm{var}\{\cdot\}$ stands for the variance operator. If the past of x does not improve the prediction of y, then $\mathrm{var}(e) \approx \mathrm{var}(e')$ and $GC(x \to y) \to 0$; if it does, then $\mathrm{var}(e) \gg \mathrm{var}(e')$ and $GC(x \to y) \gg 0$. As defined above, GC is a linear bivariate parametric method that depends on the order o of the autoregressive model. Nonetheless, there are several variations of this basic formulation of GC that aim to capture nonlinear and multivariate relations in the data (Sameshima and Baccala, 2016). As a final remark, it is worth noting that although by definition TE has an advantage over GC in not assuming any a priori model for the interaction between the systems under study, the two are linked. As demonstrated in Barnett et al. (2009), they are entirely equivalent for Gaussian variables (up to a factor of 2).
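A minimal sketch of the bivariate GC computation of Equations (12)-(14), with both autoregressive models fitted by ordinary least squares, is given below; the helper name, the model order, and the toy data are assumptions for illustration and do not reproduce the GC implementation used in the article.

```python
import numpy as np

def granger_causality(x, y, order):
    """GC(x -> y) as the log ratio of residual variances of the restricted
    (Eq. 12) and full (Eq. 13) AR models, both fitted by ordinary least squares,
    as in Eq. (14). Illustrative helper, not the article's implementation."""
    x, y = np.asarray(x), np.asarray(y)
    n = len(y)
    Y = y[order:]                                                   # targets y_i, i = order .. n-1
    lags_y = np.column_stack([y[order - k: n - k] for k in range(1, order + 1)])
    lags_x = np.column_stack([x[order - k: n - k] for k in range(1, order + 1)])
    coef_r, *_ = np.linalg.lstsq(lags_y, Y, rcond=None)             # restricted model: past of y only
    res_r = Y - lags_y @ coef_r
    full = np.hstack([lags_y, lags_x])                              # full model: past of y and x
    coef_f, *_ = np.linalg.lstsq(full, Y, rcond=None)
    res_f = Y - full @ coef_f
    return float(np.log(np.var(res_r) / np.var(res_f)))

# Toy system where y is driven by the past of x: GC(x -> y) should be clearly positive,
# while GC(y -> x) should stay near zero.
rng = np.random.default_rng(1)
x = rng.standard_normal(2000)
y = np.zeros(2000)
for t in range(1, 2000):
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
print(granger_causality(x, y, order=2), granger_causality(y, x, order=2))
```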
Because of this relationship and its widespread use, we include a standard version of GC as a comparison method in our experiments. | [
"20366183",
"26778976",
"29542141",
"25455427",
"26780815",
"23372623",
"22432952",
"21931720",
"28813231",
"23006806",
"23583615",
"15244698",
"15376498",
"27259085",
"25741277",
"22098775",
"2464490",
"28597846",
"21794851",
"10991308",
"19961876",
"25716830",
"22811657",
"30211307",
"20706781",
"29149201",
"23468850",
"25068489",
"24290935"
] | [
{
"pmid": "20366183",
"title": "Granger causality and transfer entropy are equivalent for Gaussian variables.",
"abstract": "Granger causality is a statistical notion of causal influence based on prediction via vector autoregression. Developed originally in the field of econometrics, it has since found application in a broader arena, particularly in neuroscience. More recently transfer entropy, an information-theoretic measure of time-directed information transfer between jointly dependent processes, has gained traction in a similarly wide field. While it has been recognized that the two concepts must be related, the exact relationship has until now not been formally described. Here we show that for Gaussian variables, Granger causality and transfer entropy are entirely equivalent, thus bridging autoregressive and information-theoretic approaches to data-driven causal inference."
},
{
"pmid": "26778976",
"title": "A Tutorial Review of Functional Connectivity Analysis Methods and Their Interpretational Pitfalls.",
"abstract": "Oscillatory neuronal activity may provide a mechanism for dynamic network coordination. Rhythmic neuronal interactions can be quantified using multiple metrics, each with their own advantages and disadvantages. This tutorial will review and summarize current analysis methods used in the field of invasive and non-invasive electrophysiology to study the dynamic connections between neuronal populations. First, we review metrics for functional connectivity, including coherence, phase synchronization, phase-slope index, and Granger causality, with the specific aim to provide an intuition for how these metrics work, as well as their quantitative definition. Next, we highlight a number of interpretational caveats and common pitfalls that can arise when performing functional connectivity analysis, including the common reference problem, the signal to noise ratio problem, the volume conduction problem, the common input problem, and the sample size bias problem. These pitfalls will be illustrated by presenting a set of MATLAB-scripts, which can be executed by the reader to simulate each of these potential problems. We discuss how these issues can be addressed using current methods."
},
{
"pmid": "29542141",
"title": "Time, frequency, and time-varying Granger-causality measures in neuroscience.",
"abstract": "This article proposes a systematic methodological review and an objective criticism of existing methods enabling the derivation of time, frequency, and time-varying Granger-causality statistics in neuroscience. The capacity to describe the causal links between signals recorded at different brain locations during a neuroscience experiment is indeed of primary interest for neuroscientists, who often have very precise prior hypotheses about the relationships between recorded brain signals. The increasing interest and the huge number of publications related to this topic calls for this systematic review, which describes the very complex methodological aspects underlying the derivation of these statistics. In this article, we first present a general framework that allows us to review and compare Granger-causality statistics in the time domain, and the link with transfer entropy. Then, the spectral and the time-varying extensions are exposed and discussed together with their estimation and distributional properties. Although not the focus of this article, partial and conditional Granger causality, dynamical causal modelling, directed transfer function, directed coherence, partial directed coherence, and their variant are also mentioned."
},
{
"pmid": "25455427",
"title": "Comparison of different spatial transformations applied to EEG data: A case study of error processing.",
"abstract": "The purpose of this paper is to compare the effects of different spatial transformations applied to the same scalp-recorded EEG data. The spatial transformations applied are two referencing schemes (average and linked earlobes), the surface Laplacian, and beamforming (a distributed source localization procedure). EEG data were collected during a speeded reaction time task that provided a comparison of activity between error vs. correct responses. Analyses focused on time-frequency power, frequency band-specific inter-electrode connectivity, and within-subject cross-trial correlations between EEG activity and reaction time. Time-frequency power analyses showed similar patterns of midfrontal delta-theta power for errors compared to correct responses across all spatial transformations. Beamforming additionally revealed error-related anterior and lateral prefrontal beta-band activity. Within-subject brain-behavior correlations showed similar patterns of results across the spatial transformations, with the correlations being the weakest after beamforming. The most striking difference among the spatial transformations was seen in connectivity analyses: linked earlobe reference produced weak inter-site connectivity that was attributable to volume conduction (zero phase lag), while the average reference and Laplacian produced more interpretable connectivity results. Beamforming did not reveal any significant condition modulations of connectivity. Overall, these analyses show that some findings are robust to spatial transformations, while other findings, particularly those involving cross-trial analyses or connectivity, are more sensitive and may depend on the use of appropriate spatial transformations."
},
{
"pmid": "26780815",
"title": "Revealing Cross-Frequency Causal Interactions During a Mental Arithmetic Task Through Symbolic Transfer Entropy: A Novel Vector-Quantization Approach.",
"abstract": "Working memory (WM) is a distributed cognitive process that employs communication between prefrontal cortex and posterior brain regions in the form of cross-frequency coupling between theta ( θ) and high-alpha ( α2) brain waves. A novel method for deriving causal interactions between brain waves of different frequencies is essential for a better understanding of the neural dynamics of such complex cognitive process. Here, we proposed a novel method to estimate transfer entropy ( TE) through a symbolization scheme, which is based on neural-gas algorithm (NG) and encodes a bivariate time series in the form of two symbolic sequences. Given the symbolic sequences, the delay symbolic transfer entropy ( dSTENG) is defined. Our approach is akin to standard symbolic transfer entropy ( STE) that incorporates the ordinal pattern (OP) symbolization technique. We assessed the proposed method in a WM-invoked paradigm that included a mental arithmetic task at various levels of difficulty. Effective interactions between Frontalθ ( Fθ ) and [Formula: see text] ( POα2) brain waves were detected in multichannel EEG recordings from 16 subjects. Compared with conventional methods, our technique was less sensitive to noise and demonstrated improved computational efficiency in quantifying the dominating direction of effective connectivity between brain waves of different spectral content. Moreover, we discovered an efferent Fθ connectivity pattern and an afferent POα2 one, in all the levels of the task. Further statistical analysis revealed an increasing dSTENG strength following the task's difficulty."
},
{
"pmid": "23372623",
"title": "A novel symbolization scheme for multichannel recordings with emphasis on phase information and its application to differentiate EEG activity from different mental tasks.",
"abstract": "UNLABELLED\nSymbolic dynamics is a powerful tool for studying complex dynamical systems. So far many techniques of this kind have been proposed as a means to analyze brain dynamics, but most of them are restricted to single-sensor measurements. Analyzing the dynamics in a channel-wise fashion is an invalid approach for multisite encephalographic recordings, since it ignores any pattern of coordinated activity that might emerge from the coherent activation of distinct brain areas. We suggest, here, the use of neural-gas algorithm (Martinez et al. in IEEE Trans Neural Netw 4:558-569, 1993) for encoding brain activity spatiotemporal dynamics in the form of a symbolic timeseries. A codebook of k prototypes, best representing the instantaneous multichannel data, is first designed. Each pattern of activity is then assigned to the most similar code vector. The symbolic timeseries derived in this way is mapped to a network, the topology of which encapsulates the most important phase transitions of the underlying dynamical system. Finally, global efficiency is used to characterize the obtained topology. We demonstrate the approach by applying it to EEG-data recorded from subjects while performing mental calculations. By working in a contrastive-fashion, and focusing in the phase aspects of the signals, we show that the underlying dynamics differ significantly in their symbolic representations.\n\n\nELECTRONIC SUPPLEMENTARY MATERIAL\nThe online version of this article (doi:10.1007/s11571-011-9186-5) contains supplementary material, which is available to authorized users."
},
{
"pmid": "22432952",
"title": "Functional and effective connectivity: a review.",
"abstract": "Over the past 20 years, neuroimaging has become a predominant technique in systems neuroscience. One might envisage that over the next 20 years the neuroimaging of distributed processing and connectivity will play a major role in disclosing the brain's functional architecture and operational principles. The inception of this journal has been foreshadowed by an ever-increasing number of publications on functional connectivity, causal modeling, connectomics, and multivariate analyses of distributed patterns of brain responses. I accepted the invitation to write this review with great pleasure and hope to celebrate and critique the achievements to date, while addressing the challenges ahead."
},
{
"pmid": "21931720",
"title": "Shannon and Renyi entropies to classify effects of Mild Traumatic Brain Injury on postural sway.",
"abstract": "BACKGROUND\nMild Traumatic Brain Injury (mTBI) has been identified as a major public and military health concern both in the United States and worldwide. Characterizing the effects of mTBI on postural sway could be an important tool for assessing recovery from the injury.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nWe assess postural sway by motion of the center of pressure (COP). Methods for data reduction include calculation of area of COP and fractal analysis of COP motion time courses. We found that fractal scaling appears applicable to sway power above about 0.5 Hz, thus fractal characterization is only quantifying the secondary effects (a small fraction of total power) in the sway time series, and is not effective in quantifying long-term effects of mTBI on postural sway. We also found that the area of COP sensitively depends on the length of data series over which the COP is obtained. These weaknesses motivated us to use instead Shannon and Renyi entropies to assess postural instability following mTBI. These entropy measures have a number of appealing properties, including capacity for determination of the optimal length of the time series for analysis and a new interpretation of the area of COP.\n\n\nCONCLUSIONS\nEntropy analysis can readily detect postural instability in athletes at least 10 days post-concussion so that it appears promising as a sensitive measure of effects of mTBI on postural sway.\n\n\nAVAILABILITY\nThe programs for analyses may be obtained from the authors."
},
{
"pmid": "28813231",
"title": "Time-Frequency Cross Mutual Information Analysis of the Brain Functional Networks Underlying Multiclass Motor Imagery.",
"abstract": "To study the physiologic mechanism of the brain during different motor imagery (MI) tasks, the authors employed a method of brain-network modeling based on time-frequency cross mutual information obtained from 4-class (left hand, right hand, feet, and tongue) MI tasks recorded as brain-computer interface (BCI) electroencephalography data. The authors explored the brain network revealed by these MI tasks using statistical analysis and the analysis of topologic characteristics, and observed significant differences in the reaction level, reaction time, and activated target during 4-class MI tasks. There was a great difference in the reaction level between the execution and resting states during different tasks: the reaction level of the left-hand MI task was the greatest, followed by that of the right-hand, feet, and tongue MI tasks. The reaction time required to perform the tasks also differed: during the left-hand and right-hand MI tasks, the brain networks of subjects reacted promptly and strongly, but there was a delay during the feet and tongue MI task. Statistical analysis and the analysis of network topology revealed the target regions of the brain network during different MI processes. In conclusion, our findings suggest a new way to explain the neural mechanism behind MI."
},
{
"pmid": "23006806",
"title": "A critical assessment of connectivity measures for EEG data: a simulation study.",
"abstract": "Information flow between brain areas is difficult to estimate from EEG measurements due to the presence of noise as well as due to volume conduction. We here test the ability of popular measures of effective connectivity to detect an underlying neuronal interaction from simulated EEG data, as well as the ability of commonly used inverse source reconstruction techniques to improve the connectivity estimation. We find that volume conduction severely limits the neurophysiological interpretability of sensor-space connectivity analyses. Moreover, it may generally lead to conflicting results depending on the connectivity measure and statistical testing approach used. In particular, we note that the application of Granger-causal (GC) measures combined with standard significance testing leads to the detection of spurious connectivity regardless of whether the analysis is performed on sensor-space data or on sources estimated using three different established inverse methods. This empirical result follows from the definition of GC. The phase-slope index (PSI) does not suffer from this theoretical limitation and therefore performs well on our simulated data. We develop a theoretical framework to characterize artifacts of volume conduction, which may still be present even in reconstructed source time series as zero-lag correlations, and to distinguish their time-delayed brain interaction. Based on this theory we derive a procedure which suppresses the influence of volume conduction, but preserves effects related to time-lagged brain interaction in connectivity estimates. This is achieved by using time-reversed data as surrogates for statistical testing. We demonstrate that this robustification makes Granger-causal connectivity measures applicable to EEG data, achieving similar results as PSI. Integrating the insights of our study, we provide a guidance for measuring brain interaction from EEG data. Software for generating benchmark data is made available."
},
{
"pmid": "23583615",
"title": "The neural network of motor imagery: an ALE meta-analysis.",
"abstract": "Motor imagery (MI) or the mental simulation of action is now increasingly being studied using neuroimaging techniques such as positron emission tomography and functional magnetic resonance imaging. The booming interest in capturing the neural underpinning of MI has provided a large amount of data which until now have never been quantitatively summarized. The aim of this activation likelihood estimation (ALE) meta-analysis was to provide a map of the brain structures involved in MI. Combining the data from 75 papers revealed that MI consistently recruits a large fronto-parietal network in addition to subcortical and cerebellar regions. Although the primary motor cortex was not shown to be consistently activated, the MI network includes several regions which are known to play a role during actual motor execution. The body part involved in the movements, the modality of MI and the nature of the MI tasks used all seem to influence the consistency of activation within the general MI network. In addition to providing the first quantitative cortical map of MI, we highlight methodological issues that should be addressed in future research."
},
{
"pmid": "15244698",
"title": "Estimating mutual information.",
"abstract": "We present two classes of improved estimators for mutual information M(X,Y), from samples of random points distributed according to some joint probability density mu(x,y). In contrast to conventional estimators based on binnings, they are based on entropy estimates from k -nearest neighbor distances. This means that they are data efficient (with k=1 we resolve structures down to the smallest possible scales), adaptive (the resolution is higher where data are more numerous), and have minimal bias. Indeed, the bias of the underlying entropy estimates is mainly due to nonuniformity of the density at the smallest resolved scale, giving typically systematic errors which scale as functions of k/N for N points. Numerically, we find that both families become exact for independent distributions, i.e. the estimator M(X,Y) vanishes (up to statistical fluctuations) if mu(x,y)=mu(x)mu(y). This holds for all tested marginal distributions and for all dimensions of x and y. In addition, we give estimators for redundancies between more than two random variables. We compare our algorithms in detail with existing algorithms. Finally, we demonstrate the usefulness of our estimators for assessing the actual independence of components obtained from independent component analysis (ICA), for improving ICA, and for estimating the reliability of blind source separation."
},
{
"pmid": "15376498",
"title": "Determination of EEG activity propagation: pair-wise versus multichannel estimate.",
"abstract": "Performance of different estimators describing propagation of electroencephalogram (EEG) activity, namely: Granger causality, directed transfer function (DTF), direct DTF (dDTF), short-time DTF (SDTF), bivariate coherence, and partial directed coherence are compared by means of simulations and on the examples of experimental signals. In particular, the differences between pair-wise and multichannel estimates are studied. The results show unequivocally that in most cases, the pair-wise estimates are incorrect and a complete set of signals involved in a given process has to be used to obtain the correct pattern of EEG flows. Different performance of multivariate estimators of propagation depending on their normalization is discussed. Advantages of multivariate autoregressive model are pointed out."
},
{
"pmid": "27259085",
"title": "Discrimination of motor imagery tasks via information flow pattern of brain connectivity.",
"abstract": "BACKGROUND\nThe effective connectivity refers explicitly to the influence that one neural system exerts over another in frequency domain. To investigate the propagation of neuronal activity in certain frequency can help us reveal the mechanisms of information processing by brain.\n\n\nOBJECTIVE\nThis study investigates the detection of effective connectivity and analyzes the complex brain network connection mode associated with motor imagery (MI) tasks.\n\n\nMETHODS\nThe effective connectivity among the primary motor area is firstly explored using partial directed coherence (PDC) combined with multivariate empirical mode decomposition (MEMD) based on electroencephalography (EEG) data. Then a new approach is proposed to analyze the connection mode of the complex brain network via the information flow pattern.\n\n\nRESULTS\nOur results demonstrate that significant effective connectivity exists in the bilateral hemisphere during the tasks, regardless of the left-/right-hand MI tasks. Furthermore, the out-in rate results of the information flow reveal the existence of the contralateral lateralization. The classification performance of left-/right-hand MI tasks can be improved by careful selection of intrinsic mode functions (IMFs).\n\n\nCONCLUSION\nThe proposed method can provide efficient features for the detection of MI tasks and has great potential to be applied in brain computer interface (BCI)."
},
{
"pmid": "25741277",
"title": "EEG entropy measures in anesthesia.",
"abstract": "HIGHLIGHTS\n► Twelve entropy indices were systematically compared in monitoring depth of anesthesia and detecting burst suppression.► Renyi permutation entropy performed best in tracking EEG changes associated with different anesthesia states.► Approximate Entropy and Sample Entropy performed best in detecting burst suppression.\n\n\nOBJECTIVE\nEntropy algorithms have been widely used in analyzing EEG signals during anesthesia. However, a systematic comparison of these entropy algorithms in assessing anesthesia drugs' effect is lacking. In this study, we compare the capability of 12 entropy indices for monitoring depth of anesthesia (DoA) and detecting the burst suppression pattern (BSP), in anesthesia induced by GABAergic agents.\n\n\nMETHODS\nTwelve indices were investigated, namely Response Entropy (RE) and State entropy (SE), three wavelet entropy (WE) measures [Shannon WE (SWE), Tsallis WE (TWE), and Renyi WE (RWE)], Hilbert-Huang spectral entropy (HHSE), approximate entropy (ApEn), sample entropy (SampEn), Fuzzy entropy, and three permutation entropy (PE) measures [Shannon PE (SPE), Tsallis PE (TPE) and Renyi PE (RPE)]. Two EEG data sets from sevoflurane-induced and isoflurane-induced anesthesia respectively were selected to assess the capability of each entropy index in DoA monitoring and BSP detection. To validate the effectiveness of these entropy algorithms, pharmacokinetic/pharmacodynamic (PK/PD) modeling and prediction probability (Pk) analysis were applied. The multifractal detrended fluctuation analysis (MDFA) as a non-entropy measure was compared.\n\n\nRESULTS\nAll the entropy and MDFA indices could track the changes in EEG pattern during different anesthesia states. Three PE measures outperformed the other entropy indices, with less baseline variability, higher coefficient of determination (R (2)) and prediction probability, and RPE performed best; ApEn and SampEn discriminated BSP best. Additionally, these entropy measures showed an advantage in computation efficiency compared with MDFA.\n\n\nCONCLUSION\nEach entropy index has its advantages and disadvantages in estimating DoA. Overall, it is suggested that the RPE index was a superior measure. Investigating the advantages and disadvantages of these entropy indices could help improve current clinical indices for monitoring DoA."
},
{
"pmid": "22098775",
"title": "TRENTOOL: a Matlab open source toolbox to analyse information flow in time series data with transfer entropy.",
"abstract": "BACKGROUND\nTransfer entropy (TE) is a measure for the detection of directed interactions. Transfer entropy is an information theoretic implementation of Wiener's principle of observational causality. It offers an approach to the detection of neuronal interactions that is free of an explicit model of the interactions. Hence, it offers the power to analyze linear and nonlinear interactions alike. This allows for example the comprehensive analysis of directed interactions in neural networks at various levels of description. Here we present the open-source MATLAB toolbox TRENTOOL that allows the user to handle the considerable complexity of this measure and to validate the obtained results using non-parametrical statistical testing. We demonstrate the use of the toolbox and the performance of the algorithm on simulated data with nonlinear (quadratic) coupling and on local field potentials (LFP) recorded from the retina and the optic tectum of the turtle (Pseudemys scripta elegans) where a neuronal one-way connection is likely present.\n\n\nRESULTS\nIn simulated data TE detected information flow in the simulated direction reliably with false positives not exceeding the rates expected under the null hypothesis. In the LFP data we found directed interactions from the retina to the tectum, despite the complicated signal transformations between these stages. No false positive interactions in the reverse directions were detected.\n\n\nCONCLUSIONS\nTRENTOOL is an implementation of transfer entropy and mutual information analysis that aims to support the user in the application of this information theoretic measure. TRENTOOL is implemented as a MATLAB toolbox and available under an open source license (GPL v3). For the use with neural data TRENTOOL seamlessly integrates with the popular FieldTrip toolbox."
},
{
"pmid": "2464490",
"title": "Spherical splines for scalp potential and current density mapping.",
"abstract": "Description of mapping methods using spherical splines, both to interpolate scalp potentials (SPs), and to approximate scalp current densities (SCDs). Compared to a previously published method using thin plate splines, the advantages are a very simple derivation of the SCD approximation, faster computing times, and greater accuracy in areas with few electrodes."
},
{
"pmid": "28597846",
"title": "Single-trial effective brain connectivity patterns enhance discriminability of mental imagery tasks.",
"abstract": "OBJECTIVE\nThe majority of the current approaches of connectivity based brain-computer interface (BCI) systems focus on distinguishing between different motor imagery (MI) tasks. Brain regions associated with MI are anatomically close to each other, hence these BCI systems suffer from low performances. Our objective is to introduce single-trial connectivity feature based BCI system for cognition imagery (CI) based tasks wherein the associated brain regions are located relatively far away as compared to those for MI.\n\n\nAPPROACH\nWe implemented time-domain partial Granger causality (PGC) for the estimation of the connectivity features in a BCI setting. The proposed hypothesis has been verified with two publically available datasets involving MI and CI tasks.\n\n\nMAIN RESULTS\nThe results support the conclusion that connectivity based features can provide a better performance than a classical signal processing framework based on bandpass features coupled with spatial filtering for CI tasks, including word generation, subtraction, and spatial navigation. These results show for the first time that connectivity features can provide a reliable performance for imagery-based BCI system.\n\n\nSIGNIFICANCE\nWe show that single-trial connectivity features for mixed imagery tasks (i.e. combination of CI and MI) can outperform the features obtained by current state-of-the-art method and hence can be successfully applied for BCI applications."
},
{
"pmid": "21794851",
"title": "Review of advanced techniques for the estimation of brain connectivity measured with EEG/MEG.",
"abstract": "Brain connectivity can be modeled and quantified with a large number of techniques. The main objective of this paper is to present the most modern and widely established mathematical methods for calculating connectivity that is commonly applied to functional high resolution multichannel neurophysiological signals, including electroencephalographic (EEG) and magnetoencephalographic (MEG) signals. A historical timeline of each technique is outlined along with some illustrative applications. The most crucial underlying assumptions of the presented methodologies are discussed in order to help the reader understand where each technique fits into the bigger picture of measuring brain connectivity. In this endeavor, linear, nonlinear, causality-assessing and information-based techniques are summarized in the framework of measuring functional and effective connectivity. Model based vs. data-driven techniques and bivariate vs. multivariate methods are also discussed. Finally, certain important caveats (i.e. stationarity assumption) pertaining to the applicability of the methods are also illustrated along with some examples of clinical applications."
},
{
"pmid": "10991308",
"title": "Measuring information transfer",
"abstract": "An information theoretic measure is derived that quantifies the statistical coherence between systems evolving in time. The standard time delayed mutual information fails to distinguish information that is actually exchanged from shared information due to common history and input signals. In our new approach, these influences are excluded by appropriate conditioning of transition probabilities. The resulting transfer entropy is able to distinguish effectively driving and responding elements and to detect asymmetry in the interaction of subsystems."
},
{
"pmid": "19961876",
"title": "A MATLAB toolbox for Granger causal connectivity analysis.",
"abstract": "Assessing directed functional connectivity from time series data is a key challenge in neuroscience. One approach to this problem leverages a combination of Granger causality analysis and network theory. This article describes a freely available MATLAB toolbox--'Granger causal connectivity analysis' (GCCA)--which provides a core set of methods for performing this analysis on a variety of neuroscience data types including neuroelectric, neuromagnetic, functional MRI, and other neural signals. The toolbox includes core functions for Granger causality analysis of multivariate steady-state and event-related data, functions to preprocess data, assess statistical significance and validate results, and to compute and display network-level indices of causal connectivity including 'causal density' and 'causal flow'. The toolbox is deliberately small, enabling its easy assimilation into the repertoire of researchers. It is however readily extensible given proficiency with the MATLAB language."
},
{
"pmid": "22811657",
"title": "Review of the BCI Competition IV.",
"abstract": "The BCI competition IV stands in the tradition of prior BCI competitions that aim to provide high quality neuroscientific data for open access to the scientific community. As experienced already in prior competitions not only scientists from the narrow field of BCI compete, but scholars with a broad variety of backgrounds and nationalities. They include high specialists as well as students. The goals of all BCI competitions have always been to challenge with respect to novel paradigms and complex data. We report on the following challenges: (1) asynchronous data, (2) synthetic, (3) multi-class continuous data, (4) session-to-session transfer, (5) directionally modulated MEG, (6) finger movements recorded by ECoG. As after past competitions, our hope is that winning entries may enhance the analysis methods of future BCIs."
},
{
"pmid": "30211307",
"title": "A Tutorial for Information Theory in Neuroscience.",
"abstract": "Understanding how neural systems integrate, encode, and compute information is central to understanding brain function. Frequently, data from neuroscience experiments are multivariate, the interactions between the variables are nonlinear, and the landscape of hypothesized or possible interactions between variables is extremely broad. Information theory is well suited to address these types of data, as it possesses multivariate analysis tools, it can be applied to many different types of data, it can capture nonlinear interactions, and it does not require assumptions about the structure of the underlying data (i.e., it is model independent). In this article, we walk through the mathematics of information theory along with common logistical problems associated with data type, data binning, data quantity requirements, bias, and significance testing. Next, we analyze models inspired by canonical neuroscience experiments to improve understanding and demonstrate the strengths of information theory analyses. To facilitate the use of information theory analyses, and an understanding of how these analyses are implemented, we also provide a free MATLAB software package that can be applied to a wide range of data from neuroscience experiments, as well as from other fields of study."
},
{
"pmid": "20706781",
"title": "Transfer entropy--a model-free measure of effective connectivity for the neurosciences.",
"abstract": "Understanding causal relationships, or effective connectivity, between parts of the brain is of utmost importance because a large part of the brain's activity is thought to be internally generated and, hence, quantifying stimulus response relationships alone does not fully describe brain dynamics. Past efforts to determine effective connectivity mostly relied on model based approaches such as Granger causality or dynamic causal modeling. Transfer entropy (TE) is an alternative measure of effective connectivity based on information theory. TE does not require a model of the interaction and is inherently non-linear. We investigated the applicability of TE as a metric in a test for effective connectivity to electrophysiological data based on simulations and magnetoencephalography (MEG) recordings in a simple motor task. In particular, we demonstrate that TE improved the detectability of effective connectivity for non-linear interactions, and for sensor level MEG signals where linear methods are hampered by signal-cross-talk due to volume conduction."
},
{
"pmid": "29149201",
"title": "The influence of filtering and downsampling on the estimation of transfer entropy.",
"abstract": "Transfer entropy (TE) provides a generalized and model-free framework to study Wiener-Granger causality between brain regions. Because of its nonparametric character, TE can infer directed information flow also from nonlinear systems. Despite its increasing number of applications in neuroscience, not much is known regarding the influence of common electrophysiological preprocessing on its estimation. We test the influence of filtering and downsampling on a recently proposed nearest neighborhood based TE estimator. Different filter settings and downsampling factors were tested in a simulation framework using a model with a linear coupling function and two nonlinear models with sigmoid and logistic coupling functions. For nonlinear coupling and progressively lower low-pass filter cut-off frequencies up to 72% false negative direct connections and up to 26% false positive connections were identified. In contrast, for the linear model, a monotonic increase was only observed for missed indirect connections (up to 86%). High-pass filtering (1 Hz, 2 Hz) had no impact on TE estimation. After low-pass filtering interaction delays were significantly underestimated. Downsampling the data by a factor greater than the assumed interaction delay erased most of the transmitted information and thus led to a very high percentage (67-100%) of false negative direct connections. Low-pass filtering increases the number of missed connections depending on the filters cut-off frequency. Downsampling should only be done if the sampling factor is smaller than the smallest assumed interaction delay of the analyzed network."
},
{
"pmid": "23468850",
"title": "Measuring information-transfer delays.",
"abstract": "In complex networks such as gene networks, traffic systems or brain circuits it is important to understand how long it takes for the different parts of the network to effectively influence one another. In the brain, for example, axonal delays between brain areas can amount to several tens of milliseconds, adding an intrinsic component to any timing-based processing of information. Inferring neural interaction delays is thus needed to interpret the information transfer revealed by any analysis of directed interactions across brain structures. However, a robust estimation of interaction delays from neural activity faces several challenges if modeling assumptions on interaction mechanisms are wrong or cannot be made. Here, we propose a robust estimator for neuronal interaction delays rooted in an information-theoretic framework, which allows a model-free exploration of interactions. In particular, we extend transfer entropy to account for delayed source-target interactions, while crucially retaining the conditioning on the embedded target state at the immediately previous time step. We prove that this particular extension is indeed guaranteed to identify interaction delays between two coupled systems and is the only relevant option in keeping with Wiener's principle of causality. We demonstrate the performance of our approach in detecting interaction delays on finite data by numerical simulations of stochastic and deterministic processes, as well as on local field potential recordings. We also show the ability of the extended transfer entropy to detect the presence of multiple delays, as well as feedback loops. While evaluated on neuroscience data, we expect the estimator to be useful in other fields dealing with network dynamics."
},
{
"pmid": "25068489",
"title": "Efficient transfer entropy analysis of non-stationary neural time series.",
"abstract": "Information theory allows us to investigate information processing in neural systems in terms of information transfer, storage and modification. Especially the measure of information transfer, transfer entropy, has seen a dramatic surge of interest in neuroscience. Estimating transfer entropy from two processes requires the observation of multiple realizations of these processes to estimate associated probability density functions. To obtain these necessary observations, available estimators typically assume stationarity of processes to allow pooling of observations over time. This assumption however, is a major obstacle to the application of these estimators in neuroscience as observed processes are often non-stationary. As a solution, Gomez-Herrero and colleagues theoretically showed that the stationarity assumption may be avoided by estimating transfer entropy from an ensemble of realizations. Such an ensemble of realizations is often readily available in neuroscience experiments in the form of experimental trials. Thus, in this work we combine the ensemble method with a recently proposed transfer entropy estimator to make transfer entropy estimation applicable to non-stationary time series. We present an efficient implementation of the approach that is suitable for the increased computational demand of the ensemble method's practical application. In particular, we use a massively parallel implementation for a graphics processing unit to handle the computationally most heavy aspects of the ensemble method for transfer entropy estimation. We test the performance and robustness of our implementation on data from numerical simulations of stochastic processes. We also demonstrate the applicability of the ensemble method to magnetoencephalographic data. While we mainly evaluate the proposed method for neuroscience data, we expect it to be applicable in a variety of fields that are concerned with the analysis of information transfer in complex biological, social, and artificial systems."
},
{
"pmid": "24290935",
"title": "Estimating cognitive workload using wavelet entropy-based features during an arithmetic task.",
"abstract": "Electroencephalography (EEG) has shown promise as an indicator of cognitive workload; however, precise workload estimation is an ongoing research challenge. In this investigation, seven levels of workload were induced using an arithmetic task, and the entropy of wavelet coefficients extracted from EEG signals is shown to distinguish all seven levels. For a subject-independent multi-channel classification scheme, the entropy features achieved high accuracy, up to 98% for channels from the frontal lobes, in the delta frequency band. This suggests that a smaller number of EEG channels in only one frequency band can be deployed for an effective EEG-based workload classification system. Together with analysis based on phase locking between channels, these results consistently suggest increased synchronization of neural responses for higher load levels."
}
] |
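Several of the reference abstracts in the record above (Schreiber's "Measuring information transfer" and the TRENTOOL papers) revolve around transfer entropy, defined as TE(Y -> X) = sum over (x_{t+1}, x_t, y_t) of p(x_{t+1}, x_t, y_t) * log[ p(x_{t+1} | x_t, y_t) / p(x_{t+1} | x_t) ]. The Python sketch below is only a toy illustration of that definition: it uses coarse amplitude binning and a one-sample history, whereas the cited toolboxes rely on nearest-neighbour estimators and longer embedded histories. The function name, bin count and test signal are assumptions made for the example.

```python
import numpy as np
from collections import Counter

def transfer_entropy(source, target, bins=8):
    """Toy histogram-based estimate of TE(source -> target), in bits,
    using a history of one sample for both signals (illustrative only)."""
    # Discretise each signal into equal-width amplitude bins.
    s = np.digitize(source, np.histogram(source, bins)[1][1:-1])
    t = np.digitize(target, np.histogram(target, bins)[1][1:-1])

    x_next, x_past, y_past = t[1:], t[:-1], s[:-1]
    n = len(x_next)

    c_xyz = Counter(zip(x_next, x_past, y_past))   # counts of (x_{t+1}, x_t, y_t)
    c_xy = Counter(zip(x_next, x_past))            # counts of (x_{t+1}, x_t)
    c_zy = Counter(zip(x_past, y_past))            # counts of (x_t, y_t)
    c_x = Counter(x_past)                          # counts of x_t

    te = 0.0
    for (xn, xp, yp), c in c_xyz.items():
        p_joint = c / n                            # p(x_{t+1}, x_t, y_t)
        p_cond_full = c / c_zy[(xp, yp)]           # p(x_{t+1} | x_t, y_t)
        p_cond_past = c_xy[(xn, xp)] / c_x[xp]     # p(x_{t+1} | x_t)
        te += p_joint * np.log2(p_cond_full / p_cond_past)
    return te

# Quick check on synthetic data: x is driven by y with a one-sample lag,
# so TE(y -> x) should clearly exceed TE(x -> y).
rng = np.random.default_rng(0)
y = rng.normal(size=5000)
x = 0.8 * np.roll(y, 1) + 0.6 * rng.normal(size=5000)
print(transfer_entropy(y, x), transfer_entropy(x, y))
```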
Scientific Reports | 31796768 | PMC6890696 | 10.1038/s41598-019-54388-4 | The Assessment of Twitter’s Potential for Outbreak Detection: Avian Influenza Case Study | Social media services such as Twitter are valuable sources of information for surveillance systems. A digital syndromic surveillance system has several advantages including its ability to overcome the problem of time delay in traditional surveillance systems. Despite the progress made with using digital syndromic surveillance systems, the possibility of tracking avian influenza (AI) using online sources has not been fully explored. In this study, a Twitter-based data analysis framework was developed to automatically monitor avian influenza outbreaks in a real-time manner. The framework was implemented to find worrisome posts and alerting news on Twitter, filter irrelevant ones, and detect the onset of outbreaks in several countries. The system collected and analyzed over 209,000 posts discussing avian influenza on Twitter from July 2017 to November 2018. We examined the potential of Twitter data to represent the date, severity and virus type of official reports. Furthermore, we investigated whether filtering irrelevant tweets can positively impact the performance of the system. The proposed approach was empirically evaluated using a real-world outbreak-reporting source. We found that 75% of real-world outbreak notifications of AI were identifiable from Twitter. This shows the capability of the system to serve as a complementary approach to official AI reporting methods. Moreover, we observed that one-third of outbreak notifications were reported on Twitter earlier than official reports. This feature could augment traditional surveillance systems and provide a possibility of early detection of outbreaks. This study could potentially provide a first stepping stone for building digital disease outbreak warning systems to assist epidemiologists and animal health professionals in making relevant decisions. | Related Work: Efforts have been made to detect common epidemic events such as seasonal influenza and Influenza-like Illness (ILI) through web and social media9,11,26–28. However, challenges in the detection of unexpected outbreak events through social media have not been fully explored. As outlined in the literature4,15,29,30, the approaches for detecting common and recurring health events from web content are mature. These approaches17,29,31–33 have measured the strength of the relationship between the frequency of officially reported cases and online disease-related posts or keywords9,17,34–37. Several methods have been exploited to detect disease outbreaks from social media. In the study by Di Martino et al.15, the Early Aberration Reporting System (EARS) family of algorithms was selected for outbreak detection. In order to validate Twitter alerts, a method was proposed to find relevant official events in ProMED-mail unstructured documents. A document was considered to contain a relevant event if both a medical condition and a geographic reference were identified. In another study, van de Belt et al.13 developed an early outbreak detection system in the Netherlands using Google Trends and a local social media source, Coosto. In this system, a simple cut-off criterion (i.e., twice the standard deviation for Google Trends and a frequency above ten for Coosto) was specified to detect outbreaks. In this study, the Dutch outbreak notification system was used as the gold standard.
Van de Belt and colleagues13 concluded that, compared to Google Trends, only a limited number of outbreaks were detectable from Coosto; however, the number of false positive detections in Coosto was lower than in Google Trends. Considering various geographical locations, Fast and colleagues16 designed a warning system to identify and forecast social response to diseases reported in news articles. The HealthMap news articles were automatically tagged with indicators of disease spread, severity, preventive measures and social responses. Then, Bayesian network and exponentially weighted moving average (EWMA) methods were used to detect unusual periods of social response (a minimal sketch of such threshold-based alerting rules appears after this record). The findings showed that when news coverage is sufficient, the social reaction to disease spread can be predicted from online news. As outlined earlier, outbreak detection in social media networks such as Twitter is challenging because of the noisy nature of their content4,29,30. In addition to noise, terms can change their meaning in different contexts. In this regard, a few studies have considered filtering irrelevant content as part of disease outbreak detection. Di Martino et al.15 exploited a list of negative keywords associated with diseases to filter irrelevant tweets. In another study, Avare et al.4 proposed an adaptive classification method based on feature change to dynamically annotate tweets as relevant or irrelevant. In particular, the authors4 aimed to dynamically change the definition of relevant tweets as terminology evolves in Twitter messages. The required labels were obtained from health experts and crowd-sourced workers. For incoming tweets, manual labelling was performed only if the extracted features differed from previous features. | [
"27455108",
"26068569",
"27014744",
"26513245",
"28085877",
"28934949",
"27880777",
"20616993",
"21573238",
"28756796",
"24349542"
] | [
{
"pmid": "27455108",
"title": "Applying GIS and Machine Learning Methods to Twitter Data for Multiscale Surveillance of Influenza.",
"abstract": "Traditional methods for monitoring influenza are haphazard and lack fine-grained details regarding the spatial and temporal dynamics of outbreaks. Twitter gives researchers and public health officials an opportunity to examine the spread of influenza in real-time and at multiple geographical scales. In this paper, we introduce an improved framework for monitoring influenza outbreaks using the social media platform Twitter. Relying upon techniques from geographic information science (GIS) and data mining, Twitter messages were collected, filtered, and analyzed for the thirty most populated cities in the United States during the 2013-2014 flu season. The results of this procedure are compared with national, regional, and local flu outbreak reports, revealing a statistically significant correlation between the two data sources. The main contribution of this paper is to introduce a comprehensive data mining process that enhances previous attempts to accurately identify tweets related to influenza. Additionally, geographical information systems allow us to target, filter, and normalize Twitter messages."
},
{
"pmid": "26068569",
"title": "New technologies in predicting, preventing and controlling emerging infectious diseases.",
"abstract": "Surveillance of emerging infectious diseases is vital for the early identification of public health threats. Emergence of novel infections is linked to human factors such as population density, travel and trade and ecological factors like climate change and agricultural practices. A wealth of new technologies is becoming increasingly available for the rapid molecular identification of pathogens but also for the more accurate monitoring of infectious disease activity. Web-based surveillance tools and epidemic intelligence methods, used by all major public health institutions, are intended to facilitate risk assessment and timely outbreak detection. In this review, we present new methods for regional and global infectious disease surveillance and advances in epidemic modeling aimed to predict and prevent future infectious diseases threats."
},
{
"pmid": "27014744",
"title": "Using Social Media to Perform Local Influenza Surveillance in an Inner-City Hospital: A Retrospective Observational Study.",
"abstract": "BACKGROUND\nPublic health officials and policy makers in the United States expend significant resources at the national, state, county, and city levels to measure the rate of influenza infection. These individuals rely on influenza infection rate information to make important decisions during the course of an influenza season driving vaccination campaigns, clinical guidelines, and medical staffing. Web and social media data sources have emerged as attractive alternatives to supplement existing practices. While traditional surveillance methods take 1-2 weeks, and significant labor, to produce an infection estimate in each locale, web and social media data are available in near real-time for a broad range of locations.\n\n\nOBJECTIVE\nThe objective of this study was to analyze the efficacy of flu surveillance from combining data from the websites Google Flu Trends and HealthTweets at the local level. We considered both emergency department influenza-like illness cases and laboratory-confirmed influenza cases for a single hospital in the City of Baltimore.\n\n\nMETHODS\nThis was a retrospective observational study comparing estimates of influenza activity of Google Flu Trends and Twitter to actual counts of individuals with laboratory-confirmed influenza, and counts of individuals presenting to the emergency department with influenza-like illness cases. Data were collected from November 20, 2011 through March 16, 2014. Each parameter was evaluated on the municipal, regional, and national scale. We examined the utility of social media data for tracking actual influenza infection at the municipal, state, and national levels. Specifically, we compared the efficacy of Twitter and Google Flu Trends data.\n\n\nRESULTS\nWe found that municipal-level Twitter data was more effective than regional and national data when tracking actual influenza infection rates in a Baltimore inner-city hospital. When combined, national-level Twitter and Google Flu Trends data outperformed each data source individually. In addition, influenza-like illness data at all levels of geographic granularity were best predicted by national Google Flu Trends data.\n\n\nCONCLUSIONS\nIn order to overcome sensitivity to transient events, such as the news cycle, the best-fitting Google Flu Trends model relies on a 4-week moving average, suggesting that it may also be sacrificing sensitivity to transient fluctuations in influenza infection to achieve predictive power. Implications for influenza forecasting are discussed in this report."
},
{
"pmid": "26513245",
"title": "Combining Search, Social Media, and Traditional Data Sources to Improve Influenza Surveillance.",
"abstract": "We present a machine learning-based methodology capable of providing real-time (\"nowcast\") and forecast estimates of influenza activity in the US by leveraging data from multiple data sources including: Google searches, Twitter microblogs, nearly real-time hospital visit records, and data from a participatory surveillance system. Our main contribution consists of combining multiple influenza-like illnesses (ILI) activity estimates, generated independently with each data source, into a single prediction of ILI utilizing machine learning ensemble approaches. Our methodology exploits the information in each data source and produces accurate weekly ILI predictions for up to four weeks ahead of the release of CDC's ILI reports. We evaluate the predictive ability of our ensemble approach during the 2013-2014 (retrospective) and 2014-2015 (live) flu seasons for each of the four weekly time horizons. Our ensemble approach demonstrates several advantages: (1) our ensemble method's predictions outperform every prediction using each data source independently, (2) our methodology can produce predictions one week ahead of GFT's real-time estimates with comparable accuracy, and (3) our two and three week forecast estimates have comparable accuracy to real-time predictions using an autoregressive model. Moreover, our results show that considerable insight is gained from incorporating disparate data streams, in the form of social media and crowd sourced data, into influenza predictions in all time horizons."
},
{
"pmid": "28085877",
"title": "Forecasting Zika Incidence in the 2016 Latin America Outbreak Combining Traditional Disease Surveillance with Search, Social Media, and News Report Data.",
"abstract": "BACKGROUND\nOver 400,000 people across the Americas are thought to have been infected with Zika virus as a consequence of the 2015-2016 Latin American outbreak. Official government-led case count data in Latin America are typically delayed by several weeks, making it difficult to track the disease in a timely manner. Thus, timely disease tracking systems are needed to design and assess interventions to mitigate disease transmission.\n\n\nMETHODOLOGY/PRINCIPAL FINDINGS\nWe combined information from Zika-related Google searches, Twitter microblogs, and the HealthMap digital surveillance system with historical Zika suspected case counts to track and predict estimates of suspected weekly Zika cases during the 2015-2016 Latin American outbreak, up to three weeks ahead of the publication of official case data. We evaluated the predictive power of these data and used a dynamic multivariable approach to retrospectively produce predictions of weekly suspected cases for five countries: Colombia, El Salvador, Honduras, Venezuela, and Martinique. Models that combined Google (and Twitter data where available) with autoregressive information showed the best out-of-sample predictive accuracy for 1-week ahead predictions, whereas models that used only Google and Twitter typically performed best for 2- and 3-week ahead predictions.\n\n\nSIGNIFICANCE\nGiven the significant delay in the release of official government-reported Zika case counts, we show that these Internet-based data streams can be used as timely and complementary ways to assess the dynamics of the outbreak."
},
{
"pmid": "28934949",
"title": "Online surveillance of media health event reporting in Nepal: digital disease detection from a One Health perspective.",
"abstract": "BACKGROUND\nTraditional media and the internet are crucial sources of health information. Media can significantly shape public opinion, knowledge and understanding of emerging and endemic health threats. As digital communication rapidly progresses, local access and dissemination of health information contribute significantly to global disease detection and reporting.\n\n\nMETHODS\nHealth event reports in Nepal (October 2013-December 2014) were used to characterize Nepal's media environment from a One Health perspective using HealthMap - a global online disease surveillance and mapping tool. Event variables (location, media source type, disease or risk factor of interest, and affected species) were extracted from HealthMap.\n\n\nRESULTS\nA total of 179 health reports were captured from various sources including newspapers, inter-government agency bulletins, individual reports, and trade websites, yielding 108 (60%) unique articles. Human health events were reported most often (n = 85; 79%), followed by animal health events (n = 23; 21%), with no reports focused solely on environmental health.\n\n\nCONCLUSIONS\nBy expanding event coverage across all of the health sectors, media in developing countries could play a crucial role in national risk communication efforts and could enhance early warning systems for disasters and disease outbreaks."
},
{
"pmid": "27880777",
"title": "Avian Influenza Risk Surveillance in North America with Online Media.",
"abstract": "The use of Internet-based sources of information for health surveillance applications has increased in recent years, as a greater share of social and media activity happens through online channels. The potential surveillance value in online sources of information about emergent health events include early warning, situational awareness, risk perception and evaluation of health messaging among others. The challenge in harnessing these sources of data is the vast number of potential sources to monitor and developing the tools to translate dynamic unstructured content into actionable information. In this paper we investigated the use of one social media outlet, Twitter, for surveillance of avian influenza risk in North America. We collected AI-related messages over a five-month period and compared these to official surveillance records of AI outbreaks. A fully automated data extraction and analysis pipeline was developed to acquire, structure, and analyze social media messages in an online context. Two methods of outbreak detection; a static threshold and a cumulative-sum dynamic threshold; based on a time series model of normal activity were evaluated for their ability to discern important time periods of AI-related messaging and media activity. Our findings show that peaks in activity were related to real-world events, with outbreaks in Nigeria, France and the USA receiving the most attention while those in China were less evident in the social media data. Topic models found themes related to specific AI events for the dynamic threshold method, while many for the static method were ambiguous. Further analyses of these data might focus on quantifying the bias in coverage and relation between outbreak characteristics and detectability in social media data. Finally, while the analyses here focused on broad themes and trends, there is likely additional value in developing methods for identifying low-frequency messages, operationalizing this methodology into a comprehensive system for visualizing patterns extracted from the Internet, and integrating these data with other sources of information such as wildlife, environment, and agricultural data."
},
{
"pmid": "20616993",
"title": "Text and structural data mining of influenza mentions in Web and social media.",
"abstract": "Text and structural data mining of web and social media (WSM) provides a novel disease surveillance resource and can identify online communities for targeted public health communications (PHC) to assure wide dissemination of pertinent information. WSM that mention influenza are harvested over a 24-week period, 5 October 2008 to 21 March 2009. Link analysis reveals communities for targeted PHC. Text mining is shown to identify trends in flu posts that correlate to real-world influenza-like illness patient report data. We also bring to bear a graph-based data mining technique to detect anomalies among flu blogs connected by publisher type, links, and user-tags."
},
{
"pmid": "21573238",
"title": "The use of Twitter to track levels of disease activity and public concern in the U.S. during the influenza A H1N1 pandemic.",
"abstract": "Twitter is a free social networking and micro-blogging service that enables its millions of users to send and read each other's \"tweets,\" or short, 140-character messages. The service has more than 190 million registered users and processes about 55 million tweets per day. Useful information about news and geopolitical events lies embedded in the Twitter stream, which embodies, in the aggregate, Twitter users' perspectives and reactions to current events. By virtue of sheer volume, content embedded in the Twitter stream may be useful for tracking or even forecasting behavior if it can be extracted in an efficient manner. In this study, we examine the use of information embedded in the Twitter stream to (1) track rapidly-evolving public sentiment with respect to H1N1 or swine flu, and (2) track and measure actual disease activity. We also show that Twitter can be used as a measure of public interest or concern about health-related events. Our results show that estimates of influenza-like illness derived from Twitter chatter accurately track reported disease levels."
},
{
"pmid": "28756796",
"title": "Identification of Keywords From Twitter and Web Blog Posts to Detect Influenza Epidemics in Korea.",
"abstract": "OBJECTIVE\nSocial media data are a highly contextual health information source. The objective of this study was to identify Korean keywords for detecting influenza epidemics from social media data.\n\n\nMETHODS\nWe included data from Twitter and online blog posts to obtain a sufficient number of candidate indicators and to represent a larger proportion of the Korean population. We performed the following steps: initial keyword selection; generation of a keyword time series using a preprocessing approach; optimal feature selection; model building and validation using least absolute shrinkage and selection operator, support vector machine (SVM), and random forest regression (RFR).\n\n\nRESULTS\nA total of 15 keywords optimally detected the influenza epidemic, evenly distributed across Twitter and blog data sources. Model estimates generated using our SVM model were highly correlated with recent influenza incidence data.\n\n\nCONCLUSIONS\nThe basic principles underpinning our approach could be applied to other countries, languages, infectious diseases, and social media sources. Social media monitoring using our approach may support and extend the capacity of traditional surveillance systems for detecting emerging influenza. (Disaster Med Public Health Preparedness. 2018; 12: 352-359)."
},
{
"pmid": "24349542",
"title": "National and local influenza surveillance through Twitter: an analysis of the 2012-2013 influenza epidemic.",
"abstract": "Social media have been proposed as a data source for influenza surveillance because they have the potential to offer real-time access to millions of short, geographically localized messages containing information regarding personal well-being. However, accuracy of social media surveillance systems declines with media attention because media attention increases \"chatter\" - messages that are about influenza but that do not pertain to an actual infection - masking signs of true influenza prevalence. This paper summarizes our recently developed influenza infection detection algorithm that automatically distinguishes relevant tweets from other chatter, and we describe our current influenza surveillance system which was actively deployed during the full 2012-2013 influenza season. Our objective was to analyze the performance of this system during the most recent 2012-2013 influenza season and to analyze the performance at multiple levels of geographic granularity, unlike past studies that focused on national or regional surveillance. Our system's influenza prevalence estimates were strongly correlated with surveillance data from the Centers for Disease Control and Prevention for the United States (r = 0.93, p < 0.001) as well as surveillance data from the Department of Health and Mental Hygiene of New York City (r = 0.88, p < 0.001). Our system detected the weekly change in direction (increasing or decreasing) of influenza prevalence with 85% accuracy, a nearly twofold increase over a simpler model, demonstrating the utility of explicitly distinguishing infection tweets from other chatter."
}
] |
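The related-work paragraph of the record above mentions a family of simple alerting rules: the EARS-style algorithms, a cut-off of twice the standard deviation for Google Trends, a fixed frequency threshold for Coosto, and EWMA charts for social response. The sketch below (referenced in that paragraph) is a minimal, hypothetical illustration of such a rule applied to a daily count series of already-filtered avian-influenza tweets; the function name, smoothing factor, multiplier and baseline window are assumptions for the example, not the settings used in the cited studies.

```python
import numpy as np

def ewma_alerts(daily_counts, lam=0.3, k=2.0, baseline=28):
    """Return indices of days whose exponentially smoothed count exceeds
    mean + k * standard deviation of the preceding `baseline` days
    (toy EWMA-style alerting rule, illustrative only)."""
    counts = np.asarray(daily_counts, dtype=float)
    smoothed = np.zeros_like(counts)
    alerts = []
    for t, c in enumerate(counts):
        # Exponentially weighted moving average of the daily counts.
        smoothed[t] = c if t == 0 else lam * c + (1.0 - lam) * smoothed[t - 1]
        if t >= baseline:
            window = counts[t - baseline:t]
            threshold = window.mean() + k * window.std(ddof=1)
            if smoothed[t] > threshold:
                alerts.append(t)
    return alerts

# Example on a synthetic series of daily tweet counts with an
# artificial surge near the end; alerts should fire around days 100-110.
rng = np.random.default_rng(1)
series = rng.poisson(5, size=120)
series[100:110] += 30
print(ewma_alerts(series))
```

In practice the counts fed to such a detector would first be restricted to relevant posts, for example via the negative-keyword lists or the adaptive relevant/irrelevant classification described in the paragraph above.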
Journal of Cheminformatics | 33430938 | PMC6892210 | 10.1186/s13321-019-0397-9 | A de novo molecular generation method using latent vector based generative adversarial network | Deep learning methods applied to drug discovery have been used to generate novel structures. In this study, we propose a new deep learning architecture, LatentGAN, which combines an autoencoder and a generative adversarial neural network for de novo molecular design. We applied the method in two scenarios: one to generate random drug-like compounds and another to generate target-biased compounds. Our results show that the method works well in both cases. Sampled compounds from the trained model can largely occupy the same chemical space as the training set and also generate a substantial fraction of novel compounds. Moreover, the drug-likeness score of compounds sampled from LatentGAN is also similar to that of the training set. Lastly, generated compounds differ from those obtained with a Recurrent Neural Network-based generative model approach, indicating that both methods can be used complementarily. | Related works: A related architecture to the LatentGAN is the Adversarial Autoencoder (AAE) [46]. The AAE uses a discriminator to introduce adversarial training to the autoencoder and is typically trained using a 3-step scheme of (a) discriminator, (b) encoder, and (c) encoder and decoder, compared to the LatentGAN's 2-step training. The AAE has been used in generative modeling of molecules to sample molecular fingerprints using additional encoder training steps [47], as well as SMILES representations [48, 49]. In other application areas, conditional AAEs with similar training schemes have been applied to manipulate images of faces [50]. For the latter application, approaches utilizing multiple discriminators have been used to combine conditional VAEs and conditional GANs to enforce constraints on the latent space [51] and thus increase the realism of the images. | [
"29366762",
"27599991",
"27491648",
"29750902",
"29392184",
"29086083",
"29532027",
"21452978",
"22270643",
"26881908",
"30868314",
"29235269",
"29995272",
"29340790",
"11259830",
"29762023",
"29569445",
"27732574",
"27899562",
"29086166",
"28029644",
"30180591",
"19774591",
"8709122",
"20298526",
"15667143",
"30118593"
] | [
{
"pmid": "29366762",
"title": "The rise of deep learning in drug discovery.",
"abstract": "Over the past decade, deep learning has achieved remarkable success in various artificial intelligence research areas. Evolved from the previous research on artificial neural networks, this technology has shown superior performance to other machine learning algorithms in areas such as image and voice recognition, natural language processing, among others. The first wave of applications of deep learning in pharmaceutical research has emerged in recent years, and its utility has gone beyond bioactivity predictions and has shown promise in addressing diverse problems in drug discovery. Examples will be discussed covering bioactivity prediction, de novo molecular design, synthesis prediction and biological image analysis."
},
{
"pmid": "27599991",
"title": "The Next Era: Deep Learning in Pharmaceutical Research.",
"abstract": "Over the past decade we have witnessed the increasing sophistication of machine learning algorithms applied in daily use from internet searches, voice recognition, social network software to machine vision software in cameras, phones, robots and self-driving cars. Pharmaceutical research has also seen its fair share of machine learning developments. For example, applying such methods to mine the growing datasets that are created in drug discovery not only enables us to learn from the past but to predict a molecule's properties and behavior in future. The latest machine learning algorithm garnering significant attention is deep learning, which is an artificial neural network with multiple hidden layers. Publications over the last 3 years suggest that this algorithm may have advantages over previous machine learning methods and offer a slight but discernable edge in predictive performance. The time has come for a balanced review of this technique but also to apply machine learning methods such as deep learning across a wider array of endpoints relevant to pharmaceutical research for which the datasets are growing such as physicochemical property prediction, formulation prediction, absorption, distribution, metabolism, excretion and toxicity (ADME/Tox), target prediction and skin permeation, etc. We also show that there are many potential applications of deep learning beyond cheminformatics. It will be important to perform prospective testing (which has been carried out rarely to date) in order to convince skeptics that there will be benefits from investing in this technique."
},
{
"pmid": "27491648",
"title": "Deep Learning in Drug Discovery.",
"abstract": "Artificial neural networks had their first heyday in molecular informatics and drug discovery approximately two decades ago. Currently, we are witnessing renewed interest in adapting advanced neural network architectures for pharmaceutical research by borrowing from the field of \"deep learning\". Compared with some of the other life sciences, their application in drug discovery is still limited. Here, we provide an overview of this emerging field of molecular informatics, present the basic concepts of prominent deep learning methods and offer motivation to explore these techniques for their usefulness in computer-assisted drug discovery and design. We specifically emphasize deep neural networks, restricted Boltzmann machine networks and convolutional networks."
},
{
"pmid": "29750902",
"title": "Machine learning in chemoinformatics and drug discovery.",
"abstract": "Chemoinformatics is an established discipline focusing on extracting, processing and extrapolating meaningful data from chemical structures. With the rapid explosion of chemical 'big' data from HTS and combinatorial synthesis, machine learning has become an indispensable tool for drug designers to mine chemical information from large compound databases to design drugs with important biological properties. To process the chemical data, we first reviewed multiple processing layers in the chemoinformatics pipeline followed by the introduction of commonly used machine learning models in drug discovery and QSAR analysis. Here, we present basic principles and recent case studies to demonstrate the utility of machine learning techniques in chemoinformatics analyses; and we discuss limitations and future directions to guide further development in this evolving field."
},
{
"pmid": "29392184",
"title": "Generating Focused Molecule Libraries for Drug Discovery with Recurrent Neural Networks.",
"abstract": "In de novo drug design, computational strategies are used to generate novel molecules with good affinity to the desired biological target. In this work, we show that recurrent neural networks can be trained as generative models for molecular structures, similar to statistical language models in natural language processing. We demonstrate that the properties of the generated molecules correlate very well with the properties of the molecules used to train the model. In order to enrich libraries with molecules active toward a given biological target, we propose to fine-tune the model with small sets of molecules, which are known to be active against that target. Against Staphylococcus aureus, the model reproduced 14% of 6051 hold-out test molecules that medicinal chemists designed, whereas against Plasmodium falciparum (Malaria), it reproduced 28% of 1240 test molecules. When coupled with a scoring function, our model can perform the complete de novo drug design cycle to generate large sets of novel molecules for drug discovery."
},
{
"pmid": "29086083",
"title": "Molecular de-novo design through deep reinforcement learning.",
"abstract": "This work introduces a method to tune a sequence-based generative model for molecular de novo design that through augmented episodic likelihood can learn to generate structures with certain specified desirable properties. We demonstrate how this model can execute a range of tasks such as generating analogues to a query structure and generating compounds predicted to be active against a biological target. As a proof of principle, the model is first trained to generate molecules that do not contain sulphur. As a second example, the model is trained to generate analogues to the drug Celecoxib, a technique that could be used for scaffold hopping or library expansion starting from a single molecule. Finally, when tuning the model towards generating compounds predicted to be active against the dopamine receptor type 2, the model generates structures of which more than 95% are predicted to be active, including experimentally confirmed actives that have not been included in either the generative model nor the activity prediction model. Graphical abstract ."
},
{
"pmid": "29532027",
"title": "Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules.",
"abstract": "We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation. This model allows us to generate new molecules for efficient exploration and optimization through open-ended spaces of chemical compounds. A deep neural network was trained on hundreds of thousands of existing chemical structures to construct three coupled functions: an encoder, a decoder, and a predictor. The encoder converts the discrete representation of a molecule into a real-valued continuous vector, and the decoder converts these continuous vectors back to discrete molecular representations. The predictor estimates chemical properties from the latent continuous vector representation of the molecule. Continuous representations of molecules allow us to automatically generate novel chemical structures by performing simple operations in the latent space, such as decoding random vectors, perturbing known chemical structures, or interpolating between molecules. Continuous representations also allow the use of powerful gradient-based optimization to efficiently guide the search for optimized functional compounds. We demonstrate our method in the domain of drug-like molecules and also in a set of molecules with fewer that nine heavy atoms."
},
{
"pmid": "21452978",
"title": "Reaction-driven de novo design, synthesis and testing of potential type II kinase inhibitors.",
"abstract": "BACKGROUND\nDe novo design of drug-like compounds with a desired pharmacological activity profile has become feasible through innovative computer algorithms. Fragment-based design and simulated chemical reactions allow for the rapid generation of candidate compounds as blueprints for organic synthesis.\n\n\nMETHODS\nWe used a combination of complementary virtual-screening tools for the analysis of de novo designed compounds that were generated with the aim to inhibit inactive polo-like kinase 1 (Plk1), a target for the development of cancer therapeutics. A homology model of the inactive state of Plk1 was constructed and the nucleotide binding pocket conformations in the DFG-in and DFG-out state were compared. The de novo-designed compounds were analyzed using pharmacophore matching, structure-activity landscape analysis, and automated ligand docking. One compound was synthesized and tested in vitro.\n\n\nRESULTS\nThe majority of the designed compounds possess a generic architecture present in known kinase inhibitors. Predictions favor kinases as targets of these compounds but also suggest potential off-target effects. Several bioisosteric replacements were suggested, and de novo designed compounds were assessed by automated docking for potential binding preference toward the inactive (type II inhibitors) over the active conformation (type I inhibitors) of the kinase ATP binding site. One selected compound was successfully synthesized as suggested by the software. The de novo-designed compound exhibited inhibitory activity against inactive Plk1 in vitro, but did not show significant inhibition of active Plk1 and 38 other kinases tested.\n\n\nCONCLUSIONS\nComputer-based de novo design of screening candidates in combination with ligand- and receptor-based virtual screening generates motivated suggestions for focused library design in hit and lead discovery. Attractive, synthetically accessible compounds can be obtained together with predicted on- and off-target profiles and desired activities."
},
{
"pmid": "22270643",
"title": "Quantifying the chemical beauty of drugs.",
"abstract": "Drug-likeness is a key consideration when selecting compounds during the early stages of drug discovery. However, evaluation of drug-likeness in absolute terms does not reflect adequately the whole spectrum of compound quality. More worryingly, widely used rules may inadvertently foster undesirable molecular property inflation as they permit the encroachment of rule-compliant compounds towards their boundaries. We propose a measure of drug-likeness based on the concept of desirability called the quantitative estimate of drug-likeness (QED). The empirical rationale of QED reflects the underlying distribution of molecular properties. QED is intuitive, transparent, straightforward to implement in many practical settings and allows compounds to be ranked by their relative merit. We extended the utility of QED by applying it to the problem of molecular target druggability assessment by prioritizing a large set of published bioactive compounds. The measure may also capture the abstract notion of aesthetics in medicinal chemistry."
},
{
"pmid": "26881908",
"title": "De Novo Design at the Edge of Chaos.",
"abstract": "Computational medicinal chemistry offers viable strategies for finding, characterizing, and optimizing innovative pharmacologically active compounds. Technological advances in both computer hardware and software as well as biological chemistry have enabled a renaissance of computer-assisted \"de novo\" design of molecules with desired pharmacological properties. Here, we present our current perspective on the concept of automated molecule generation by highlighting chemocentric methods that may capture druglike chemical space, consider ligand promiscuity for hit and lead finding, and provide fresh ideas for the rational design of customized screening of compound libraries."
},
{
"pmid": "30868314",
"title": "Exploring the GDB-13 chemical space using deep generative models.",
"abstract": "Recent applications of recurrent neural networks (RNN) enable training models that sample the chemical space. In this study we train RNN with molecular string representations (SMILES) with a subset of the enumerated database GDB-13 (975 million molecules). We show that a model trained with 1 million structures (0.1% of the database) reproduces 68.9% of the entire database after training, when sampling 2 billion molecules. We also developed a method to assess the quality of the training process using negative log-likelihood plots. Furthermore, we use a mathematical model based on the \"coupon collector problem\" that compares the trained model to an upper bound and thus we are able to quantify how much it has learned. We also suggest that this method can be used as a tool to benchmark the learning capabilities of any molecular generative model architecture. Additionally, an analysis of the generated chemical space was performed, which shows that, mostly due to the syntax of SMILES, complex molecules with many rings and heteroatoms are more difficult to sample."
},
{
"pmid": "29235269",
"title": "Application of Generative Autoencoder in De Novo Molecular Design.",
"abstract": "A major challenge in computational chemistry is the generation of novel molecular structures with desirable pharmacological and physiochemical properties. In this work, we investigate the potential use of autoencoder, a deep learning methodology, for de novo molecular design. Various generative autoencoders were used to map molecule structures into a continuous latent space and vice versa and their performance as structure generator was assessed. Our results show that the latent space preserves chemical similarity principle and thus can be used for the generation of analogue structures. Furthermore, the latent space created by autoencoders were searched systematically to generate novel compounds with predicted activity against dopamine receptor type 2 and compounds similar to known active compounds not included in the trainings set were identified."
},
{
"pmid": "29995272",
"title": "Molecular generative model based on conditional variational autoencoder for de novo molecular design.",
"abstract": "We propose a molecular generative model based on the conditional variational autoencoder for de novo molecular design. It is specialized to control multiple molecular properties simultaneously by imposing them on a latent space. As a proof of concept, we demonstrate that it can be used to generate drug-like molecules with five target properties. We were also able to adjust a single property without changing the others and to manipulate it beyond the range of the dataset."
},
{
"pmid": "29340790",
"title": "An automated framework for QSAR model building.",
"abstract": "BACKGROUND\nIn-silico quantitative structure-activity relationship (QSAR) models based tools are widely used to screen huge databases of compounds in order to determine the biological properties of chemical molecules based on their chemical structure. With the passage of time, the exponentially growing amount of synthesized and known chemicals data demands computationally efficient automated QSAR modeling tools, available to researchers that may lack extensive knowledge of machine learning modeling. Thus, a fully automated and advanced modeling platform can be an important addition to the QSAR community.\n\n\nRESULTS\nIn the presented workflow the process from data preparation to model building and validation has been completely automated. The most critical modeling tasks (data curation, data set characteristics evaluation, variable selection and validation) that largely influence the performance of QSAR models were focused. It is also included the ability to quickly evaluate the feasibility of a given data set to be modeled. The developed framework is tested on data sets of thirty different problems. The best-optimized feature selection methodology in the developed workflow is able to remove 62-99% of all redundant data. On average, about 19% of the prediction error was reduced by using feature selection producing an increase of 49% in the percentage of variance explained (PVE) compared to models without feature selection. Selecting only the models with a modelability score above 0.6, average PVE scores were 0.71. A strong correlation was verified between the modelability scores and the PVE of the models produced with variable selection.\n\n\nCONCLUSIONS\nWe developed an extendable and highly customizable fully automated QSAR modeling framework. This designed workflow does not require any advanced parameterization nor depends on users decisions or expertise in machine learning/programming. With just a given target or problem, the workflow follows an unbiased standard protocol to develop reliable QSAR models by directly accessing online manually curated databases or by using private data sets. The other distinctive features of the workflow include prior estimation of data modelability to avoid time-consuming modeling trials for non modelable data sets, an efficient variable selection procedure and the facility of output availability at each modeling task for the diverse application and reproduction of historical predictions. The results reached on a selection of thirty QSAR problems suggest that the approach is capable of building reliable models even for challenging problems."
},
{
"pmid": "11259830",
"title": "Experimental and computational approaches to estimate solubility and permeability in drug discovery and development settings.",
"abstract": "Experimental and computational approaches to estimate solubility and permeability in discovery and development settings are described. In the discovery setting 'the rule of 5' predicts that poor absorption or permeation is more likely when there are more than 5 H-bond donors, 10 H-bond acceptors, the molecular weight (MWT) is greater than 500 and the calculated Log P (CLogP) is greater than 5 (or MlogP > 4.15). Computational methodology for the rule-based Moriguchi Log P (MLogP) calculation is described. Turbidimetric solubility measurement is described and applied to known drugs. High throughput screening (HTS) leads tend to have higher MWT and Log P and lower turbidimetric solubility than leads in the pre-HTS era. In the development setting, solubility calculations focus on exact value prediction and are difficult because of polymorphism. Recent work on linear free energy relationships and Log P approaches are critically reviewed. Useful predictions are possible in closely related analog series when coupled with experimental thermodynamic solubility measurements."
},
{
"pmid": "29762023",
"title": "Reinforced Adversarial Neural Computer for de Novo Molecular Design.",
"abstract": "In silico modeling is a crucial milestone in modern drug design and development. Although computer-aided approaches in this field are well-studied, the application of deep learning methods in this research area is at the beginning. In this work, we present an original deep neural network (DNN) architecture named RANC (Reinforced Adversarial Neural Computer) for the de novo design of novel small-molecule organic structures based on the generative adversarial network (GAN) paradigm and reinforcement learning (RL). As a generator RANC uses a differentiable neural computer (DNC), a category of neural networks, with increased generation capabilities due to the addition of an explicit memory bank, which can mitigate common problems found in adversarial settings. The comparative results have shown that RANC trained on the SMILES string representation of the molecules outperforms its first DNN-based counterpart ORGANIC by several metrics relevant to drug discovery: the number of unique structures, passing medicinal chemistry filters (MCFs), Muegge criteria, and high QED scores. RANC is able to generate structures that match the distributions of the key chemical features/descriptors (e.g., MW, logP, TPSA) and lengths of the SMILES strings in the training data set. Therefore, RANC can be reasonably regarded as a promising starting point to develop novel molecules with activity against different biological targets or pathways. In addition, this approach allows scientists to save time and covers a broad chemical space populated with novel and diverse compounds."
},
{
"pmid": "29569445",
"title": "Adversarial Threshold Neural Computer for Molecular de Novo Design.",
"abstract": "In this article, we propose the deep neural network Adversarial Threshold Neural Computer (ATNC). The ATNC model is intended for the de novo design of novel small-molecule organic structures. The model is based on generative adversarial network architecture and reinforcement learning. ATNC uses a Differentiable Neural Computer as a generator and has a new specific block, called adversarial threshold (AT). AT acts as a filter between the agent (generator) and the environment (discriminator + objective reward functions). Furthermore, to generate more diverse molecules we introduce a new objective reward function named Internal Diversity Clustering (IDC). In this work, ATNC is tested and compared with the ORGANIC model. Both models were trained on the SMILES string representation of the molecules, using four objective functions (internal similarity, Muegge druglikeness filter, presence or absence of sp3-rich fragments, and IDC). The SMILES representations of 15K druglike molecules from the ChemDiv collection were used as a training data set. For the different functions, ATNC outperforms ORGANIC. Combined with the IDC, ATNC generates 72% of valid and 77% of unique SMILES strings, while ORGANIC generates only 7% of valid and 86% of unique SMILES strings. For each set of molecules generated by ATNC and ORGANIC, we analyzed distributions of four molecular descriptors (number of atoms, molecular weight, logP, and tpsa) and calculated five chemical statistical features (internal diversity, number of unique heterocycles, number of clusters, number of singletons, and number of compounds that have not been passed through medicinal chemistry filters). Analysis of key molecular descriptors and chemical statistical features demonstrated that the molecules generated by ATNC elicited better druglikeness properties. We also performed in vitro validation of the molecules generated by ATNC; results indicated that ATNC is an effective method for producing hit compounds."
},
{
"pmid": "27732574",
"title": "Hybrid computing using a neural network with dynamic external memory.",
"abstract": "Artificial neural networks are remarkably adept at sensory processing, sequence learning and reinforcement learning, but are limited in their ability to represent variables and data structures and to store data over long timescales, owing to the lack of an external memory. Here we introduce a machine learning model called a differentiable neural computer (DNC), which consists of a neural network that can read from and write to an external memory matrix, analogous to the random-access memory in a conventional computer. Like a conventional computer, it can use its memory to represent and manipulate complex data structures, but, like a neural network, it can learn to do so from data. When trained with supervised learning, we demonstrate that a DNC can successfully answer synthetic questions designed to emulate reasoning and inference problems in natural language. We show that it can learn tasks such as finding the shortest path between specified points and inferring the missing links in randomly generated graphs, and then generalize these tasks to specific graphs such as transport networks and family trees. When trained with reinforcement learning, a DNC can complete a moving blocks puzzle in which changing goals are specified by sequences of symbols. Taken together, our results demonstrate that DNCs have the capacity to solve complex, structured tasks that are inaccessible to neural networks without external read-write memory."
},
{
"pmid": "27899562",
"title": "The ChEMBL database in 2017.",
"abstract": "ChEMBL is an open large-scale bioactivity database (https://www.ebi.ac.uk/chembl), previously described in the 2012 and 2014 Nucleic Acids Research Database Issues. Since then, alongside the continued extraction of data from the medicinal chemistry literature, new sources of bioactivity data have also been added to the database. These include: deposited data sets from neglected disease screening; crop protection data; drug metabolism and disposition data and bioactivity data from patents. A number of improvements and new features have also been incorporated. These include the annotation of assays and targets using ontologies, the inclusion of targets and indications for clinical candidates, addition of metabolic pathways for drugs and calculation of structural alerts. The ChEMBL data can be accessed via a web-interface, RDF distribution, data downloads and RESTful web-services."
},
{
"pmid": "28029644",
"title": "The cornucopia of meaningful leads: Applying deep adversarial autoencoders for new molecule development in oncology.",
"abstract": "Recent advances in deep learning and specifically in generative adversarial networks have demonstrated surprising results in generating new images and videos upon request even using natural language as input. In this paper we present the first application of generative adversarial autoencoders (AAE) for generating novel molecular fingerprints with a defined set of parameters. We developed a 7-layer AAE architecture with the latent middle layer serving as a discriminator. As an input and output the AAE uses a vector of binary fingerprints and concentration of the molecule. In the latent layer we also introduced a neuron responsible for growth inhibition percentage, which when negative indicates the reduction in the number of tumor cells after the treatment. To train the AAE we used the NCI-60 cell line assay data for 6252 compounds profiled on MCF-7 cell line. The output of the AAE was used to screen 72 million compounds in PubChem and select candidate molecules with potential anti-cancer properties. This approach is a proof of concept of an artificially-intelligent drug discovery engine, where AAEs are used to generate new molecular fingerprints with the desired molecular properties."
},
{
"pmid": "30180591",
"title": "Entangled Conditional Adversarial Autoencoder for de Novo Drug Discovery.",
"abstract": "Modern computational approaches and machine learning techniques accelerate the invention of new drugs. Generative models can discover novel molecular structures within hours, while conventional drug discovery pipelines require months of work. In this article, we propose a new generative architecture, entangled conditional adversarial autoencoder, that generates molecular structures based on various properties, such as activity against a specific protein, solubility, or ease of synthesis. We apply the proposed model to generate a novel inhibitor of Janus kinase 3, implicated in rheumatoid arthritis, psoriasis, and vitiligo. The discovered molecule was tested in vitro and showed good activity and selectivity."
},
{
"pmid": "8709122",
"title": "The properties of known drugs. 1. Molecular frameworks.",
"abstract": "In order to better understand the common features present in drug molecules, we use shape description methods to analyze a database of commercially available drugs and prepare a list of common drug shapes. A useful way of organizing this structural data is to group the atoms of each drug molecule into ring, linker, framework, and side chain atoms. On the basis of the two-dimensional molecular structures (without regard to atom type, hybridization, and bond order), there are 1179 different frameworks among the 5120 compounds analyzed. However, the shapes of half of the drugs in the database are described by the 32 most frequently occurring frameworks. This suggests that the diversity of shapes in the set of known drugs is extremely low. In our second method of analysis, in which atom type, hybridization, and bond order are considered, more diversity is seen; there are 2506 different frameworks among the 5120 compounds in the database, and the most frequently occurring 42 frameworks account for only one-fourth of the drugs. We discuss the possible interpretations of these findings and the way they may be used to guide future drug discovery research."
},
{
"pmid": "20298526",
"title": "Estimation of synthetic accessibility score of drug-like molecules based on molecular complexity and fragment contributions.",
"abstract": "BACKGROUND\nA method to estimate ease of synthesis (synthetic accessibility) of drug-like molecules is needed in many areas of the drug discovery process. The development and validation of such a method that is able to characterize molecule synthetic accessibility as a score between 1 (easy to make) and 10 (very difficult to make) is described in this article.\n\n\nRESULTS\nThe method for estimation of the synthetic accessibility score (SAscore) described here is based on a combination of fragment contributions and a complexity penalty. Fragment contributions have been calculated based on the analysis of one million representative molecules from PubChem and therefore one can say that they capture historical synthetic knowledge stored in this database. The molecular complexity score takes into account the presence of non-standard structural features, such as large rings, non-standard ring fusions, stereocomplexity and molecule size. The method has been validated by comparing calculated SAscores with ease of synthesis as estimated by experienced medicinal chemists for a set of 40 molecules. The agreement between calculated and manually estimated synthetic accessibility is very good with r2 = 0.89.\n\n\nCONCLUSION\nA novel method to estimate synthetic accessibility of molecules has been developed. This method uses historical synthetic knowledge obtained by analyzing information from millions of already synthesized chemicals and considers also molecule complexity. The method is sufficiently fast and provides results consistent with estimation of ease of synthesis by experienced medicinal chemists. The calculated SAscore may be used to support various drug discovery processes where a large number of molecules needs to be ranked based on their synthetic accessibility, for example when purchasing samples for screening, selecting hits from high-throughput screening for follow-up, or ranking molecules generated by various de novo design approaches."
},
{
"pmid": "15667143",
"title": "ZINC--a free database of commercially available compounds for virtual screening.",
"abstract": "A critical barrier to entry into structure-based virtual screening is the lack of a suitable, easy to access database of purchasable compounds. We have therefore prepared a library of 727,842 molecules, each with 3D structure, using catalogs of compounds from vendors (the size of this library continues to grow). The molecules have been assigned biologically relevant protonation states and are annotated with properties such as molecular weight, calculated LogP, and number of rotatable bonds. Each molecule in the library contains vendor and purchasing information and is ready for docking using a number of popular docking programs. Within certain limits, the molecules are prepared in multiple protonation states and multiple tautomeric forms. In one format, multiple conformations are available for the molecules. This database is available for free download (http://zinc.docking.org) in several common file formats including SMILES, mol2, 3D SDF, and DOCK flexibase format. A Web-based query tool incorporating a molecular drawing interface enables the database to be searched and browsed and subsets to be created. Users can process their own molecules by uploading them to a server. Our hope is that this database will bring virtual screening libraries to a wide community of structural biologists and medicinal chemists."
},
{
"pmid": "30118593",
"title": "Fréchet ChemNet Distance: A Metric for Generative Models for Molecules in Drug Discovery.",
"abstract": "The new wave of successful generative models in machine learning has increased the interest in deep learning driven de novo drug design. However, method comparison is difficult because of various flaws of the currently employed evaluation metrics. We propose an evaluation metric for generative models called Fréchet ChemNet distance (FCD). The advantage of the FCD over previous metrics is that it can detect whether generated molecules are diverse and have similar chemical and biological properties as real molecules."
}
] |
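Several of the reference abstracts above evaluate generated molecules with simple physicochemical descriptors (MW, logP, TPSA) and the QED drug-likeness score. The following is a minimal illustrative sketch, not the protocol of any cited paper, of how such a descriptor profile might be computed with RDKit (assuming RDKit is installed); the example SMILES strings are arbitrary.

```python
# Illustrative sketch only: descriptor/drug-likeness profiling of SMILES strings with RDKit.
# Assumes RDKit is installed; this is not the evaluation pipeline of any paper cited above.
from rdkit import Chem
from rdkit.Chem import Descriptors, QED

def profile(smiles):
    """Return a small descriptor profile for one SMILES string, or None if it fails to parse."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:  # e.g. an invalid SMILES emitted by a generative model
        return None
    return {
        "smiles": smiles,
        "MW": Descriptors.MolWt(mol),      # molecular weight
        "logP": Descriptors.MolLogP(mol),  # Crippen logP estimate
        "TPSA": Descriptors.TPSA(mol),     # topological polar surface area
        "QED": QED.qed(mol),               # quantitative estimate of drug-likeness
    }

if __name__ == "__main__":
    # Arbitrary example molecules (aspirin, caffeine), purely for demonstration.
    for smi in ["CC(=O)Oc1ccccc1C(=O)O", "Cn1cnc2c1c(=O)n(C)c(=O)n2C"]:
        print(profile(smi))
```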
Frontiers in Genetics | 31827487 | PMC6892404 | 10.3389/fgene.2019.01110 | Channel-Unet: A Spatial Channel-Wise Convolutional Neural Network for Liver and Tumors Segmentation | It is a challenge to automatically and accurately segment the liver and tumors in computed tomography (CT) images, as the problem of over-segmentation or under-segmentation often appears when the Hounsfield unit (Hu) of liver and tumors is close to the Hu of other tissues or background. In this paper, we propose the spatial channel-wise convolution, a convolutional operation along the direction of the channel of feature maps, to extract mapping relationship of spatial information between pixels, which facilitates learning the mapping relationship between pixels in the feature maps and distinguishing the tumors from the liver tissue. In addition, we put forward an iterative extending learning strategy, which optimizes the mapping relationship of spatial information between pixels at different scales and enables spatial channel-wise convolution to map the spatial information between pixels in high-level feature maps. Finally, we propose an end-to-end convolutional neural network called Channel-UNet, which takes UNet as the main structure of the network and adds spatial channel-wise convolution in each up-sampling and down-sampling module. The network can converge the optimized mapping relationship of spatial information between pixels extracted by spatial channel-wise convolution and information extracted by feature maps and realizes multi-scale information fusion. The proposed ChannelUNet is validated by the segmentation task on the 3Dircadb dataset. The Dice values of liver and tumors segmentation were 0.984 and 0.940, which is slightly superior to current best performance. Besides, compared with the current best method, the number of parameters of our method reduces by 25.7%, and the training time of our method reduces by 33.3%. The experimental results demonstrate the efficiency and high accuracy of Channel-UNet in liver and tumors segmentation in CT images. | Related WorkConvolutional neural networks have been applied to various medical image segmentation tasks. However, medical images contains various soft tissues with complex structures, we need to distinguish the target tissue from various soft tissues. Current research mainly focuses on adding multi-scale image information and optimizing the extracted image information by using attention-aware methods to achieve accurate segmentation. | [
"28463186",
"26353135",
"26415173",
"29994201",
"28096782",
"29633960",
"28347562",
"30047874"
] | [
{
"pmid": "28463186",
"title": "DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs.",
"abstract": "In this work we address the task of semantic image segmentation with Deep Learning and make three main contributions that are experimentally shown to have substantial practical merit. First, we highlight convolution with upsampled filters, or 'atrous convolution', as a powerful tool in dense prediction tasks. Atrous convolution allows us to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks. It also allows us to effectively enlarge the field of view of filters to incorporate larger context without increasing the number of parameters or the amount of computation. Second, we propose atrous spatial pyramid pooling (ASPP) to robustly segment objects at multiple scales. ASPP probes an incoming convolutional feature layer with filters at multiple sampling rates and effective fields-of-views, thus capturing objects as well as image context at multiple scales. Third, we improve the localization of object boundaries by combining methods from DCNNs and probabilistic graphical models. The commonly deployed combination of max-pooling and downsampling in DCNNs achieves invariance but has a toll on localization accuracy. We overcome this by combining the responses at the final DCNN layer with a fully connected Conditional Random Field (CRF), which is shown both qualitatively and quantitatively to improve localization performance. Our proposed \"DeepLab\" system sets the new state-of-art at the PASCAL VOC-2012 semantic image segmentation task, reaching 79.7 percent mIOU in the test set, and advances the results on three other datasets: PASCAL-Context, PASCAL-Person-Part, and Cityscapes. All of our code is made publicly available online."
},
{
"pmid": "26353135",
"title": "Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition.",
"abstract": "Existing deep convolutional neural networks (CNNs) require a fixed-size (e.g., 224 × 224) input image. This requirement is \"artificial\" and may reduce the recognition accuracy for the images or sub-images of an arbitrary size/scale. In this work, we equip the networks with another pooling strategy, \"spatial pyramid pooling\", to eliminate the above requirement. The new network structure, called SPP-net, can generate a fixed-length representation regardless of image size/scale. Pyramid pooling is also robust to object deformations. With these advantages, SPP-net should in general improve all CNN-based image classification methods. On the ImageNet 2012 dataset, we demonstrate that SPP-net boosts the accuracy of a variety of CNN architectures despite their different designs. On the Pascal VOC 2007 and Caltech101 datasets, SPP-net achieves state-of-the-art classification results using a single full-image representation and no fine-tuning. The power of SPP-net is also significant in object detection. Using SPP-net, we compute the feature maps from the entire image only once, and then pool features in arbitrary regions (sub-images) to generate fixed-length representations for training the detectors. This method avoids repeatedly computing the convolutional features. In processing test images, our method is 24-102 × faster than the R-CNN method, while achieving better or comparable accuracy on Pascal VOC 2007. In ImageNet Large Scale Visual Recognition Challenge (ILSVRC) 2014, our methods rank #2 in object detection and #3 in image classification among all 38 teams. This manuscript also introduces the improvement made for this competition."
},
{
"pmid": "26415173",
"title": "Automatic Liver Segmentation Based on Shape Constraints and Deformable Graph Cut in CT Images.",
"abstract": "Liver segmentation is still a challenging task in medical image processing area due to the complexity of the liver's anatomy, low contrast with adjacent organs, and presence of pathologies. This investigation was used to develop and validate an automated method to segment livers in CT images. The proposed framework consists of three steps: 1) preprocessing; 2) initialization; and 3) segmentation. In the first step, a statistical shape model is constructed based on the principal component analysis and the input image is smoothed using curvature anisotropic diffusion filtering. In the second step, the mean shape model is moved using thresholding and Euclidean distance transformation to obtain a coarse position in a test image, and then the initial mesh is locally and iteratively deformed to the coarse boundary, which is constrained to stay close to a subspace of shapes describing the anatomical variability. Finally, in order to accurately detect the liver surface, deformable graph cut was proposed, which effectively integrates the properties and inter-relationship of the input images and initialized surface. The proposed method was evaluated on 50 CT scan images, which are publicly available in two databases Sliver07 and 3Dircadb. The experimental results showed that the proposed method was effective and accurate for detection of the liver surface."
},
{
"pmid": "29994201",
"title": "H-DenseUNet: Hybrid Densely Connected UNet for Liver and Tumor Segmentation From CT Volumes.",
"abstract": "Liver cancer is one of the leading causes of cancer death. To assist doctors in hepatocellular carcinoma diagnosis and treatment planning, an accurate and automatic liver and tumor segmentation method is highly demanded in the clinical practice. Recently, fully convolutional neural networks (FCNs), including 2-D and 3-D FCNs, serve as the backbone in many volumetric image segmentation. However, 2-D convolutions cannot fully leverage the spatial information along the third dimension while 3-D convolutions suffer from high computational cost and GPU memory consumption. To address these issues, we propose a novel hybrid densely connected UNet (H-DenseUNet), which consists of a 2-D DenseUNet for efficiently extracting intra-slice features and a 3-D counterpart for hierarchically aggregating volumetric contexts under the spirit of the auto-context algorithm for liver and tumor segmentation. We formulate the learning process of the H-DenseUNet in an end-to-end manner, where the intra-slice representations and inter-slice features can be jointly optimized through a hybrid feature fusion layer. We extensively evaluated our method on the data set of the MICCAI 2017 Liver Tumor Segmentation Challenge and 3DIRCADb data set. Our method outperformed other state-of-the-arts on the segmentation results of tumors and achieved very competitive performance for liver segmentation even with a single model."
},
{
"pmid": "28096782",
"title": "Automatic liver segmentation on Computed Tomography using random walkers for treatment planning.",
"abstract": "Segmentation of the liver from Computed Tomography (CT) volumes plays an important role during the choice of treatment strategies for liver diseases. Despite lots of attention, liver segmentation remains a challenging task due to the lack of visible edges on most boundaries of the liver coupled with high variability of both intensity patterns and anatomical appearances with all these difficulties becoming more prominent in pathological livers. To achieve a more accurate segmentation, a random walker based framework is proposed that can segment contrast-enhanced livers CT images with great accuracy and speed. Based on the location of the right lung lobe, the liver dome is automatically detected thus eliminating the need for manual initialization. The computational requirements are further minimized utilizing rib-caged area segmentation, the liver is then extracted by utilizing random walker method. The proposed method was able to achieve one of the highest accuracies reported in the literature against a mixed healthy and pathological liver dataset compared to other segmentation methods with an overlap error of 4.47 % and dice similarity coefficient of 0.94 while it showed exceptional accuracy on segmenting the pathological livers with an overlap error of 5.95 % and dice similarity coefficient of 0.91."
},
{
"pmid": "29633960",
"title": "Superpixel-based and boundary-sensitive convolutional neural network for automated liver segmentation.",
"abstract": "Segmentation of liver in abdominal computed tomography (CT) is an important step for radiation therapy planning of hepatocellular carcinoma. Practically, a fully automatic segmentation of liver remains challenging because of low soft tissue contrast between liver and its surrounding organs, and its highly deformable shape. The purpose of this work is to develop a novel superpixel-based and boundary sensitive convolutional neural network (SBBS-CNN) pipeline for automated liver segmentation. The entire CT images were first partitioned into superpixel regions, where nearby pixels with similar CT number were aggregated. Secondly, we converted the conventional binary segmentation into a multinomial classification by labeling the superpixels into three classes: interior liver, liver boundary, and non-liver background. By doing this, the boundary region of the liver was explicitly identified and highlighted for the subsequent classification. Thirdly, we computed an entropy-based saliency map for each CT volume, and leveraged this map to guide the sampling of image patches over the superpixels. In this way, more patches were extracted from informative regions (e.g. the liver boundary with irregular changes) and fewer patches were extracted from homogeneous regions. Finally, deep CNN pipeline was built and trained to predict the probability map of the liver boundary. We tested the proposed algorithm in a cohort of 100 patients. With 10-fold cross validation, the SBBS-CNN achieved mean Dice similarity coefficients of 97.31 ± 0.36% and average symmetric surface distance of 1.77 ± 0.49 mm. Moreover, it showed superior performance in comparison with state-of-art methods, including U-Net, pixel-based CNN, active contour, level-sets and graph-cut algorithms. SBBS-CNN provides an accurate and effective tool for automated liver segmentation. It is also envisioned that the proposed framework is directly applicable in other medical image segmentation scenarios."
},
{
"pmid": "28347562",
"title": "Automatic segmentation of liver tumors from multiphase contrast-enhanced CT images based on FCNs.",
"abstract": "This paper presents a novel, fully automatic approach based on a fully convolutional network (FCN) for segmenting liver tumors from CT images. Specifically, we designed a multi-channel fully convolutional network (MC-FCN) to segment liver tumors from multiphase contrast-enhanced CT images. Because each phase of contrast-enhanced data provides distinct information on pathological features, we trained one network for each phase of the CT images and fused their high-layer features together. The proposed approach was validated on CT images taken from two databases: 3Dircadb and JDRD. In the case of 3Dircadb, using the FCN, the mean ratios of the volumetric overlap error (VOE), relative volume difference (RVD), average symmetric surface distance (ASD), root mean square symmetric surface distance (RMSD) and maximum symmetric surface distance (MSSD) were 15.6±4.3%, 5.8±3.5%, 2.0±0.9%, 2.9±1.5mm, 7.1±6.2mm, respectively. For JDRD, using the MC-FCN, the mean ratios of VOE, RVD, ASD, RMSD, and MSSD were 8.1±4.5%, 1.7±1.0%, 1.5±0.7%, 2.0±1.2mm, 5.2±6.4mm, respectively. The test results demonstrate that the MC-FCN model provides greater accuracy and robustness than previous methods."
},
{
"pmid": "30047874",
"title": "Transfer Learning for Image Segmentation by Combining Image Weighting and Kernel Learning.",
"abstract": "Many medical image segmentation methods are based on the supervised classification of voxels. Such methods generally perform well when provided with a training set that is representative of the test images to the segment. However, problems may arise when training and test data follow different distributions, for example, due to differences in scanners, scanning protocols, or patient groups. Under such conditions, weighting training images according to distribution similarity have been shown to greatly improve performance. However, this assumes that a part of the training data is representative of the test data; it does not make unrepresentative data more similar. We, therefore, investigate kernel learning as a way to reduce differences between training and test data and explore the added value of kernel learning for image weighting. We also propose a new image weighting method that minimizes maximum mean discrepancy (MMD) between training and test data, which enables the joint optimization of image weights and kernel. Experiments on brain tissue, white matter lesion, and hippocampus segmentation show that both kernel learning and image weighting, when used separately, greatly improve performance on heterogeneous data. Here, MMD weighting obtains similar performance to previously proposed image weighting methods. Combining image weighting and kernel learning, optimized either individually or jointly, can give a small additional improvement in performance."
}
] |
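The segmentation results quoted above for Channel-UNet and the cited liver segmentation methods are reported mainly as Dice similarity coefficients. Below is a minimal sketch of that metric only — generic NumPy code, not code from Channel-UNet or any cited method — applied to toy binary masks.

```python
# Illustrative sketch only: Dice similarity coefficient between two binary masks, the main
# metric quoted for the liver/tumor segmentation results above. Generic NumPy code, not
# code from Channel-UNet or any cited method.
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2*|P & T| / (|P| + |T|) for two binary masks of the same shape."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

if __name__ == "__main__":
    # Toy 64x64 masks standing in for a ground-truth and a predicted liver segmentation.
    rng = np.random.default_rng(0)
    truth = rng.random((64, 64)) > 0.5
    pred = truth.copy()
    pred[:8, :8] = ~pred[:8, :8]  # flip one corner so the masks disagree slightly
    print("Dice:", round(float(dice_coefficient(pred, truth)), 3))
```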
BMC Medical Informatics and Decision Making | 31801523 | PMC6894109 | 10.1186/s12911-019-0936-3 | RCorp: a resource for chemical disease semantic extraction in Chinese | BackgroundTo robustly identify synergistic combinations of drugs, high-throughput screenings are desirable. It will be of great help to automatically identify the relations in the published papers with machine learning based tools. To support the chemical disease semantic relation extraction especially for chronic diseases, a chronic disease specific corpus for combination therapy discovery in Chinese (RCorp) is manually annotated.MethodsIn this study, we extracted abstracts from a Chinese medical literature server and followed the annotation framework of the BioCreative CDR corpus, with the guidelines modified to make the combination therapy related relations available. An annotation tool was incorporated to the standard annotation process.ResultsThe resulting RCorp consists of 339 Chinese biomedical articles with 2367 annotated chemicals, 2113 diseases, 237 symptoms, 164 chemical-induce-disease relations, 163 chemical-induce-symptom relations, and 805 chemical-treat-disease relations. Each annotation includes both the mention text spans and normalized concept identifiers. The corpus gets an inter-annotator agreement score of 0.883 for chemical entities, 0.791 for disease entities which are measured by F score. And the F score for chemical-treat-disease relations gets 0.788 after unifying the entity mentions.ConclusionsWe extracted and manually annotated a chronic disease specific corpus for combination therapy discovery in Chinese. The result analysis of the corpus proves its quality for the combination therapy related knowledge discovery task. Our annotated corpus would be a useful resource for the modelling of entity recognition and relation extraction tools. In the future, an evaluation based on the corpus will be held. | A comparison with other related worksComparing to other related works on the annotation and corpus building of the CDRs (Table 4), there are three main characteristics in this study. Firstly, our corpus is the only CDR corpus of biomedical articles in Chinese, which can be further applied in the text mining tasks targeted at biomedical texts in Chinese. Secondly, our topics are focused on specific chronic diseases, and combination therapies information is curated and expressed in the CDRs for the first time, which will facilitate researchers to extract combination therapy related knowledge. Thirdly, we tried a pipeline annotation workflow in which annotators annotate the entities and relations at the same time. The workflow improves the annotation efficiency and may provide more hints for training a joint model for NER and relation extraction, however, results of the disagreement analysis shows that the pipeline workflow approach causes much more discrepancies among different annotators and may result in lower inter-annotator agreements scores for relations.
Table 4. A comparison of works on the corpus building of CDRs:
Corpus or author name | Language | Dictionary | Sources | Scale | Text boundary | Annotation results
Roberts [11] | en | UMLS | clinical text | 150 | sentence | Condition, intervention, drug, locus and their interaction relations
i2b2/VA [12] | en | – | clinical text | 871 | sentence | Relation types that hold between medical problems, tests, and treatments
EU-ADR [13] | en | MeSH/UMLS++ [23] | abstracts | 300 | sentence | Drugs, disorders, targets and their inter-relationships
Rosario [14] | en | MeSH | abstracts | 3495 sentences | sentence | Relationships between treatment and disease
IxaMed-GS [24] | spa | SNOMED CT | Clinical text | 75 docs/5410 sentences | document | Relationships between entities indicating adverse drug reaction events
BioCreative CDR [16] | en | MESH [25] | abstracts | 1500 | document | Relationships between chemicals and diseases (CID)
RCorp | cn | CMESH [19] | abstracts | 339 | document | Relationships between chemicals and diseases (CTD)
| [
"25254099",
"27865463",
"21721598",
"30777010",
"19535011",
"21685143",
"22554700",
"23703206",
"26141794",
"10928714"
] | [
{
"pmid": "25254099",
"title": "An analysis on the entity annotations in biological corpora.",
"abstract": "Collection of documents annotated with semantic entities and relationships are crucial resources to support development and evaluation of text mining solutions for the biomedical domain. Here I present an overview of 36 corpora and show an analysis on the semantic annotations they contain. Annotations for entity types were classified into six semantic groups and an overview on the semantic entities which can be found in each corpus is shown. Results show that while some semantic entities, such as genes, proteins and chemicals are consistently annotated in many collections, corpora available for diseases, variations and mutations are still few, in spite of their importance in the biological domain."
},
{
"pmid": "27865463",
"title": "Molecular Changes During Acute Myeloid Leukemia (AML) Evolution and Identification of Novel Treatment Strategies Through Molecular Stratification.",
"abstract": "Acute myeloid leukemia (AML) is a hematopoietic malignancy characterized by impaired differentiation and uncontrollable proliferation of myeloid progenitor cells. Due to high relapse rates, overall survival for this rapidly progressing disease is poor. The significant challenge in AML treatment is disease heterogeneity stemming from variability in maturation state of leukemic cells of origin, genetic aberrations among patients, and existence of multiple disease clones within a single patient. Disease heterogeneity and the lack of biomarkers for drug sensitivity lie at the root of treatment failure as well as selective efficacy of AML chemotherapies and the emergence of drug resistance. Furthermore, standard-of-care treatment is aggressive, presenting significant tolerability concerns to the commonly advanced-age AML patient. In this review, we examine the concept and potential of molecular stratification, particularly with biologically relevant drug responses, in identifying low-toxicity precision therapeutic combinations and clinically relevant biomarkers for AML patient care as a way to overcome these challenges in AML treatment."
},
{
"pmid": "21721598",
"title": "Combination therapy for Alzheimer's disease.",
"abstract": "Alzheimer's disease (AD) is a progressive, degenerative brain disease. The mainstay of current management of patients with AD involves drugs that provide symptomatic therapy. Two classes of medications have been approved by the US FDA for the treatment of AD: the cholinesterase inhibitors (ChEIs), which include galantamine and rivastigmine (both approved for use in mild to moderate AD) and donepezil (approved for use in mild to severe AD); and the non-competitive NMDA receptor antagonist memantine (approved for use in moderate to severe AD). The European and Asian regulatory bodies have also approved ChEIs as monotherapy in mild to moderate AD. Future research directions are mostly focusing on disease modification and prevention. This review covers key studies of the efficacy, safety and tolerability of combination therapy in AD, defined as a combination of the NMDA receptor antagonist memantine with any of the ChEIs (donepezil, galantamine or rivastigmine) for the treatment of AD. Relevant studies were identified via a PubMed search. This review shows that combination therapy for AD seems to be safe, well tolerated and may represent the current gold standard for treatment of moderate to severe AD and possibly mild to moderate AD as well."
},
{
"pmid": "30777010",
"title": "Statistical assessment and visualization of synergies for large-scale sparse drug combination datasets.",
"abstract": "BACKGROUND\nDrug combinations have the potential to improve efficacy while limiting toxicity. To robustly identify synergistic combinations, high-throughput screens using full dose-response surface are desirable but require an impractical number of data points. Screening of a sparse number of doses per drug allows to screen large numbers of drug pairs, but complicates statistical assessment of synergy. Furthermore, since the number of pairwise combinations grows with the square of the number of drugs, exploration of large screens necessitates advanced visualization tools.\n\n\nRESULTS\nWe describe a statistical and visualization framework for the analysis of large-scale drug combination screens. We developed an approach suitable for datasets with large number of drugs pairs even if small number of data points are available per drug pair. We demonstrate our approach using a systematic screen of all possible pairs among 108 cancer drugs applied to melanoma cell lines. In this dataset only two dose-response data points per drug pair and two data points per single drug test were available. We used a Bliss-based linear model, effectively borrowing data from the drug pairs to obtain robust estimations of the singlet viabilities, consequently yielding better estimates of drug synergy. Our method improves data consistency across dosing thus likely reducing the number of false positives. The approach allows to compute p values accounting for standard errors of the modeled singlets and combination viabilities. We further develop a synergy specificity score that distinguishes specific synergies from those arising with promiscuous drugs. Finally, we developed a summarized interactive visualization in a web application, providing efficient access to any of the 439,000 data points in the combination matrix ( http://www.cmtlab.org:3000/combo_app.html ). The code of the analysis and the web application is available at https://github.com/arnaudmgh/synergy-screen .\n\n\nCONCLUSIONS\nWe show that statistical modeling of single drug response from drug combination data can help determine significance of synergy and antagonism in drug combination screens with few data point per drug pair. We provide a web application for the rapid exploration of large combinatorial drug screen. All codes are available to the community, as a resource for further analysis of published data and for analysis of other drug screens."
},
{
"pmid": "19535011",
"title": "Building a semantically annotated corpus of clinical texts.",
"abstract": "In this paper, we describe the construction of a semantically annotated corpus of clinical texts for use in the development and evaluation of systems for automatically extracting clinically significant information from the textual component of patient records. The paper details the sampling of textual material from a collection of 20,000 cancer patient records, the development of a semantic annotation scheme, the annotation methodology, the distribution of annotations in the final corpus, and the use of the corpus for development of an adaptive information extraction system. The resulting corpus is the most richly semantically annotated resource for clinical text processing built to date, whose value has been demonstrated through its use in developing an effective information extraction system. The detailed presentation of our corpus construction and annotation methodology will be of value to others seeking to build high-quality semantically annotated corpora in biomedical domains."
},
{
"pmid": "21685143",
"title": "2010 i2b2/VA challenge on concepts, assertions, and relations in clinical text.",
"abstract": "The 2010 i2b2/VA Workshop on Natural Language Processing Challenges for Clinical Records presented three tasks: a concept extraction task focused on the extraction of medical concepts from patient reports; an assertion classification task focused on assigning assertion types for medical problem concepts; and a relation classification task focused on assigning relation types that hold between medical problems, tests, and treatments. i2b2 and the VA provided an annotated reference standard corpus for the three tasks. Using this reference standard, 22 systems were developed for concept extraction, 21 for assertion classification, and 16 for relation classification. These systems showed that machine learning approaches could be augmented with rule-based systems to determine concepts, assertions, and relations. Depending on the task, the rule-based systems can either provide input for machine learning or post-process the output of machine learning. Ensembles of classifiers, information from unlabeled data, and external knowledge sources can help when the training data are inadequate."
},
{
"pmid": "22554700",
"title": "The EU-ADR corpus: annotated drugs, diseases, targets, and their relationships.",
"abstract": "Corpora with specific entities and relationships annotated are essential to train and evaluate text-mining systems that are developed to extract specific structured information from a large corpus. In this paper we describe an approach where a named-entity recognition system produces a first annotation and annotators revise this annotation using a web-based interface. The agreement figures achieved show that the inter-annotator agreement is much better than the agreement with the system provided annotations. The corpus has been annotated for drugs, disorders, genes and their inter-relationships. For each of the drug-disorder, drug-target, and target-disorder relations three experts have annotated a set of 100 abstracts. These annotated relationships will be used to train and evaluate text-mining software to capture these relationships in texts."
},
{
"pmid": "23703206",
"title": "PubTator: a web-based text mining tool for assisting biocuration.",
"abstract": "Manually curating knowledge from biomedical literature into structured databases is highly expensive and time-consuming, making it difficult to keep pace with the rapid growth of the literature. There is therefore a pressing need to assist biocuration with automated text mining tools. Here, we describe PubTator, a web-based system for assisting biocuration. PubTator is different from the few existing tools by featuring a PubMed-like interface, which many biocurators find familiar, and being equipped with multiple challenge-winning text mining algorithms to ensure the quality of its automatic results. Through a formal evaluation with two external user groups, PubTator was shown to be capable of improving both the efficiency and accuracy of manual curation. PubTator is publicly available at http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/PubTator/."
},
{
"pmid": "26141794",
"title": "On the creation of a clinical gold standard corpus in Spanish: Mining adverse drug reactions.",
"abstract": "The advances achieved in Natural Language Processing make it possible to automatically mine information from electronically created documents. Many Natural Language Processing methods that extract information from texts make use of annotated corpora, but these are scarce in the clinical domain due to legal and ethical issues. In this paper we present the creation of the IxaMed-GS gold standard composed of real electronic health records written in Spanish and manually annotated by experts in pharmacology and pharmacovigilance. The experts mainly annotated entities related to diseases and drugs, but also relationships between entities indicating adverse drug reaction events. To help the experts in the annotation task, we adapted a general corpus linguistic analyzer to the medical domain. The quality of the annotation process in the IxaMed-GS corpus has been assessed by measuring the inter-annotator agreement, which was 90.53% for entities and 82.86% for events. In addition, the corpus has been used for the automatic extraction of adverse drug reaction events using machine learning."
}
] |
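The RCorp record above reports inter-annotator agreement as an F score over entity and relation annotations. A minimal sketch of that idea is given below, assuming exact-match comparison of (document, span, label) tuples and treating one annotator as the reference; it is not the RCorp evaluation code, and the example annotations are hypothetical.

```python
# Illustrative sketch only: pairwise inter-annotator agreement measured as an F score,
# treating one annotator's annotations as the reference. Exact-match comparison of
# (doc_id, start, end, label) tuples; this is not the RCorp evaluation code.

def agreement_f1(reference, other):
    """F1 between two sets of annotation tuples (exact span and label match)."""
    ref, oth = set(reference), set(other)
    tp = len(ref & oth)
    precision = tp / len(oth) if oth else 0.0
    recall = tp / len(ref) if ref else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2.0 * precision * recall / (precision + recall)

if __name__ == "__main__":
    # Hypothetical annotations from two annotators on one abstract; one boundary disagreement.
    annotator_a = {("doc1", 10, 18, "Chemical"), ("doc1", 40, 52, "Disease")}
    annotator_b = {("doc1", 10, 18, "Chemical"), ("doc1", 41, 52, "Disease")}
    print("Agreement (F1):", round(agreement_f1(annotator_a, annotator_b), 3))
```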
Heliyon | 31844750 | PMC6895642 | 10.1016/j.heliyon.2019.e02877 | Parasite cloud service providers: on-demand prices on top of spot prices | On-demand resource provisioning and elasticity are two of the main characteristics of the cloud computing paradigm. As a result, the load on a cloud service provider (CSP) is not fixed and almost always a number of its physical resources are not used, called spare resources. As the CSPs typically don't want to be overprovisioned at any time, they procure physical resources in accordance to a pessimistic forecast of their loads and this leads to a large amount of spare resources most of the time. Some CSPs rent their spare resources with a lower price called the spot price, which varies over time with respect to the market or the internal state of the CSP. In this paper, we assume the spot price to be a function of the CSP's load. We introduce the concept of a parasite CSP, which rents spare resources from several CSPs simultaneously with spot prices and rents them to its customers with an on-demand price lower than the host CSPs' on-demand prices. We propose the overall architecture and interaction model of the parasite CSP. Mathematical analysis has been made to calculate the amount of spare resources of the host CSPs, the amount of resources that the parasite CSP can rent (its virtual capacity) as well as the probability of SLA violations. We evaluate our analysis over pricing data gathered from Amazon EC2 services. The results show that if the parasite CSP relies on several host CSPs, its virtual capacity can be considerable and the expected penalty due to SLA violation is acceptably low. | 5Related worksConsiderable work has been done on the spot instances of the CSPs, specifically Amazon EC2 services. In this section, we provide an overview on the existing work in three categories.The first category are research works that try to analyze and model price variations in Amazon EC2 spot instance prices. In [8], [9] traces of the spot prices are gathered and analyzed. They calculate statistical measures over the traces in different hours of day as well as different days of week. An estimation of the spot price function using a mixed Gaussian distribution is also proposed. Ben-Yehuda and Ben-Yehuda [10] analyze long-time traces of Amazon EC2 spot prices in different zones. They argue that although widely believed, the determination of spot prices is not totally market-driven and low spot prices are set by random. However, the higher spot prices are market-driven and are determined by user bids. Karunakaran and Sundarraj [11] use a simulation study on data gathered from Amazon EC2 to analyze the effect of increasing or decreasing the bid price on job completion cost, wait time and interruption rates during job execution. Li et al. [12] develop a Predator-Prey model for simulating market activities in order to explain variations in spot prices. They modeled demand and resource as predator and prey, respectively. They identify some regular patterns of market activities with respect to Amazon EC2 spot prices. Agarwal et al. [13] propose a method for forecasting Amazon EC2 spot prices based on recurrent neural networks. They argue that the error of their method is at most 8.6%. In [14], Baughman et al. presented a long/short-term memory (LSTM) recurrent neural network for spot price prediction, arguing the error being less than that of the ARIMA method. Portella et al. [15] propose static analysis over on-demand and spot prices of Amazon EC2 services. 
They capture the correlation between VM types and their on-demand price as well as spot price trends. With this information, they provide a price-availability tradeoff to the user. For instance, the user can set the bid to 30% of the on-demand price and ensure that the availability of the VM will be above 90% (a small illustrative sketch of this tradeoff is given at the end of this section). Baughman et al. [16] recognize a major change in Amazon's spot price mechanism in 2017. They analyze spot prices before that time as well as current prices and compare some of their properties. The above works try to model the spot prices and their variations. In fact, these works are orthogonal to ours, and the concept of a parasite CSP can be imagined in all models. However, the details of the analysis depend on the model adopted for spot prices. The second category of existing work aims to optimize the behavior of the CSP with respect to its spot instances. Zhang et al. [17] assume an auction-based model for exposing the VMs. They propose a mechanism that predicts the future demand for different VM types and then determines the optimum spot price as well as capacity for each VM type in order to maximize the CSP's profit. Toosi et al. [18] propose an auction-based mechanism for determining the price of perishable cloud resources. Their mechanism is envy-free, near optimal (in terms of profit) and truthful with high probability. The mentioned works can be interpreted as suggestions to the CSPs for setting their spot prices. As with the previous category of related research, the behavior of the CSPs regarding their spot prices affects the details of our analysis, but the concept of a parasite CSP is meaningful in all cases. Finally, some researchers try to leverage the spot instances of the existing cloud service providers and propose models and mechanisms for their external users to gain profit. Here, we introduce these works and compare them to ours. Mattess et al. [19] discuss the idea of using spot instances during peak loads. Computing clusters with variable loads can rent the spot instances of an IaaS provider when a peak occurs in their load. They analyze different service provisioning policies in this context. We can interpret our work as a basis for theirs. In fact, the analysis presented in our paper can be used to better understand the possibility of using spot prices during peak loads. Yi et al. [20], [21] propose and compare several checkpointing schemes when using spot instances of Amazon EC2 services, such as hourly, rising edge-driven and current-price based adaptive checkpointing. They also study the impact of work migrations on improving task completion times while maintaining low costs, by proposing and evaluating several migration heuristics. Compared to our work, although both try to rely on several CSPs to increase quality, the points of focus differ: they focus on how to migrate the work and we focus on how many resources can be rented. In fact, future work could combine these approaches to obtain a more detailed picture of how a parasite CSP works. [22] proposes a method for hosting an always-available service over the spot instances in order to reduce the relevant costs. It mainly consists of a scheduler that bids appropriately for spot prices in order to remain available and a mechanism for migrating the VMs from spot instances to on-demand instances when needed.
Compared to us, they have focused on the migration and bidding mechanisms for increasing availability, while our focus is on the mathematical analysis of such availability in terms of SLA penalties and the number of resources we can rent. [23] develops an information service named SpotLight that monitors the availability of different server types in different regions. Cloud applications can query this service to know about their server availability. Spot prices have an important role in their analysis. Their work can be used as a tool for deploying a parasite CSP. In fact, we assume that all information about spot prices and CSP availabilities can be accessed by the parasite CSP, and SpotLight could be the tool to achieve this. [7] develops a cloud platform named SpotCheck which provides IaaS on top of the spot instances of a native IaaS provider. The price of the provided service is near the spot price of the underlying CSP, but its availability is about 99.9989%. This work is also related to ours in that it creates a tool that rents spare resources from a native provider and exposes them to its users. In comparison to that work, our contribution is to expose spare instances with a fixed price and also to calculate the virtual capacity of the parasite CSP. We used their availability result in our evaluations in subsection 4.5. | [] | []
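To make the bid-versus-availability tradeoff discussed in the related works above concrete, the following is a minimal, purely illustrative Python sketch. It is not taken from any of the cited works or from the paper itself: the spot-price trace and the on-demand price are invented placeholder values, and the availability estimate is simply the fraction of sampled hours in which the spot price stays at or below the bid.

```python
# Illustrative sketch only: estimating the availability a spot instance would
# have had for a given bid, from a historical spot-price trace.
# The trace and on-demand price below are made-up placeholder values; a real
# analysis would use e.g. the Amazon EC2 spot-price history of one instance
# type in one availability zone.

ON_DEMAND_PRICE = 0.10  # hypothetical on-demand price ($/hour)
spot_trace = [0.021, 0.023, 0.019, 0.035, 0.028, 0.090, 0.022,
              0.024, 0.031, 0.026, 0.020, 0.105, 0.027, 0.025]  # hourly samples ($/hour)

def availability_for_bid(bid, trace):
    """Fraction of sampled hours in which the spot price stayed at or below
    the bid, i.e. hours during which the instance would not have been revoked."""
    kept = sum(1 for price in trace if price <= bid)
    return kept / len(trace)

for fraction in (0.3, 0.5, 1.0):  # bid expressed as a fraction of the on-demand price
    bid = fraction * ON_DEMAND_PRICE
    print(f"bid = {fraction:.0%} of on-demand ({bid:.3f} $/h): "
          f"estimated availability {availability_for_bid(bid, spot_trace):.1%}")
```

Such a naive estimate ignores revocation and restart overheads, which the checkpointing and migration studies cited above address explicitly.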
Cancers | 31683818 | PMC6896042 | 10.3390/cancers11111700 | Segmentation and Grade Prediction of Colon Cancer Digital Pathology Images Across Multiple Institutions | Distinguishing benign from malignant disease is a primary challenge for colon histopathologists. Current clinical methods rely on qualitative visual analysis of features such as glandular architecture and size that exist on a continuum from benign to malignant. Consequently, discordance between histopathologists is common. To provide more reliable analysis of colon specimens, we propose an end-to-end computational pathology pipeline that encompasses gland segmentation, cancer detection, and then further breaking down the malignant samples into different cancer grades. We propose a multi-step gland segmentation method, which models tissue components as ellipsoids. For cancer detection/grading, we encode cellular morphology, spatial architectural patterns of glands, and texture by extracting multi-scale features: (i) Gland-based: extracted from individual glands, (ii) local-patch-based: computed from randomly-selected image patches, and (iii) image-based: extracted from images, and employ a hierarchical ensemble-classification method. Using two datasets (Rawalpindi Medical College (RMC), n = 174 and gland segmentation (GlaS), n = 165) with three cancer grades, our method reliably delineated gland regions (RMC = 87.5%, GlaS = 88.4%), detected the presence of malignancy (RMC = 97.6%, GlaS = 98.3%), and predicted tumor grade (RMC = 98.6%, GlaS = 98.6%). Training the model using one dataset and testing it on the other showed strong concordance in cancer detection (Train RMC – Test GlaS = 94.5%, Train GlaS – Test RMC = 93.7%) and grading (Train RMC – Test GlaS = 95%, Train GlaS – Test RMC = 95%), suggesting that the model will be applicable across institutions. With further prospective validation, the techniques demonstrated here may provide a reproducible and easily accessible method to standardize analysis of colon cancer specimens. | 1.1. Related Work. This section presents a review of contemporary literature in the three research directions on colon cancer diagnosis, which have also been investigated in the current work, i.e., automated gland segmentation, colon cancer detection and grading. Glandular structures in histopathology images can be segmented either by using generic [15] or specialized methods [16,17,18]. Graph-based methods, which rely on generating graphs from glandular structures, have been the most common approach for gland segmentation. For example, Demir et al. decomposed the glandular structures into their primitive objects and utilized the organizational properties of these objects instead of traditional pixel-based properties [16]. The approach relies on a region growing procedure, where the gland seeds are determined based on a graph constructed from the nucleus and lumen objects. The seeds are grown using another object-graph constructed on the nucleus objects alone. The final boundary of glands is obtained based on the locations of the nucleus objects and a false-positive elimination process based on information of the segmented grown regions. Similarly, Fu et al. proposed a graph-based GlandVision algorithm [19]. Using random field modelling in polar space, the gland contours were extracted based on an inference strategy that approximates a circular graph using two chain graphs.
Then, a support vector regressor based on visual features was used to verify that the extracted contour belonged to a true gland. Some recent research efforts have also employed deep neural networks for gland segmentation. To this end, Kainz et al. [18] presented a deep convolutional neural network based pixel classifier for semantic segmentation of colon gland images. Two 7-layer convolutional neural networks (CNNs) were used to predict whether individual pixels belonged to normal or malignant glands. These predictions were then normalized based on weighted total variation using a figure-ground segmentation approach. Wenqi et al. [20] also employed fine-tuned CNNs for segmenting glandular structures, but combined them with an SVM classifier based on traditional radiomic features. Recently, for gland segmentation, Chen et al. proposed a deep contour-aware network that uses a unified multi-task learning framework, exploits multi-level contextual features, and employs an auxiliary supervision method to solve the problem of vanishing gradients [21]. Graham et al. proposed a CNN that counters the information loss incurred in max-pooling layers by re-introducing the original image at multiple points within the network [17]. They used atrous spatial pyramid pooling with varying dilation rates for preserving the resolution and multi-level aggregation, and introduced random transformations during test time for an enhanced segmentation result that concurrently generates an uncertainty map and highlights ambiguous areas. We aim to improve upon this prior art by proposing a gland segmentation algorithm that not only delineates gland boundaries, but also demarcates the internal glandular structures using a multi-step process based on the geometrical/morphological properties of the structures. Several radiomic methods exist in the literature to distinguish benign and malignant colon lesions. For example, Masood et al. [22] investigated local binary patterns along with a Gaussian SVM to produce reasonable classification results. Classification accuracy can be improved using an ensemble of different classifiers and hybrid combinations of discriminative features (an illustrative sketch of such a majority-voting ensemble is given after the reference list below). For example, Rathore et al. [3] employed different ensemble classifiers such as rotation boost, rotation forest and random forest on a hybrid feature set, comprising white run-length and area features, and achieved better classification accuracy than any individual classifier or feature. Similarly, combining a textural analysis of color components with a histogram-of-oriented-gradients improved classification performance [23]. Also, Chaddad et al. proposed several radiomic pipelines to evaluate the continuum of colorectal cancer using various types of shape and texture features with multiple classifier models [12,13,14]. When CNN models were used, classification accuracy improved compared to conventional classifier models [13]. Similar to automated gland segmentation, some researchers have employed graph-based techniques to standardize colon cancer grading. Altunbay et al. [9] exploited the circular shape characteristic of the pink, purple, and white clusters of colon biopsy images. They applied a circle-finding algorithm on these clusters and computed discriminative features on a graph generated from circular objects in these clusters. They validated this technique by detecting colon cancer from colon biopsy images and discriminating different cancer grades with high accuracy. Ozdemir et al.
[10] presented a similar technique based on graph creation from the three clusters of colon biopsy images of normal subjects. These test graphs were compared with training graphs to determine whether tissues were normal or malignant based on the extent of correlation with the test graph. In contrast, Rathore et al. [11] used lumen circularity, convexity, concavity, and the ratio of lumen area to the size of the image and white cluster as features, which improved the accuracy of cancer grading. In a recent study, Awan et al. measured the shape of glands with a novel metric that they called the "best alignment metric" (BAM). An SVM classifier was then trained using shape features derived from BAM, yielding an accuracy of 91% in a three-class classification into normal, low grade cancer, and high grade cancer [24]. A comprehensive review further describes colon cancer segmentation, detection and grading techniques [7]. Despite the significant advances in the past two decades, end-to-end computational pathology pipelines have rarely been developed. Our paper aims to bridge this gap and provides an end-to-end computational pathology pipeline for histologic colon cancer detection and grade prediction by incorporating gland segmentation, cancer detection, and grading into a single automated analysis. | [
"18654431",
"27713422",
"17354810",
"24845283",
"20671804",
"24091390",
"25819060",
"19846369",
"23204283",
"29670857",
"28400990",
"28331793",
"19819181",
"30594772",
"24595348",
"27898306",
"24561346",
"29203775",
"26357050",
"25853091",
"26994146",
"27614792",
"25993703",
"22270352",
"31185611",
"30917548"
] | [
{
"pmid": "18654431",
"title": "Nanomechanical analysis of cells from cancer patients.",
"abstract": "Change in cell stiffness is a new characteristic of cancer cells that affects the way they spread. Despite several studies on architectural changes in cultured cell lines, no ex vivo mechanical analyses of cancer cells obtained from patients have been reported. Using atomic force microscopy, we report the stiffness of live metastatic cancer cells taken from the body (pleural) fluids of patients with suspected lung, breast and pancreas cancer. Within the same sample, we find that the cell stiffness of metastatic cancer cells is more than 70% softer, with a standard deviation over five times narrower, than the benign cells that line the body cavity. Different cancer types were found to display a common stiffness. Our work shows that mechanical analysis can distinguish cancerous cells from normal ones even when they show similar shapes. These results show that nanomechanical analysis correlates well with immunohistochemical testing currently used for detecting cancer."
},
{
"pmid": "27713422",
"title": "Diagnosis of T1 colorectal cancer in pedunculated polyps in daily clinical practice: a multicenter study.",
"abstract": "T1 colorectal cancer can be mimicked by pseudo-invasion in pedunculated polyps. British guidelines are currently one of the few which recommend diagnostic confirmation of T1 colorectal cancer by a second pathologist. The aim of this study was to provide insights into the accuracy of histological diagnosis of pedunculated T1 colorectal cancer in daily clinical practice. A sample of 128 cases diagnosed as pedunculated T1 colorectal cancer between 2000 and 2014 from 10 Dutch hospitals was selected for histological review. Firstly, two Dutch expert gastrointestinal pathologists reviewed all hematoxylin-eosin stained slides. In 20 cases the diagnosis T1 colorectal cancer was not confirmed (20/128; 16%). The discordant cases were subsequently discussed with a third Dutch gastrointestinal pathologist and a consensus diagnosis was agreed. The revised diagnoses were pseudo-invasion in 10 cases (10/128; 8%), high-grade dysplasia in 4 cases (4/128; 3%), and equivocal in 6 cases (6/128; 5%). To further validate the consensus diagnosis, the discordant cases were reviewed by an independent expert pathologist from the United Kingdom. A total of 39 cases were reviewed blindly including the 20 cases with a revised diagnosis and 19 control cases where the Dutch expert panel agreed with the original reporting pathologists diagnosis. In 19 of the 20 cases with a revised diagnosis the British pathologist agreed that T1 colorectal cancer could not be confirmed. Additionally, amongst the 19 control cases the British pathologist was unable to confirm T1 colorectal cancer in a further 4 cases and was equivocal in 3 cases. In conclusion, both generalist and expert pathologists experience diagnostic difficulty distinguishing pseudo-invasion and high-grade dysplasia from T1 colorectal cancer. In order to prevent overtreatment, review of the histology of pedunculated T1 colorectal cancers by a second pathologist should be considered with discussion of these cases at a multidisciplinary meeting."
},
{
"pmid": "17354810",
"title": "A boosting cascade for automated detection of prostate cancer from digitized histology.",
"abstract": "Current diagnosis of prostatic adenocarcinoma is done by manual analysis of biopsy tissue samples for tumor presence. However, the recent advent of whole slide digital scanners has made histopathological tissue specimens amenable to computer-aided diagnosis (CAD). In this paper, we present a CAD system to assist pathologists by automatically detecting prostate cancer from digitized images of prostate histological specimens. Automated diagnosis on very large high resolution images is done via a multi-resolution scheme similar to the manner in which a pathologist isolates regions of interest on a glass slide. Nearly 600 image texture features are extracted and used to perform pixel-wise Bayesian classification at each image scale to obtain corresponding likelihood scenes. Starting at the lowest scale, we apply the AdaBoost algorithm to combine the most discriminating features, and we analyze only pixels with a high combined probability of malignancy at subsequent higher scales. The system was evaluated on 22 studies by comparing the CAD result to a pathologist's manual segmentation of cancer (which served as ground truth) and found to have an overall accuracy of 88%. Our results show that (1) CAD detection sensitivity remains consistently high across image scales while CAD specificity increases with higher scales, (2) the method is robust to choice of training samples, and (3) the multi-scale cascaded approach results in significant savings in computational time."
},
{
"pmid": "24845283",
"title": "A nonlinear mapping approach to stain normalization in digital histopathology images using image-specific color deconvolution.",
"abstract": "Histopathology diagnosis is based on visual examination of the morphology of histological sections under a microscope. With the increasing popularity of digital slide scanners, decision support systems based on the analysis of digital pathology images are in high demand. However, computerized decision support systems are fraught with problems that stem from color variations in tissue appearance due to variation in tissue preparation, variation in stain reactivity from different manufacturers/batches, user or protocol variation, and the use of scanners from different manufacturers. In this paper, we present a novel approach to stain normalization in histopathology images. The method is based on nonlinear mapping of a source image to a target image using a representation derived from color deconvolution. Color deconvolution is a method to obtain stain concentration values when the stain matrix, describing how the color is affected by the stain concentration, is given. Rather than relying on standard stain matrices, which may be inappropriate for a given image, we propose the use of a color-based classifier that incorporates a novel stain color descriptor to calculate image-specific stain matrix. In order to demonstrate the efficacy of the proposed stain matrix estimation and stain normalization methods, they are applied to the problem of tumor segmentation in breast histopathology images. The experimental results suggest that the paradigm of color normalization, as a preprocessing step, can significantly help histological image analysis algorithms to demonstrate stable performance which is insensitive to imaging conditions in general and scanner variations in particular."
},
{
"pmid": "20671804",
"title": "Histopathological image analysis: a review.",
"abstract": "Over the past decade, dramatic increases in computational power and improvement in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has now become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-assisted diagnosis (CAD) algorithms in medical imaging to complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the recent state of the art CAD technology for digitized histopathology. This paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology related problems being pursued in the United States and Europe."
},
{
"pmid": "24091390",
"title": "A recent survey on colon cancer detection techniques.",
"abstract": "Colon cancer causes deaths of about half a million people every year. Common method of its detection is histopathological tissue analysis, which, though leads to vital diagnosis, is significantly correlated to the tiredness, experience, and workload of the pathologist. Researchers have been working since decades to get rid of manual inspection, and to develop trustworthy systems for detecting colon cancer. Several techniques, based on spectral/spatial analysis of colon biopsy images, and serum and gene analysis of colon samples, have been proposed in this regard. Due to rapid evolution of colon cancer detection techniques, a latest review of recent research in this field is highly desirable. The aim of this paper is to discuss various colon cancer detection techniques. In this survey, we categorize the techniques on the basis of the adopted methodology and underlying data set, and provide detailed description of techniques in each category. Additionally, this study provides an extensive comparison of various colon cancer detection categories, and of multiple techniques within each category. Further, most of the techniques have been evaluated on similar data set to provide a fair performance comparison. Analysis reveals that neither of the techniques is perfect; however, research community is progressively inching toward the finest possible solution."
},
{
"pmid": "25819060",
"title": "Automated colon cancer detection using hybrid of novel geometric features and some traditional features.",
"abstract": "Automatic classification of colon into normal and malignant classes is complex due to numerous factors including similar colors in different biological constituents of histopathological imagery. Therefore, such techniques, which exploit the textural and geometric properties of constituents of colon tissues, are desired. In this paper, a novel feature extraction strategy that mathematically models the geometric characteristics of constituents of colon tissues is proposed. In this study, we also show that the hybrid feature space encompassing diverse knowledge about the tissues׳ characteristics is quite promising for classification of colon biopsy images. This paper thus presents a hybrid feature space based colon classification (HFS-CC) technique, which utilizes hybrid features for differentiating normal and malignant colon samples. The hybrid feature space is formed to provide the classifier different types of discriminative features such as features having rich information about geometric structure and image texture. Along with the proposed geometric features, a few conventional features such as morphological, texture, scale invariant feature transform (SIFT), and elliptic Fourier descriptors (EFDs) are also used to develop a hybrid feature set. The SIFT features are reduced using minimum redundancy and maximum relevancy (mRMR). Various kernels of support vector machines (SVM) are employed as classifiers, and their performance is analyzed on 174 colon biopsy images. The proposed geometric features have achieved an accuracy of 92.62%, thereby showing their effectiveness. Moreover, the proposed HFS-CC technique achieves 98.07% testing and 99.18% training accuracy. The better performance of HFS-CC is largely due to the discerning ability of the proposed geometric features and the developed hybrid feature space."
},
{
"pmid": "19846369",
"title": "Color graphs for automated cancer diagnosis and grading.",
"abstract": "This paper reports a new structural method to mathematically represent and quantify a tissue for the purpose of automated and objective cancer diagnosis and grading. Unlike the previous structural methods, which quantify a tissue considering the spatial distributions of its cell nuclei, the proposed method relies on the use of distributions of multiple tissue components for the representation. To this end, it constructs a graph on multiple tissue components and colors its edges depending on the component types of their endpoints. Subsequently, it extracts a new set of structural features from these color graphs and uses these features in the classification of tissues. Working with the images of colon tissues, our experiments demonstrate that the color-graph approach leads to 82.65% test accuracy and that it significantly improves the performance of its counterparts."
},
{
"pmid": "23204283",
"title": "A hybrid classification model for digital pathology using structural and statistical pattern recognition.",
"abstract": "Cancer causes deviations in the distribution of cells, leading to changes in biological structures that they form. Correct localization and characterization of these structures are crucial for accurate cancer diagnosis and grading. In this paper, we introduce an effective hybrid model that employs both structural and statistical pattern recognition techniques to locate and characterize the biological structures in a tissue image for tissue quantification. To this end, this hybrid model defines an attributed graph for a tissue image and a set of query graphs as a reference to the normal biological structure. It then locates key regions that are most similar to a normal biological structure by searching the query graphs over the entire tissue graph. Unlike conventional approaches, this hybrid model quantifies the located key regions with two different types of features extracted using structural and statistical techniques. The first type includes embedding of graph edit distances to the query graphs whereas the second one comprises textural features of the key regions. Working with colon tissue images, our experiments demonstrate that the proposed hybrid model leads to higher classification accuracies, compared against the conventional approaches that use only statistical techniques for tissue quantification."
},
{
"pmid": "29670857",
"title": "Radiomics Evaluation of Histological Heterogeneity Using Multiscale Textures Derived From 3D Wavelet Transformation of Multispectral Images.",
"abstract": "PURPOSE\nColorectal cancer (CRC) is markedly heterogeneous and develops progressively toward malignancy through several stages which include stroma (ST), benign hyperplasia (BH), intraepithelial neoplasia (IN) or precursor cancerous lesion, and carcinoma (CA). Identification of the malignancy stage of CRC pathology tissues (PT) allows the most appropriate therapeutic intervention.\n\n\nMETHODS\nThis study investigates multiscale texture features extracted from CRC pathology sections using 3D wavelet transform (3D-WT) filter. Multiscale features were extracted from digital whole slide images of 39 patients that were segmented in a pre-processing step using an active contour model. The capacity for multiscale texture to compare and classify between PTs was investigated using ANOVA significance test and random forest classifier models, respectively.\n\n\nRESULTS\n12 significant features derived from the multiscale texture (i.e., variance, entropy, and energy) were found to discriminate between CRC grades at a significance value of p < 0.01 after correction. Combining multiscale texture features lead to a better predictive capacity compared to prediction models based on individual scale features with an average (±SD) classification accuracy of 93.33 (±3.52)%, sensitivity of 88.33 (± 4.12)%, and specificity of 96.89 (± 3.88)%. Entropy was found to be the best classifier feature across all the PT grades with an average of the area under the curve (AUC) value of 91.17, 94.21, 97.70, 100% for ST, BH, IN, and CA, respectively.\n\n\nCONCLUSION\nOur results suggest that multiscale texture features based on 3D-WT are sensitive enough to discriminate between CRC grades with the entropy feature, the best predictor of pathology grade."
},
{
"pmid": "28400990",
"title": "Classifications of Multispectral Colorectal Cancer Tissues Using Convolution Neural Network.",
"abstract": "BACKGROUND\nColorectal cancer (CRC) is the third most common cancer among men and women. Its diagnosis in early stages, typically done through the analysis of colon biopsy images, can greatly improve the chances of a successful treatment. This paper proposes to use convolution neural networks (CNNs) to predict three tissue types related to the progression of CRC: benign hyperplasia (BH), intraepithelial neoplasia (IN), and carcinoma (Ca).\n\n\nMETHODS\nMultispectral biopsy images of thirty CRC patients were retrospectively analyzed. Images of tissue samples were divided into three groups, based on their type (10 BH, 10 IN, and 10 Ca). An active contour model was used to segment image regions containing pathological tissues. Tissue samples were classified using a CNN containing convolution, max-pooling, and fully-connected layers. Available tissue samples were split into a training set, for learning the CNN parameters, and test set, for evaluating its performance.\n\n\nRESULTS\nAn accuracy of 99.17% was obtained from segmented image regions, outperforming existing approaches based on traditional feature extraction, and classification techniques.\n\n\nCONCLUSIONS\nExperimental results demonstrate the effectiveness of CNN for the classification of CRC tissue types, in particular when using presegmented regions of interest."
},
{
"pmid": "28331793",
"title": "Texture Analysis of Abnormal Cell Images for Predicting the Continuum of Colorectal Cancer.",
"abstract": "Abnormal cell (ABC) is a markedly heterogeneous tissue area and can be categorized into three main types: benign hyperplasia (BH), carcinoma (Ca), and intraepithelial neoplasia (IN) or precursor cancerous lesion. In this study, the goal is to determine and characterize the continuum of colorectal cancer by using a 3D-texture approach. ABC was segmented in preprocessing step using an active contour segmentation technique. Cell types were analyzed based on textural features extracted from the gray level cooccurrence matrices (GLCMs). Significant texture features were selected using an analysis of variance (ANOVA) of ABC with a p value cutoff of p < 0.01. Features selected were reduced with a principal component analysis (PCA), which accounted for 97% of the cumulative variance from significant features. The simulation results identified 158 significant features based on ANOVA from a total of 624 texture features extracted from GLCMs. Performance metrics of ABC discrimination based on significant texture features showed 92.59% classification accuracy, 100% sensitivity, and 94.44% specificity. These findings suggest that texture features extracted from GLCMs are sensitive enough to discriminate between the ABC types and offer the opportunity to predict cell characteristics of colorectal cancer."
},
{
"pmid": "19819181",
"title": "Automatic segmentation of colon glands using object-graphs.",
"abstract": "Gland segmentation is an important step to automate the analysis of biopsies that contain glandular structures. However, this remains a challenging problem as the variation in staining, fixation, and sectioning procedures lead to a considerable amount of artifacts and variances in tissue sections, which may result in huge variances in gland appearances. In this work, we report a new approach for gland segmentation. This approach decomposes the tissue image into a set of primitive objects and segments glands making use of the organizational properties of these objects, which are quantified with the definition of object-graphs. As opposed to the previous literature, the proposed approach employs the object-based information for the gland segmentation problem, instead of using the pixel-based information alone. Working with the images of colon tissues, our experiments demonstrate that the proposed object-graph approach yields high segmentation accuracies for the training and test sets and significantly improves the segmentation performance of its pixel-based counterparts. The experiments also show that the object-based structure of the proposed approach provides more tolerance to artifacts and variances in tissues."
},
{
"pmid": "30594772",
"title": "MILD-Net: Minimal information loss dilated network for gland instance segmentation in colon histology images.",
"abstract": "The analysis of glandular morphology within colon histopathology images is an important step in determining the grade of colon cancer. Despite the importance of this task, manual segmentation is laborious, time-consuming and can suffer from subjectivity among pathologists. The rise of computational pathology has led to the development of automated methods for gland segmentation that aim to overcome the challenges of manual segmentation. However, this task is non-trivial due to the large variability in glandular appearance and the difficulty in differentiating between certain glandular and non-glandular histological structures. Furthermore, a measure of uncertainty is essential for diagnostic decision making. To address these challenges, we propose a fully convolutional neural network that counters the loss of information caused by max-pooling by re-introducing the original image at multiple points within the network. We also use atrous spatial pyramid pooling with varying dilation rates for preserving the resolution and multi-level aggregation. To incorporate uncertainty, we introduce random transformations during test time for an enhanced segmentation result that simultaneously generates an uncertainty map, highlighting areas of ambiguity. We show that this map can be used to define a metric for disregarding predictions with high uncertainty. The proposed network achieves state-of-the-art performance on the GlaS challenge dataset and on a second independent colorectal adenocarcinoma dataset. In addition, we perform gland instance segmentation on whole-slide images from two further datasets to highlight the generalisability of our method. As an extension, we introduce MILD-Net+ for simultaneous gland and lumen segmentation, to increase the diagnostic power of the network."
},
{
"pmid": "24595348",
"title": "A novel polar space random field model for the detection of glandular structures.",
"abstract": "In this paper, we propose a novel method to detect glandular structures in microscopic images of human tissue. We first convert the image from Cartesian space to polar space and then introduce a novel random field model to locate the possible boundary of a gland. Next, we develop a visual feature-based support vector regressor to verify if the detected contour corresponds to a true gland. And finally, we combine the outputs of the random field and the regressor to form the GlandVision algorithm for the detection of glandular structures. Our approach can not only detect the existence of the gland, but also can accurately locate it with pixel accuracy. In the experiments, we treat the task of detecting glandular structures as object (gland) detection and segmentation problems respectively. The results indicate that our new technique outperforms state-of-the-art computer vision algorithms in respective fields."
},
{
"pmid": "27898306",
"title": "DCAN: Deep contour-aware networks for object instance segmentation from histology images.",
"abstract": "In histopathological image analysis, the morphology of histological structures, such as glands and nuclei, has been routinely adopted by pathologists to assess the malignancy degree of adenocarcinomas. Accurate detection and segmentation of these objects of interest from histology images is an essential prerequisite to obtain reliable morphological statistics for quantitative diagnosis. While manual annotation is error-prone, time-consuming and operator-dependant, automated detection and segmentation of objects of interest from histology images can be very challenging due to the large appearance variation, existence of strong mimics, and serious degeneration of histological structures. In order to meet these challenges, we propose a novel deep contour-aware network (DCAN) under a unified multi-task learning framework for more accurate detection and segmentation. In the proposed network, multi-level contextual features are explored based on an end-to-end fully convolutional network (FCN) to deal with the large appearance variation. We further propose to employ an auxiliary supervision mechanism to overcome the problem of vanishing gradients when training such a deep network. More importantly, our network can not only output accurate probability maps of histological objects, but also depict clear contours simultaneously for separating clustered object instances, which further boosts the segmentation performance. Our method ranked the first in two histological object segmentation challenges, including 2015 MICCAI Gland Segmentation Challenge and 2015 MICCAI Nuclei Segmentation Challenge. Extensive experiments on these two challenging datasets demonstrate the superior performance of our method, surpassing all the other methods by a significant margin."
},
{
"pmid": "24561346",
"title": "Ensemble classification of colon biopsy images based on information rich hybrid features.",
"abstract": "In recent years, classification of colon biopsy images has become an active research area. Traditionally, colon cancer is diagnosed using microscopic analysis. However, the process is subjective and leads to considerable inter/intra observer variation. Therefore, reliable computer-aided colon cancer detection techniques are in high demand. In this paper, we propose a colon biopsy image classification system, called CBIC, which benefits from discriminatory capabilities of information rich hybrid feature spaces, and performance enhancement based on ensemble classification methodology. Normal and malignant colon biopsy images differ with each other in terms of the color distribution of different biological constituents. The colors of different constituents are sharp in normal images, whereas the colors diffuse with each other in malignant images. In order to exploit this variation, two feature types, namely color components based statistical moments (CCSM) and Haralick features have been proposed, which are color components based variants of their traditional counterparts. Moreover, in normal colon biopsy images, epithelial cells possess sharp and well-defined edges. Histogram of oriented gradients (HOG) based features have been employed to exploit this information. Different combinations of hybrid features have been constructed from HOG, CCSM, and Haralick features. The minimum Redundancy Maximum Relevance (mRMR) feature selection method has been employed to select meaningful features from individual and hybrid feature sets. Finally, an ensemble classifier based on majority voting has been proposed, which classifies colon biopsy images using the selected features. Linear, RBF, and sigmoid SVM have been employed as base classifiers. The proposed system has been tested on 174 colon biopsy images, and improved performance (=98.85%) has been observed compared to previously reported studies. Additionally, the use of mRMR method has been justified by comparing the performance of CBIC on original and reduced feature sets."
},
{
"pmid": "29203775",
"title": "Glandular Morphometrics for Objective Grading of Colorectal Adenocarcinoma Histology Images.",
"abstract": "Determining the grade of colon cancer from tissue slides is a routine part of the pathological analysis. In the case of colorectal adenocarcinoma (CRA), grading is partly determined by morphology and degree of formation of glandular structures. Achieving consistency between pathologists is difficult due to the subjective nature of grading assessment. An objective grading using computer algorithms will be more consistent, and will be able to analyse images in more detail. In this paper, we measure the shape of glands with a novel metric that we call the Best Alignment Metric (BAM). We show a strong correlation between a novel measure of glandular shape and grade of the tumour. We used shape specific parameters to perform a two-class classification of images into normal or cancerous tissue and a three-class classification into normal, low grade cancer, and high grade cancer. The task of detecting gland boundaries, which is a prerequisite of shape-based analysis, was carried out using a deep convolutional neural network designed for segmentation of glandular structures. A support vector machine (SVM) classifier was trained using shape features derived from BAM. Through cross-validation, we achieved an accuracy of 97% for the two-class and 91% for three-class classification."
},
{
"pmid": "26357050",
"title": "GECC: Gene Expression Based Ensemble Classification of Colon Samples.",
"abstract": "Gene expression deviates from its normal composition in case a patient has cancer. This variation can be used as an effective tool to find cancer. In this study, we propose a novel gene expressions based colon classification scheme (GECC) that exploits the variations in gene expressions for classifying colon gene samples into normal and malignant classes. Novelty of GECC is in two complementary ways. First, to cater overwhelmingly larger size of gene based data sets, various feature extraction strategies, like, chi-square, F-Score, principal component analysis (PCA) and minimum redundancy and maximum relevancy (mRMR) have been employed, which select discriminative genes amongst a set of genes. Second, a majority voting based ensemble of support vector machine (SVM) has been proposed to classify the given gene based samples. Previously, individual SVM models have been used for colon classification, however, their performance is limited. In this research study, we propose an SVM-ensemble based new approach for gene based classification of colon, wherein the individual SVM models are constructed through the learning of different SVM kernels, like, linear, polynomial, radial basis function (RBF), and sigmoid. The predicted results of individual models are combined through majority voting. In this way, the combined decision space becomes more discriminative. The proposed technique has been tested on four colon, and several other binary-class gene expression data sets, and improved performance has been achieved compared to previously reported gene based colon cancer detection techniques. The computational time required for the training and testing of 208 × 5,851 data set has been 591.01 and 0.019 s, respectively."
},
{
"pmid": "25853091",
"title": "Cancer-associated fibroblasts connect metastasis-promoting communication in colorectal cancer.",
"abstract": "Colorectal cancer (CRC) progression and eventually metastasis is directed in many aspects by a circuitous ecosystem consisting of an extracellular matrix scaffold populated by cancer-associated fibroblasts (CAFs), endothelial cells, and diverse immune cells. CAFs are recruited from local tissue-resident fibroblasts or pericryptal fibroblasts and distant fibroblast precursors. CAFs are highly abundant in CRC. In this review, we apply the metastasis-promoting communication of colorectal CAFs to 10 cancer hallmarks described by Hanahan and Weinberg. CAFs influence innate and adaptive tumor immune responses. Using datasets from previously published work, we re-explore the potential messages implicated in this process. Fibroblasts present in metastasis (metastasis-associated fibroblasts) from CRC may have other characteristics and functional roles than CAFs in the primary tumor. Since CAFs connect metastasis-promoting communication, CAF markers are potential prognostic biomarkers. CAFs and their products are possible targets for novel therapeutic strategies."
},
{
"pmid": "26994146",
"title": "Immune and Stromal Classification of Colorectal Cancer Is Associated with Molecular Subtypes and Relevant for Precision Immunotherapy.",
"abstract": "PURPOSE\nThe tumor microenvironment is formed by many distinct and interacting cell populations, and its composition may predict patients' prognosis and response to therapies. Colorectal cancer is a heterogeneous disease in which immune classifications and four consensus molecular subgroups (CMS) have been described. Our aim was to integrate the composition of the tumor microenvironment with the consensus molecular classification of colorectal cancer.\n\n\nEXPERIMENTAL DESIGN\nWe retrospectively analyzed the composition and the functional orientation of the immune, fibroblastic, and angiogenic microenvironment of 1,388 colorectal cancer tumors from three independent cohorts using transcriptomics. We validated our findings using immunohistochemistry.\n\n\nRESULTS\nWe report that colorectal cancer molecular subgroups and microenvironmental signatures are highly correlated. Out of the four molecular subgroups, two highly express immune-specific genes. The good-prognosis microsatellite instable-enriched subgroup (CMS1) is characterized by overexpression of genes specific to cytotoxic lymphocytes. In contrast, the poor-prognosis mesenchymal subgroup (CMS4) expresses markers of lymphocytes and of cells of monocytic origin. The mesenchymal subgroup also displays an angiogenic, inflammatory, and immunosuppressive signature, a coordinated pattern that we also found in breast (n = 254), ovarian (n = 97), lung (n = 80), and kidney (n = 143) cancers. Pathologic examination revealed that the mesenchymal subtype is characterized by a high density of fibroblasts that likely produce the chemokines and cytokines that favor tumor-associated inflammation and support angiogenesis, resulting in a poor prognosis. In contrast, the canonical (CMS2) and metabolic (CMS3) subtypes with intermediate prognosis exhibit low immune and inflammatory signatures.\n\n\nCONCLUSIONS\nThe distinct immune orientations of the colorectal cancer molecular subtypes pave the way for tailored immunotherapies. Clin Cancer Res; 22(16); 4057-66. ©2016 AACR."
},
{
"pmid": "27614792",
"title": "Gland segmentation in colon histology images: The glas challenge contest.",
"abstract": "Colorectal adenocarcinoma originating in intestinal glandular structures is the most common form of colon cancer. In clinical practice, the morphology of intestinal glands, including architectural appearance and glandular formation, is used by pathologists to inform prognosis and plan the treatment of individual patients. However, achieving good inter-observer as well as intra-observer reproducibility of cancer grading is still a major challenge in modern pathology. An automated approach which quantifies the morphology of glands is a solution to the problem. This paper provides an overview to the Gland Segmentation in Colon Histology Images Challenge Contest (GlaS) held at MICCAI'2015. Details of the challenge, including organization, dataset and evaluation criteria, are presented, along with the method descriptions and evaluation results from the top performing methods."
},
{
"pmid": "25993703",
"title": "A Stochastic Polygons Model for Glandular Structures in Colon Histology Images.",
"abstract": "In this paper, we present a stochastic model for glandular structures in histology images of tissue slides stained with Hematoxylin and Eosin, choosing colon tissue as an example. The proposed Random Polygons Model (RPM) treats each glandular structure in an image as a polygon made of a random number of vertices, where the vertices represent approximate locations of epithelial nuclei. We formulate the RPM as a Bayesian inference problem by defining a prior for spatial connectivity and arrangement of neighboring epithelial nuclei and a likelihood for the presence of a glandular structure. The inference is made via a Reversible-Jump Markov chain Monte Carlo simulation. To the best of our knowledge, all existing published algorithms for gland segmentation are designed to mainly work on healthy samples, adenomas, and low grade adenocarcinomas. One of them has been demonstrated to work on intermediate grade adenocarcinomas at its best. Our experimental results show that the RPM yields favorable results, both quantitatively and qualitatively, for extraction of glandular structures in histology images of normal human colon tissues as well as benign and cancerous tissues, excluding undifferentiated carcinomas."
},
{
"pmid": "22270352",
"title": "Ensemble sparse classification of Alzheimer's disease.",
"abstract": "The high-dimensional pattern classification methods, e.g., support vector machines (SVM), have been widely investigated for analysis of structural and functional brain images (such as magnetic resonance imaging (MRI)) to assist the diagnosis of Alzheimer's disease (AD) including its prodromal stage, i.e., mild cognitive impairment (MCI). Most existing classification methods extract features from neuroimaging data and then construct a single classifier to perform classification. However, due to noise and small sample size of neuroimaging data, it is challenging to train only a global classifier that can be robust enough to achieve good classification performance. In this paper, instead of building a single global classifier, we propose a local patch-based subspace ensemble method which builds multiple individual classifiers based on different subsets of local patches and then combines them for more accurate and robust classification. Specifically, to capture the local spatial consistency, each brain image is partitioned into a number of local patches and a subset of patches is randomly selected from the patch pool to build a weak classifier. Here, the sparse representation-based classifier (SRC) method, which has shown to be effective for classification of image data (e.g., face), is used to construct each weak classifier. Then, multiple weak classifiers are combined to make the final decision. We evaluate our method on 652 subjects (including 198 AD patients, 225 MCI and 229 normal controls) from Alzheimer's Disease Neuroimaging Initiative (ADNI) database using MR images. The experimental results show that our method achieves an accuracy of 90.8% and an area under the ROC curve (AUC) of 94.86% for AD classification and an accuracy of 87.85% and an AUC of 92.90% for MCI classification, respectively, demonstrating a very promising performance of our method compared with the state-of-the-art methods for AD/MCI classification using MR images."
},
{
"pmid": "31185611",
"title": "Machine-Learning-Based Prediction of Treatment Outcomes Using MR Imaging-Derived Quantitative Tumor Information in Patients with Sinonasal Squamous Cell Carcinomas: A Preliminary Study.",
"abstract": "The purpose of this study was to determine the predictive power for treatment outcome of a machine-learning algorithm combining magnetic resonance imaging (MRI)-derived data in patients with sinonasal squamous cell carcinomas (SCCs). Thirty-six primary lesions in 36 patients were evaluated. Quantitative morphological parameters and intratumoral characteristics from T2-weighted images, tumor perfusion parameters from arterial spin labeling (ASL) and tumor diffusion parameters of five diffusion models from multi-b-value diffusion-weighted imaging (DWI) were obtained. Machine learning by a non-linear support vector machine (SVM) was used to construct the best diagnostic algorithm for the prediction of local control and failure. The diagnostic accuracy was evaluated using a 9-fold cross-validation scheme, dividing patients into training and validation sets. Classification criteria for the division of local control and failure in nine training sets could be constructed with a mean sensitivity of 0.98, specificity of 0.91, positive predictive value (PPV) of 0.94, negative predictive value (NPV) of 0.97, and accuracy of 0.96. The nine validation data sets showed a mean sensitivity of 1.0, specificity of 0.82, PPV of 0.86, NPV of 1.0, and accuracy of 0.92. In conclusion, a machine-learning algorithm using various MR imaging-derived data can be helpful for the prediction of treatment outcomes in patients with sinonasal SCCs."
},
{
"pmid": "30917548",
"title": "Validation of miRNAs as Breast Cancer Biomarkers with a Machine Learning Approach.",
"abstract": "Certain small noncoding microRNAs (miRNAs) are differentially expressed in normal tissues and cancers, which makes them great candidates for biomarkers for cancer. Previously, a selected subset of miRNAs has been experimentally verified to be linked to breast cancer. In this paper, we validated the importance of these miRNAs using a machine learning approach on miRNA expression data. We performed feature selection, using Information Gain (IG), Chi-Squared (CHI2) and Least Absolute Shrinkage and Selection Operation (LASSO), on the set of these relevant miRNAs to rank them by importance. We then performed cancer classification using these miRNAs as features using Random Forest (RF) and Support Vector Machine (SVM) classifiers. Our results demonstrated that the miRNAs ranked higher by our analysis had higher classifier performance. Performance becomes lower as the rank of the miRNA decreases, confirming that these miRNAs had different degrees of importance as biomarkers. Furthermore, we discovered that using a minimum of three miRNAs as biomarkers for breast cancers can be as effective as using the entire set of 1800 miRNAs. This work suggests that machine learning is a useful tool for functional studies of miRNAs for cancer detection and diagnosis."
}
] |
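As a concrete illustration of the majority-voting SVM ensembles surveyed in the related work of the preceding record (for example, the ensemble colon-biopsy classifiers of Rathore et al.), the following is a minimal scikit-learn sketch. It is a toy example under stated assumptions, not the published pipeline: the feature matrix is random noise standing in for the texture and morphology features those studies compute, and the labels are synthetic.

```python
# Illustrative sketch only: a hard (majority) voting ensemble of SVMs with
# linear, RBF and sigmoid kernels, in the spirit of the ensemble colon-biopsy
# classifiers described above. Data are synthetic placeholders, not the
# features or images used in the cited studies.
import numpy as np
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 30))        # 120 hypothetical images x 30 hypothetical features
y = rng.integers(0, 2, size=120)      # hypothetical labels: 0 = benign, 1 = malignant

ensemble = VotingClassifier(
    estimators=[
        ("linear", make_pipeline(StandardScaler(), SVC(kernel="linear"))),
        ("rbf", make_pipeline(StandardScaler(), SVC(kernel="rbf"))),
        ("sigmoid", make_pipeline(StandardScaler(), SVC(kernel="sigmoid"))),
    ],
    voting="hard",  # hard voting = simple majority vote of the base classifiers
)

scores = cross_val_score(ensemble, X, y, cv=5)
print(f"cross-validated accuracy on synthetic data: {scores.mean():.2f}")
```

With real gland-level or texture features in place of the random matrix, the same scaffold reproduces the majority-voting design choice described above; switching to voting="soft" would instead average class probabilities, which requires probability=True on the SVC base classifiers.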
BMC Medical Informatics and Decision Making | 31805934 | PMC6896258 | 10.1186/s12911-019-0992-8 | Improving reference prioritisation with PICO recognition | Background: Machine learning can assist with multiple tasks during systematic reviews to facilitate the rapid retrieval of relevant references during screening and to identify and extract information relevant to the study characteristics, which include the PICO elements of patient/population, intervention, comparator, and outcomes. The latter requires techniques for identifying and categorising fragments of text, known as named entity recognition. Methods: A publicly available corpus of PICO annotations on biomedical abstracts is used to train a named entity recognition model, which is implemented as a recurrent neural network. This model is then applied to a separate collection of abstracts for references from systematic reviews within biomedical and health domains. The occurrences of words tagged in the context of specific PICO contexts are used as additional features for a relevancy classification model. Simulations of the machine learning-assisted screening are used to evaluate the work saved by the relevancy model with and without the PICO features. Chi-squared and statistical significance of positive predicted values are used to identify words that are more indicative of relevancy within PICO contexts. Results: Inclusion of PICO features improves the performance metric on 15 of the 20 collections, with substantial gains on certain systematic reviews. Examples of words whose PICO context is more precise can explain this increase. Conclusions: Words within PICO-tagged segments in abstracts are predictive features for determining inclusion. Combining the PICO annotation model into the relevancy classification pipeline is a promising approach. The annotations may be useful on their own to aid users in pinpointing necessary information for data extraction, or to facilitate semantic search. | Related work. Previous work has shown that there are multiple avenues for automation within systematic reviews [26–28]. Examples include retrieval of high-quality articles [29–32], risk-of-bias assessment [33–36], and identification of randomised controlled trials [37, 38]. Matching the focus of this work, we review previous work on data extraction [39] to automatically isolate PICO and other study characteristics, which can serve as methods for aiding abstract-level screening. The two are clearly related, since inclusion and exclusion criteria can be decomposed into requirements for PICO and study characteristics to facilitate search [40]. Extracting PICO elements (or information in broader schema [41]) at the phrase level [42–44] is a difficult problem due to the disagreement between human experts on the exact words constituting a PICO mention [45, 46]. Thus, many approaches [39] first determine the sentences relevant to the different PICO elements, using either rules (formulated as regular expressions) or ML models [42, 46–52]. Finer-grained data extraction can then be applied to the identified sentences to extract the words or phrases for demographic information (age, sex, ethnicity, etc.) [42, 48, 52–54], specific intervention arms [55], or the number of trial participants [56]. Instead of classifying each sentence independently, the structured form of abstracts can be exploited by identifying PICO sentences simultaneously with rhetorical types (aim, method, results, and conclusions) in the abstract [57–60].
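As a toy illustration of the rule-based, sentence-level PICO identification mentioned above, the following Python sketch flags sentences that match simple regular-expression patterns for population, intervention, and outcome terms. The patterns and the example abstract are invented and far cruder than the dictionaries and models used by the cited systems or by this paper's neural tagger.

```python
# Illustrative sketch only: a toy regular-expression tagger that flags which
# sentences of an abstract look like Population, Intervention or Outcome
# sentences. Patterns and example text are invented for illustration.
import re

PICO_PATTERNS = {
    "population": re.compile(r"\b(patients?|participants?|adults?|women|men|children)\b", re.I),
    "intervention": re.compile(r"\b(randomi[sz]ed|received|treated with|therapy|placebo)\b", re.I),
    "outcome": re.compile(r"\b(primary outcome|mortality|survival|pain|adverse events?)\b", re.I),
}

def flag_pico_sentences(abstract):
    """Return (sentence, matched PICO elements) pairs for a plain-text abstract."""
    sentences = re.split(r"(?<=[.!?])\s+", abstract.strip())
    return [(s, [name for name, pat in PICO_PATTERNS.items() if pat.search(s)])
            for s in sentences if s]

example = ("120 adults with chronic low back pain were randomised. "
           "Participants received yoga therapy or usual care. "
           "The primary outcome was pain intensity at 12 weeks.")
for sentence, elements in flag_pico_sentences(example):
    print(elements or ["none"], "->", sentence)
```

ML approaches replace such hand-written patterns with learned sentence classifiers or, as in this paper, a neural sequence tagger that labels the words themselves.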
More broadly, PICO and other information can be extracted directly from full-text articles [61–65]. Rather than extract specific text, Singh et al. predict which medical concepts in the unified medical language system (UMLS) [66] are described in the full text for each PICO element [67]. They use a neural network model that exploits embeddings of UMLS concepts in addition to word embeddings. The predicted concepts could be used as alternative features rather than just the extracted text. This would supplement manually added metadata such as Medical Subject Headings (MeSH) curated by the U.S. National Library of Medicine [68], which are not always available or do not always have the necessary categorisations. Our proposed approach differs from existing approaches by both operating at the subsentence level (words and phrases) and using a neural network model for processing text [69] without hand-engineered features. In particular, the proposed approach uses an existing model architecture [19] originally designed for named entity recognition [70] to identify mentions of biomedical concepts such as diseases, drugs, and anatomical parts [71, 72]. The model builds on previous neural architectures [22, 73, 74]. The model is jointly trained to predict population, intervention, and outcomes in each sentence in the abstract, and can handle nested mentions where one element's mention (like an intervention) can be contained within another, like a population. This capability is novel to this work and, in theory, can provide higher recall than methods that do not allow nested PICO elements. Automatically identified PICO information can improve other automation tasks such as clinical question answering [51] and predicting clinical trial eligibility [75, 76]. Likewise, inclusion and exclusion criteria can be decomposed into requirements for PICO and study characteristics to facilitate search [40]. Recently, Tsafnat et al. have shown the screening ability of automatic PICO extraction [18] for systematic reviews. They use manually designed filters (using dictionaries and rules) [77, 78] for key inclusion criteria, mentions of specific outcomes, population characteristics, and interventions (exposures) to filter collections, with impressive gains. Our goal is to replace the manually designed filters with ML modelling that leverages the automatically extracted PICO text to determine an efficient filter. A variety of ML models (different classifiers, algorithms, and feature sets) have been proposed for screening references for systematic reviews [14, 15, 79–95]. Yet, to our knowledge, none of these relevancy classifiers has used the output of PICO recognition as input. | [
"8411577",
"10517715",
"29695296",
"25005128",
"29778096",
"15561789",
"26104742",
"26659355",
"25656516",
"26073888",
"21208435",
"19166975",
"18852316",
"22677493",
"20595313",
"21084178",
"23428470",
"22961102",
"27293211",
"16112549",
"9377276",
"10877288",
"22912343",
"28648605",
"19567792"
] | [
{
"pmid": "29695296",
"title": "Automated screening of research studies for systematic reviews using study characteristics.",
"abstract": "BACKGROUND\nScreening candidate studies for inclusion in a systematic review is time-consuming when conducted manually. Automation tools could reduce the human effort devoted to screening. Existing methods use supervised machine learning which train classifiers to identify relevant words in the abstracts of candidate articles that have previously been labelled by a human reviewer for inclusion or exclusion. Such classifiers typically reduce the number of abstracts requiring manual screening by about 50%.\n\n\nMETHODS\nWe extracted four key characteristics of observational studies (population, exposure, confounders and outcomes) from the text of titles and abstracts for all articles retrieved using search strategies from systematic reviews. Our screening method excluded studies if they did not meet a predefined set of characteristics. The method was evaluated using three systematic reviews. Screening results were compared to the actual inclusion list of the reviews.\n\n\nRESULTS\nThe best screening threshold rule identified studies that mentioned both exposure (E) and outcome (O) in the study abstract. This screening rule excluded 93.7% of retrieved studies with a recall of 98%.\n\n\nCONCLUSIONS\nFiltering studies for inclusion in a systematic review based on the detection of key study characteristics in abstracts significantly outperformed standard approaches to automated screening and appears worthy of further development and evaluation."
},
{
"pmid": "25005128",
"title": "Systematic review automation technologies.",
"abstract": "Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise and timeliness are often quoted as major contributors for the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that see the convergence of several parallel research projects.We surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed for realizing automation for the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage the optimized workflow will lead to system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time."
},
{
"pmid": "29778096",
"title": "Making progress with the automation of systematic reviews: principles of the International Collaboration for the Automation of Systematic Reviews (ICASR).",
"abstract": "Systematic reviews (SR) are vital to health care, but have become complicated and time-consuming, due to the rapid expansion of evidence to be synthesised. Fortunately, many tasks of systematic reviews have the potential to be automated or may be assisted by automation. Recent advances in natural language processing, text mining and machine learning have produced new algorithms that can accurately mimic human endeavour in systematic review activity, faster and more cheaply. Automation tools need to be able to work together, to exchange data and results. Therefore, we initiated the International Collaboration for the Automation of Systematic Reviews (ICASR), to successfully put all the parts of automation of systematic review production together. The first meeting was held in Vienna in October 2015. We established a set of principles to enable tools to be developed and integrated into toolkits.This paper sets out the principles devised at that meeting, which cover the need for improvement in efficiency of SR tasks, automation across the spectrum of SR tasks, continuous improvement, adherence to high quality standards, flexibility of use and combining components, the need for a collaboration and varied skills, the desire for open source, shared code and evaluation, and a requirement for replicability through rigorous and open evaluation.Automation has a great potential to improve the speed of systematic reviews. Considerable work is already being done on many of the steps involved in a review. The 'Vienna Principles' set out in this paper aim to guide a more coordinated effort which will allow the integration of work by separate teams and build on the experience, code and evaluations done by the many teams working across the globe."
},
{
"pmid": "15561789",
"title": "Text categorization models for high-quality article retrieval in internal medicine.",
"abstract": "OBJECTIVE Finding the best scientific evidence that applies to a patient problem is becoming exceedingly difficult due to the exponential growth of medical publications. The objective of this study was to apply machine learning techniques to automatically identify high-quality, content-specific articles for one time period in internal medicine and compare their performance with previous Boolean-based PubMed clinical query filters of Haynes et al. DESIGN The selection criteria of the ACP Journal Club for articles in internal medicine were the basis for identifying high-quality articles in the areas of etiology, prognosis, diagnosis, and treatment. Naive Bayes, a specialized AdaBoost algorithm, and linear and polynomial support vector machines were applied to identify these articles. MEASUREMENTS The machine learning models were compared in each category with each other and with the clinical query filters using area under the receiver operating characteristic curves, 11-point average recall precision, and a sensitivity/specificity match method. RESULTS In most categories, the data-induced models have better or comparable sensitivity, specificity, and precision than the clinical query filters. The polynomial support vector machine models perform the best among all learning methods in ranking the articles as evaluated by area under the receiver operating curve and 11-point average recall precision. CONCLUSION This research shows that, using machine learning methods, it is possible to automatically build models for retrieving high-quality, content-specific articles using inclusion or citation by the ACP Journal Club as a gold standard in a given time period in internal medicine that perform better than the 1994 PubMed clinical query filters."
},
{
"pmid": "26104742",
"title": "RobotReviewer: evaluation of a system for automatically assessing bias in clinical trials.",
"abstract": "OBJECTIVE\nTo develop and evaluate RobotReviewer, a machine learning (ML) system that automatically assesses bias in clinical trials. From a (PDF-formatted) trial report, the system should determine risks of bias for the domains defined by the Cochrane Risk of Bias (RoB) tool, and extract supporting text for these judgments.\n\n\nMETHODS\nWe algorithmically annotated 12,808 trial PDFs using data from the Cochrane Database of Systematic Reviews (CDSR). Trials were labeled as being at low or high/unclear risk of bias for each domain, and sentences were labeled as being informative or not. This dataset was used to train a multi-task ML model. We estimated the accuracy of ML judgments versus humans by comparing trials with two or more independent RoB assessments in the CDSR. Twenty blinded experienced reviewers rated the relevance of supporting text, comparing ML output with equivalent (human-extracted) text from the CDSR.\n\n\nRESULTS\nBy retrieving the top 3 candidate sentences per document (top3 recall), the best ML text was rated more relevant than text from the CDSR, but not significantly (60.4% ML text rated 'highly relevant' v 56.5% of text from reviews; difference +3.9%, [-3.2% to +10.9%]). Model RoB judgments were less accurate than those from published reviews, though the difference was <10% (overall accuracy 71.0% with ML v 78.3% with CDSR).\n\n\nCONCLUSION\nRisk of bias assessment may be automated with reasonable accuracy. Automatically identified text supporting bias assessment is of equal quality to the manually identified text in the CDSR. This technology could substantially reduce reviewer workload and expedite evidence syntheses."
},
{
"pmid": "26659355",
"title": "Machine learning to assist risk-of-bias assessments in systematic reviews.",
"abstract": "BACKGROUND\nRisk-of-bias assessments are now a standard component of systematic reviews. At present, reviewers need to manually identify relevant parts of research articles for a set of methodological elements that affect the risk of bias, in order to make a risk-of-bias judgement for each of these elements. We investigate the use of text mining methods to automate risk-of-bias assessments in systematic reviews. We aim to identify relevant sentences within the text of included articles, to rank articles by risk of bias and to reduce the number of risk-of-bias assessments that the reviewers need to perform by hand.\n\n\nMETHODS\nWe use supervised machine learning to train two types of models, for each of the three risk-of-bias properties of sequence generation, allocation concealment and blinding. The first model predicts whether a sentence in a research article contains relevant information. The second model predicts a risk-of-bias value for each research article. We use logistic regression, where each independent variable is the frequency of a word in a sentence or article, respectively.\n\n\nRESULTS\nWe found that sentences can be successfully ranked by relevance with area under the receiver operating characteristic (ROC) curve (AUC) > 0.98. Articles can be ranked by risk of bias with AUC > 0.72. We estimate that more than 33% of articles can be assessed by just one reviewer, where two reviewers are normally required.\n\n\nCONCLUSIONS\nWe show that text mining can be used to assist risk-of-bias assessments."
},
{
"pmid": "25656516",
"title": "Automated confidence ranked classification of randomized controlled trial articles: an aid to evidence-based medicine.",
"abstract": "OBJECTIVE\nFor many literature review tasks, including systematic review (SR) and other aspects of evidence-based medicine, it is important to know whether an article describes a randomized controlled trial (RCT). Current manual annotation is not complete or flexible enough for the SR process. In this work, highly accurate machine learning predictive models were built that include confidence predictions of whether an article is an RCT.\n\n\nMATERIALS AND METHODS\nThe LibSVM classifier was used with forward selection of potential feature sets on a large human-related subset of MEDLINE to create a classification model requiring only the citation, abstract, and MeSH terms for each article.\n\n\nRESULTS\nThe model achieved an area under the receiver operating characteristic curve of 0.973 and mean squared error of 0.013 on the held out year 2011 data. Accurate confidence estimates were confirmed on a manually reviewed set of test articles. A second model not requiring MeSH terms was also created, and performs almost as well.\n\n\nDISCUSSION\nBoth models accurately rank and predict article RCT confidence. Using the model and the manually reviewed samples, it is estimated that about 8000 (3%) additional RCTs can be identified in MEDLINE, and that 5% of articles tagged as RCTs in Medline may not be identified.\n\n\nCONCLUSION\nRetagging human-related studies with a continuously valued RCT confidence is potentially more useful for article ranking and review than a simple yes/no prediction. The automated RCT tagging tool should offer significant savings of time and effort during the process of writing SRs, and is a key component of a multistep text mining pipeline that we are building to streamline SR workflow. In addition, the model may be useful for identifying errors in MEDLINE publication types. The RCT confidence predictions described here have been made available to users as a web service with a user query form front end at: http://arrowsmith.psych.uic.edu/cgi-bin/arrowsmith_uic/RCT_Tagger.cgi."
},
{
"pmid": "26073888",
"title": "Automating data extraction in systematic reviews: a systematic review.",
"abstract": "BACKGROUND\nAutomation of the parts of systematic review process, specifically the data extraction step, may be an important strategy to reduce the time necessary to complete a systematic review. However, the state of the science of automatically extracting data elements from full texts has not been well described. This paper performs a systematic review of published and unpublished methods to automate data extraction for systematic reviews.\n\n\nMETHODS\nWe systematically searched PubMed, IEEEXplore, and ACM Digital Library to identify potentially relevant articles. We included reports that met the following criteria: 1) methods or results section described what entities were or need to be extracted, and 2) at least one entity was automatically extracted with evaluation results that were presented for that entity. We also reviewed the citations from included reports.\n\n\nRESULTS\nOut of a total of 1190 unique citations that met our search criteria, we found 26 published reports describing automatic extraction of at least one of more than 52 potential data elements used in systematic reviews. For 25 (48 %) of the data elements used in systematic reviews, there were attempts from various researchers to extract information automatically from the publication text. Out of these, 14 (27 %) data elements were completely extracted, but the highest number of data elements extracted automatically by a single study was 7. Most of the data elements were extracted with F-scores (a mean of sensitivity and positive predictive value) of over 70 %.\n\n\nCONCLUSIONS\nWe found no unified information extraction framework tailored to the systematic review process, and published reports focused on a limited (1-7) number of data elements. Biomedical natural language processing techniques have not been fully utilized to fully or even partially automate the data extraction step of systematic reviews."
},
{
"pmid": "21208435",
"title": "Combinatorial analysis and algorithms for quasispecies reconstruction using next-generation sequencing.",
"abstract": "BACKGROUND\nNext-generation sequencing (NGS) offers a unique opportunity for high-throughput genomics and has potential to replace Sanger sequencing in many fields, including de-novo sequencing, re-sequencing, meta-genomics, and characterisation of infectious pathogens, such as viral quasispecies. Although methodologies and software for whole genome assembly and genome variation analysis have been developed and refined for NGS data, reconstructing a viral quasispecies using NGS data remains a challenge. This application would be useful for analysing intra-host evolutionary pathways in relation to immune responses and antiretroviral therapy exposures. Here we introduce a set of formulae for the combinatorial analysis of a quasispecies, given a NGS re-sequencing experiment and an algorithm for quasispecies reconstruction. We require that sequenced fragments are aligned against a reference genome, and that the reference genome is partitioned into a set of sliding windows (amplicons). The reconstruction algorithm is based on combinations of multinomial distributions and is designed to minimise the reconstruction of false variants, called in-silico recombinants.\n\n\nRESULTS\nThe reconstruction algorithm was applied to error-free simulated data and reconstructed a high percentage of true variants, even at a low genetic diversity, where the chance to obtain in-silico recombinants is high. Results on empirical NGS data from patients infected with hepatitis B virus, confirmed its ability to characterise different viral variants from distinct patients.\n\n\nCONCLUSIONS\nThe combinatorial analysis provided a description of the difficulty to reconstruct a quasispecies, given a determined amplicon partition and a measure of population diversity. The reconstruction algorithm showed good performance both considering simulated data and real data, even in presence of sequencing errors."
},
{
"pmid": "19166975",
"title": "Towards identifying intervention arms in randomized controlled trials: extracting coordinating constructions.",
"abstract": "BACKGROUND\nLarge numbers of reports of randomized controlled trials (RCTs) are published each year, and it is becoming increasingly difficult for clinicians practicing evidence-based medicine to find answers to clinical questions. The automatic machine extraction of RCT experimental details, including design methodology and outcomes, could help clinicians and reviewers locate relevant studies more rapidly and easily.\n\n\nAIM\nThis paper investigates how the comparison of interventions is documented in the abstracts of published RCTs. The ultimate goal is to use automated text mining to locate each intervention arm of a trial. This preliminary work aims to identify coordinating constructions, which are prevalent in the expression of intervention comparisons.\n\n\nMETHODS AND RESULTS\nAn analysis of the types of constructs that describe the allocation of intervention arms is conducted, revealing that the compared interventions are predominantly embedded in coordinating constructions. A method is developed for identifying the descriptions of the assignment of treatment arms in clinical trials, using a full sentence parser to locate coordinating constructions and a statistical classifier for labeling positive examples. Predicate-argument structures are used along with other linguistic features with a maximum entropy classifier. An F-score of 0.78 is obtained for labeling relevant coordinating constructions in an independent test set.\n\n\nCONCLUSIONS\nThe intervention arms of a randomized controlled trials can be identified by machine extraction incorporating syntactic features derived from full sentence parsing."
},
{
"pmid": "18852316",
"title": "A method of extracting the number of trial participants from abstracts describing randomized controlled trials.",
"abstract": "We have developed a method for extracting the number of trial participants from abstracts describing randomized controlled trials (RCTs); the number of trial participants may be an indication of the reliability of the trial. The method depends on statistical natural language processing. The number of interest was determined by a binary supervised classification based on a support vector machine algorithm. The method was trialled on 223 abstracts in which the number of trial participants was identified manually to act as a gold standard. Automatic extraction resulted in 2 false-positive and 19 false-negative classifications. The algorithm was capable of extracting the number of trial participants with an accuracy of 97% and an F-measure of 0.84. The algorithm may improve the selection of relevant articles in regard to question-answering, and hence may assist in decision-making."
},
{
"pmid": "22677493",
"title": "Screening nonrandomized studies for medical systematic reviews: a comparative study of classifiers.",
"abstract": "OBJECTIVES\nTo investigate whether (1) machine learning classifiers can help identify nonrandomized studies eligible for full-text screening by systematic reviewers; (2) classifier performance varies with optimization; and (3) the number of citations to screen can be reduced.\n\n\nMETHODS\nWe used an open-source, data-mining suite to process and classify biomedical citations that point to mostly nonrandomized studies from 2 systematic reviews. We built training and test sets for citation portions and compared classifier performance by considering the value of indexing, various feature sets, and optimization. We conducted our experiments in 2 phases. The design of phase I with no optimization was: 4 classifiers × 3 feature sets × 3 citation portions. Classifiers included k-nearest neighbor, naïve Bayes, complement naïve Bayes, and evolutionary support vector machine. Feature sets included bag of words, and 2- and 3-term n-grams. Citation portions included titles, titles and abstracts, and full citations with metadata. Phase II with optimization involved a subset of the classifiers, as well as features extracted from full citations, and full citations with overweighted titles. We optimized features and classifier parameters by manually setting information gain thresholds outside of a process for iterative grid optimization with 10-fold cross-validations. We independently tested models on data reserved for that purpose and statistically compared classifier performance on 2 types of feature sets. We estimated the number of citations needed to screen by reviewers during a second pass through a reduced set of citations.\n\n\nRESULTS\nIn phase I, the evolutionary support vector machine returned the best recall for bag of words extracted from full citations; the best classifier with respect to overall performance was k-nearest neighbor. No classifier attained good enough recall for this task without optimization. In phase II, we boosted performance with optimization for evolutionary support vector machine and complement naïve Bayes classifiers. Generalization performance was better for the latter in the independent tests. For evolutionary support vector machine and complement naïve Bayes classifiers, the initial retrieval set was reduced by 46% and 35%, respectively.\n\n\nCONCLUSIONS\nMachine learning classifiers can help identify nonrandomized studies eligible for full-text screening by systematic reviewers. Optimization can markedly improve performance of classifiers. However, generalizability varies with the classifier. The number of citations to screen during a second independent pass through the citations can be substantially reduced."
},
{
"pmid": "20595313",
"title": "A new algorithm for reducing the workload of experts in performing systematic reviews.",
"abstract": "OBJECTIVE\nTo determine whether a factorized version of the complement naïve Bayes (FCNB) classifier can reduce the time spent by experts reviewing journal articles for inclusion in systematic reviews of drug class efficacy for disease treatment.\n\n\nDESIGN\nThe proposed classifier was evaluated on a test collection built from 15 systematic drug class reviews used in previous work. The FCNB classifier was constructed to classify each article as containing high-quality, drug class-specific evidence or not. Weight engineering (WE) techniques were added to reduce underestimation for Medical Subject Headings (MeSH)-based and Publication Type (PubType)-based features. Cross-validation experiments were performed to evaluate the classifier's parameters and performance.\n\n\nMEASUREMENTS\nWork saved over sampling (WSS) at no less than a 95% recall was used as the main measure of performance.\n\n\nRESULTS\nThe minimum workload reduction for a systematic review for one topic, achieved with a FCNB/WE classifier, was 8.5%; the maximum was 62.2% and the average over the 15 topics was 33.5%. This is 15.0% higher than the average workload reduction obtained using a voting perceptron-based automated citation classification system.\n\n\nCONCLUSION\nThe FCNB/WE classifier is simple, easy to implement, and produces significantly better results in reducing the workload than previously achieved. The results support it being a useful algorithm for machine-learning-based automation of systematic reviews of drug class efficacy for disease treatment."
},
{
"pmid": "21084178",
"title": "Exploiting the systematic review protocol for classification of medical abstracts.",
"abstract": "OBJECTIVE\nTo determine whether the automatic classification of documents can be useful in systematic reviews on medical topics, and specifically if the performance of the automatic classification can be enhanced by using the particular protocol of questions employed by the human reviewers to create multiple classifiers.\n\n\nMETHODS AND MATERIALS\nThe test collection is the data used in large-scale systematic review on the topic of the dissemination strategy of health care services for elderly people. From a group of 47,274 abstracts marked by human reviewers to be included in or excluded from further screening, we randomly selected 20,000 as a training set, with the remaining 27,274 becoming a separate test set. As a machine learning algorithm we used complement naïve Bayes. We tested both a global classification method, where a single classifier is trained on instances of abstracts and their classification (i.e., included or excluded), and a novel per-question classification method that trains multiple classifiers for each abstract, exploiting the specific protocol (questions) of the systematic review. For the per-question method we tested four ways of combining the results of the classifiers trained for the individual questions. As evaluation measures, we calculated precision and recall for several settings of the two methods. It is most important not to exclude any relevant documents (i.e., to attain high recall for the class of interest) but also desirable to exclude most of the non-relevant documents (i.e., to attain high precision on the class of interest) in order to reduce human workload.\n\n\nRESULTS\nFor the global method, the highest recall was 67.8% and the highest precision was 37.9%. For the per-question method, the highest recall was 99.2%, and the highest precision was 63%. The human-machine workflow proposed in this paper achieved a recall value of 99.6%, and a precision value of 17.8%.\n\n\nCONCLUSION\nThe per-question method that combines classifiers following the specific protocol of the review leads to better results than the global method in terms of recall. Because neither method is efficient enough to classify abstracts reliably by itself, the technology should be applied in a semi-automatic way, with a human expert still involved. When the workflow includes one human expert and the trained automatic classifier, recall improves to an acceptable level, showing that automatic classification techniques can reduce the human workload in the process of building a systematic review."
},
{
"pmid": "23428470",
"title": "A new iterative method to reduce workload in systematic review process.",
"abstract": "High cost for systematic review of biomedical literature has generated interest in decreasing overall workload. This can be done by applying natural language processing techniques to 'automate' the classification of publications that are potentially relevant for a given question. Existing solutions need training using a specific supervised machine-learning algorithm and feature-extraction system separately for each systematic review. We propose a system that only uses the input and feedback of human reviewers during the course of review. As the reviewers classify articles, the query is modified using a simple relevance feedback algorithm, and the semantically closest document to the query is presented. An evaluation of our approach was performed using a set of 15 published drug systematic reviews. The number of articles that needed to be reviewed was substantially reduced (ranging from 6% to 30% for a 95% recall)."
},
{
"pmid": "22961102",
"title": "A pilot study using machine learning and domain knowledge to facilitate comparative effectiveness review updating.",
"abstract": "BACKGROUND\nComparative effectiveness and systematic reviews require frequent and time-consuming updating.\n\n\nRESULTS\nof earlier screening should be useful in reducing the effort needed to screen relevant articles.\n\n\nMETHODS\nWe collected 16,707 PubMed citation classification decisions from 2 comparative effectiveness reviews: interventions to prevent fractures in low bone density (LBD) and off-label uses of atypical antipsychotic drugs (AAP). We used previously written search strategies to guide extraction of a limited number of explanatory variables pertaining to the intervention, outcome, and\n\n\nSTUDY DESIGN\nWe empirically derived statistical models (based on a sparse generalized linear model with convex penalties [GLMnet] and a gradient boosting machine [GBM]) that predicted article relevance. We evaluated model sensitivity, positive predictive value (PPV), and screening workload reductions using 11,003 PubMed citations retrieved for the LBD and AAP updates. Results. GLMnet-based models performed slightly better than GBM-based models. When attempting to maximize sensitivity for all relevant articles, GLMnet-based models achieved high sensitivities (0.99 and 1.0 for AAP and LBD, respectively) while reducing projected screening by 55.4% and 63.2%. The GLMnet-based model yielded sensitivities of 0.921 and 0.905 and PPVs of 0.185 and 0.102 when predicting articles relevant to the AAP and LBD efficacy/effectiveness analyses, respectively (using a threshold of P ≥ 0.02). GLMnet performed better when identifying adverse effect relevant articles for the AAP review (sensitivity = 0.981) than for the LBD review (0.685). The system currently requires MEDLINE-indexed articles.\n\n\nCONCLUSIONS\nWe evaluated statistical classifiers that used previous classification decisions and explanatory variables derived from MEDLINE indexing terms to predict inclusion decisions. This pilot system reduced workload associated with screening 2 simulated comparative effectiveness review updates by more than 50% with minimal loss of relevant articles."
},
{
"pmid": "27293211",
"title": "Topic detection using paragraph vectors to support active learning in systematic reviews.",
"abstract": "Systematic reviews require expert reviewers to manually screen thousands of citations in order to identify all relevant articles to the review. Active learning text classification is a supervised machine learning approach that has been shown to significantly reduce the manual annotation workload by semi-automating the citation screening process of systematic reviews. In this paper, we present a new topic detection method that induces an informative representation of studies, to improve the performance of the underlying active learner. Our proposed topic detection method uses a neural network-based vector space model to capture semantic similarities between documents. We firstly represent documents within the vector space, and cluster the documents into a predefined number of clusters. The centroids of the clusters are treated as latent topics. We then represent each document as a mixture of latent topics. For evaluation purposes, we employ the active learning strategy using both our novel topic detection method and a baseline topic model (i.e., Latent Dirichlet Allocation). Results obtained demonstrate that our method is able to achieve a high sensitivity of eligible studies and a significantly reduced manual annotation cost when compared to the baseline method. This observation is consistent across two clinical and three public health reviews. The tool introduced in this work is available from https://nactem.ac.uk/pvtopic/."
},
{
"pmid": "16112549",
"title": "Framewise phoneme classification with bidirectional LSTM and other neural network architectures.",
"abstract": "In this paper, we present bidirectional Long Short Term Memory (LSTM) networks, and a modified, full gradient version of the LSTM learning algorithm. We evaluate Bidirectional LSTM (BLSTM) and several other network architectures on the benchmark task of framewise phoneme classification, using the TIMIT database. Our main findings are that bidirectional networks outperform unidirectional ones, and Long Short Term Memory (LSTM) is much faster and also more accurate than both standard Recurrent Neural Nets (RNNs) and time-windowed Multilayer Perceptrons (MLPs). Our results support the view that contextual information is crucial to speech processing, and suggest that BLSTM is an effective architecture with which to exploit it."
},
{
"pmid": "9377276",
"title": "Long short-term memory.",
"abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."
},
{
"pmid": "10877288",
"title": "Comparisons of predictive values of binary medical diagnostic tests for paired designs.",
"abstract": "Positive and negative predictive values of a diagnostic test are key clinically relevant measures of test accuracy. Surprisingly, statistical methods for comparing tests with regard to these parameters have not been available for the most common study design in which each test is applied to each study individual. In this paper, we propose a statistic for comparing the predictive values of two diagnostic tests using this paired study design. The proposed statistic is a score statistic derived from a marginal regression model and bears some relation to McNemar's statistic. As McNemar's statistic can be used to compare sensitivities and specificities of diagnostic tests, parameters that condition on disease status, our statistic can be considered as an analog of McNemar's test for the problem of comparing predictive values, parameters that condition on test outcome. We report on the results of a simulation study designed to examine the properties of this test under a variety of conditions. The method is illustrated with data from a study of methods for diagnosis of coronary artery disease."
},
{
"pmid": "22912343",
"title": "A weighted generalized score statistic for comparison of predictive values of diagnostic tests.",
"abstract": "Positive and negative predictive values are important measures of a medical diagnostic test performance. We consider testing equality of two positive or two negative predictive values within a paired design in which all patients receive two diagnostic tests. The existing statistical tests for testing equality of predictive values are either Wald tests based on the multinomial distribution or the empirical Wald and generalized score tests within the generalized estimating equations (GEE) framework. As presented in the literature, these test statistics have considerably complex formulas without clear intuitive insight. We propose their re-formulations that are mathematically equivalent but algebraically simple and intuitive. As is clearly seen with a new re-formulation we presented, the generalized score statistic does not always reduce to the commonly used score statistic in the independent samples case. To alleviate this, we introduce a weighted generalized score (WGS) test statistic that incorporates empirical covariance matrix with newly proposed weights. This statistic is simple to compute, always reduces to the score statistic in the independent samples situation, and preserves type I error better than the other statistics as demonstrated by simulations. Thus, we believe that the proposed WGS statistic is the preferred statistic for testing equality of two predictive values and for corresponding sample size computations. The new formulas of the Wald statistics may be useful for easy computation of confidence intervals for difference of predictive values. The introduced concepts have potential to lead to development of the WGS test statistic in a general GEE setting."
},
{
"pmid": "28648605",
"title": "A semi-supervised approach using label propagation to support citation screening.",
"abstract": "Citation screening, an integral process within systematic reviews that identifies citations relevant to the underlying research question, is a time-consuming and resource-intensive task. During the screening task, analysts manually assign a label to each citation, to designate whether a citation is eligible for inclusion in the review. Recently, several studies have explored the use of active learning in text classification to reduce the human workload involved in the screening task. However, existing approaches require a significant amount of manually labelled citations for the text classification to achieve a robust performance. In this paper, we propose a semi-supervised method that identifies relevant citations as early as possible in the screening process by exploiting the pairwise similarities between labelled and unlabelled citations to improve the classification performance without additional manual labelling effort. Our approach is based on the hypothesis that similar citations share the same label (e.g., if one citation should be included, then other similar citations should be included also). To calculate the similarity between labelled and unlabelled citations we investigate two different feature spaces, namely a bag-of-words and a spectral embedding based on the bag-of-words. The semi-supervised method propagates the classification codes of manually labelled citations to neighbouring unlabelled citations in the feature space. The automatically labelled citations are combined with the manually labelled citations to form an augmented training set. For evaluation purposes, we apply our method to reviews from clinical and public health. The results show that our semi-supervised method with label propagation achieves statistically significant improvements over two state-of-the-art active learning approaches across both clinical and public health reviews."
},
{
"pmid": "19567792",
"title": "Cross-topic learning for work prioritization in systematic review creation and update.",
"abstract": "OBJECTIVE\nMachine learning systems can be an aid to experts performing systematic reviews (SRs) by automatically ranking journal articles for work-prioritization. This work investigates whether a topic-specific automated document ranking system for SRs can be improved using a hybrid approach, combining topic-specific training data with data from other SR topics.\n\n\nDESIGN\nA test collection was built using annotated reference files from 24 systematic drug class reviews. A support vector machine learning algorithm was evaluated with cross-validation, using seven different fractions of topic-specific training data in combination with samples from the other 23 topics. This approach was compared to both a baseline system, which used only topic-specific training data, and to a system using only the nontopic data sampled from the remaining topics.\n\n\nMEASUREMENTS\nMean area under the receiver-operating curve (AUC) was used as the measure of comparison.\n\n\nRESULTS\nOn average, the hybrid system improved mean AUC over the baseline system by 20%, when topic-specific training data were scarce. The system performed significantly better than the baseline system at all levels of topic-specific training data. In addition, the system performed better than the nontopic system at all but the two smallest fractions of topic specific training data, and no worse than the nontopic system with these smallest amounts of topic specific training data.\n\n\nCONCLUSIONS\nAutomated literature prioritization could be helpful in assisting experts to organize their time when performing systematic reviews. Future work will focus on extending the algorithm to use additional sources of topic-specific data, and on embedding the algorithm in an interactive system available to systematic reviewers during the literature review process."
}
] |
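The record above describes using the occurrences of words tagged in specific PICO contexts as additional features for a relevancy classifier during screening. As an illustration only (not the authors' implementation), the following minimal Python sketch concatenates plain bag-of-words counts with counts from a mocked-up PICO tagger and fits a logistic-regression screening model; the pico_tag function, the keyword lists, and the toy labels are assumptions introduced here, and a real system would use the trained recurrent-network tagger instead.

# Illustrative sketch: combine bag-of-words features with per-class counts of tokens
# assigned to population/intervention/outcome contexts, then fit a relevancy classifier.
import numpy as np
from scipy.sparse import csr_matrix, hstack
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

def pico_tag(abstract):
    """Stand-in for a PICO tagger: count tokens per PICO class using toy keyword lists."""
    text = abstract.lower()
    return [text.count("patients"),   # population
            text.count("therapy"),    # intervention
            text.count("mortality")]  # outcome

abstracts = ["Trial of statin therapy in elderly patients; mortality outcomes were reported.",
             "A survey of hospital staffing levels and administrative costs.",
             "Randomised trial of inhaled therapy in asthmatic patients measuring mortality.",
             "Commentary on publication trends in health informatics journals."]
labels = np.array([1, 0, 1, 0])  # 1 = include, 0 = exclude (toy screening decisions)

bow = CountVectorizer().fit_transform(abstracts)     # plain word counts
pico = csr_matrix([pico_tag(a) for a in abstracts])  # PICO-context counts
X = hstack([bow, pico])                              # combined feature matrix

clf = LogisticRegression().fit(X, labels)            # relevancy (inclusion) classifier
print(clf.predict(X))

In the paper's setting the PICO-context counts come from the neural tagger, and screening performance with and without these extra columns is compared in simulation; this sketch only shows how such features can be appended to a standard text classifier.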
Scientific Reports | 31811244 | PMC6898064 | 10.1038/s41598-019-54707-9 | A Deep Learning Framework for Design and Analysis of Surgical Bioprosthetic Heart Valves | Bioprosthetic heart valves (BHVs) are commonly used as heart valve replacements but they are prone to fatigue failure; estimating their remaining life directly from medical images is difficult. Analyzing the valve performance can provide better guidance for personalized valve design. However, such analyses are often computationally intensive. In this work, we introduce the concept of deep learning (DL) based finite element analysis (DLFEA) to learn the deformation biomechanics of bioprosthetic aortic valves directly from simulations. The proposed DL framework can eliminate the time-consuming biomechanics simulations, while predicting valve deformations with the same fidelity. We present statistical results that demonstrate the high performance of the DLFEA framework and the applicability of the framework to predict bioprosthetic aortic valve deformations. With further development, such a tool can provide fast decision support for designing surgical bioprosthetic aortic valves. Ultimately, this framework could be extended to other BHVs and improve patient care. | Related Works
The application of deep learning to create surrogate models for finite element analysis is a very recent area of research that specifically focuses on bridging physics-based models and data-driven models. Some of the recent advances in this research area are summarized below:
Physics-consistency in deep learning: The overall idea is to merge the ideas of deep learning and physics by using physics-based features while training the deep learning models. For example, researchers have modified the loss functions to ensure that certain physical constraints are satisfied [41–45]. There has also been work on interpreting the predictions of the deep learning model based on physical conditions [22, 23].
Incorporating partial differential equations (PDEs) in deep learning models: The key idea is to use the underlying governing equations, such as Burgers' equation, the Navier-Stokes equations, and the Cahn-Hilliard equation, to compute the residual for each sample. Since modern software systems can define these partial differential equations numerically in terms of automatically differentiable functions, it is easy to minimize these residuals. There are several recent works on learning from partial differential equations [22, 46–51].
Generative vs. distinctive predictions: While there are methods in deep learning for generating and even predicting desired outputs, the underlying physics is often more strict. For example, a given set of physical conditions (such as loads on a well-defined geometry) will result in a deterministic desired output (such as displacements). Modeling this forward problem with a generative model is not consistent with the physics. In contrast, the inverse problem of specifying the displacement of a given geometry and predicting the set of physical conditions is often ill-posed and can be consistently modeled as a generative model. There are several works showing the capability of deep learning methods to act as a surrogate [23, 41, 49, 51]. These surrogates are modeled as distinctive (non-generative) networks, since the physics is deterministic and the problem is well-posed.
In contrast, there are some recent works [22, 42, 46] that deal with stochastic PDEs or with ill-posed problems such as inverse design, which demand the use of a generative model.
In the application area of biomechanics, most of the existing works use simple deep learning or physics-consistent deep learning methods. Specifically, these methods have been applied for modeling the aorta and estimating the stress fields [52–54] or estimating the constitutive model parameters for the aortic wall [55, 56]. For BHVs, while there are optimization-based methods for the design of transcatheter aortic valves [57, 58], machine learning methods have mainly been used for 3D reconstruction of the valve geometry [59]. Further, there are physics-informed deep learning approaches for modeling cardiovascular flows, which incorporate residual minimization of the PDEs using deep learning [60, 61]. However, to the best knowledge of the authors, deep learning has not been used for analyzing the deformation behavior of bioprosthetic valves.
In this work, we leverage the advances in deep learning to model the analysis of BHVs and to accelerate their design. The efforts made in physics-consistent deep learning and in deep learning applications to biomechanics motivate this work. While there are several works in this fast-growing area, there are still some gaps that are yet to be filled. This paper addresses some of those gaps:
Predicting raw values vs. descriptors for biomechanics applications: The current state-of-the-art machine learning works in biomechanics directly predict the stress field. However, during the physics solve, the stresses are not directly obtained. While it is challenging to obtain displacements from stresses, it is straightforward to obtain the stresses and strains from displacements. Even though small variations in the displacements can lead to large oscillations in the strain computations, in our case (see Fig. 3) the maximum principal strain obtained from the predicted deformations is accurate and free of numerical oscillations. Therefore, we attempt to be consistent with the way the physics is modeled, to enable future work on using PDEs for computing residuals. Incorporating PDEs for modeling the complex dynamics involved in bioprosthetic aortic valves is not a trivial extension of the present work, but this contribution is a step towards that end.
Contact prediction in deep learning: Current physics-consistent deep learning models and deep learning applications in biomechanics consider simple cases that do not involve the interaction of multiple objects (or multiple features of the same object). Modeling this interaction is important when contact physics among the objects must be captured, as is necessary in BHVs when predicting the contact between the leaflets. As shown in Fig. 3, our method is able to learn the complex interaction of the three leaflets, which is necessary for modeling the non-smooth behavior of the materials.
Accurate representation of 3D geometries: In general, there is a disconnect between the geometry, the physics domain mesh, and the data representation used for training a physics-consistent deep learning model. Converting one form of data to another is computationally expensive and not accurate. To avoid this, researchers often use a structured mesh, which can be expensive when representing geometries with complex geometric features such as the heart valve. In contrast, we make use of a NURBS-aware convolution operation and isogeometric analysis to alleviate this issue. | [
"27507280",
"29386200",
"19237674",
"20606715",
"28980492",
"29119728",
"29736092",
"25580046",
"25541566",
"28117445",
"29301111",
"26392645",
"10788818",
"24451180",
"28939354",
"31160830"
] | [
{
"pmid": "27507280",
"title": "Biomechanical Behavior of Bioprosthetic Heart Valve Heterograft Tissues: Characterization, Simulation, and Performance.",
"abstract": "The use of replacement heart valves continues to grow due to the increased prevalence of valvular heart disease resulting from an ageing population. Since bioprosthetic heart valves (BHVs) continue to be the preferred replacement valve, there continues to be a strong need to develop better and more reliable BHVs through and improved the general understanding of BHV failure mechanisms. The major technological hurdle for the lifespan of the BHV implant continues to be the durability of the constituent leaflet biomaterials, which if improved can lead to substantial clinical impact. In order to develop improved solutions for BHV biomaterials, it is critical to have a better understanding of the inherent biomechanical behaviors of the leaflet biomaterials, including chemical treatment technologies, the impact of repetitive mechanical loading, and the inherent failure modes. This review seeks to provide a comprehensive overview of these issues, with a focus on developing insight on the mechanisms of BHV function and failure. Additionally, this review provides a detailed summary of the computational biomechanical simulations that have been used to inform and develop a higher level of understanding of BHV tissues and their failure modes. Collectively, this information should serve as a tool not only to infer reliable and dependable prosthesis function, but also to instigate and facilitate the design of future bioprosthetic valves and clinically impact cardiology."
},
{
"pmid": "20606715",
"title": "Role of Computational Simulations in Heart Valve Dynamics and Design of Valvular Prostheses.",
"abstract": "Computational simulations are playing an increasingly important role in enhancing our understanding of the normal human physiological function, etiology of diseased states, surgical and interventional planning, and in the design and evaluation of artificial implants. Researchers are taking advantage of computational simulations to speed up the initial design of implantable devices before a prototype is developed and hence able to reduce animal experimentation for the functional evaluation of the devices under development. A review of the reported studies to date relevant to the simulation of the native and prosthetic heart valve dynamics is the subject of the present paper. Potential future directions toward multi-scale simulation studies for our further understanding of the physiology and pathophysiology of heart valve dynamics and valvular implants are also discussed."
},
{
"pmid": "28980492",
"title": "Computational methods for the aortic heart valve and its replacements.",
"abstract": "Replacement with a prosthetic device remains a major treatment option for the patients suffering from heart valve disease, with prevalence growing resulting from an ageing population. While the most popular replacement heart valve continues to be the bioprosthetic heart valve (BHV), its durability remains limited. There is thus a continued need to develop a general understanding of the underlying mechanisms limiting BHV durability to facilitate development of a more durable prosthesis. In this regard, computational models can play a pivotal role as they can evaluate our understanding of the underlying mechanisms and be used to optimize designs that may not always be intuitive. Areas covered: This review covers recent progress in computational models for the simulation of BHV, with a focus on aortic valve (AV) replacement. Recent contributions in valve geometry, leaflet material models, novel methods for numerical simulation, and applications to BHV optimization are discussed. This information should serve not only to infer reliable and dependable BHV function, but also to establish guidelines and insight for the design of future prosthetic valves by analyzing the influence of design, hemodynamics and tissue mechanics. Expert commentary: The paradigm of predictive modeling of heart valve prosthesis are becoming a reality which can simultaneously improve clinical outcomes and reduce costs. It can also lead to patient-specific valve design."
},
{
"pmid": "29119728",
"title": "A framework for designing patient-specific bioprosthetic heart valves using immersogeometric fluid-structure interaction analysis.",
"abstract": "Numerous studies have suggested that medical image derived computational mechanics models could be developed to reduce mortality and morbidity due to cardiovascular diseases by allowing for patient-specific surgical planning and customized medical device design. In this work, we present a novel framework for designing prosthetic heart valves using a parametric design platform and immersogeometric fluid-structure interaction (FSI) analysis. We parameterize the leaflet geometry using several key design parameters. This allows for generating various perturbations of the leaflet design for the patient-specific aortic root reconstructed from the medical image data. Each design is analyzed using our hybrid arbitrary Lagrangian-Eulerian/immersogeometric FSI methodology, which allows us to efficiently simulate the coupling of the deforming aortic root, the parametrically designed prosthetic valves, and the surrounding blood flow under physiological conditions. A parametric study is performed to investigate the influence of the geometry on heart valve performance, indicated by the effective orifice area and the coaptation area. Finally, the FSI simulation result of a design that balances effective orifice area and coaptation area reasonably well is compared with patient-specific phase contrast magnetic resonance imaging data to demonstrate the qualitative similarity of the flow patterns in the ascending aorta."
},
{
"pmid": "29736092",
"title": "A contact formulation based on a volumetric potential: Application to isogeometric simulations of atrioventricular valves.",
"abstract": "This work formulates frictionless contact between solid bodies in terms of a repulsive potential energy term and illustrates how numerical integration of the resulting forces is computationally similar to the \"pinball algorithm\" proposed and studied by Belytschko and collaborators in the 1990s. We thereby arrive at a numerical approach that has both the theoretical advantages of a potential-based formulation and the algorithmic simplicity, computational efficiency, and geometrical versatility of pinball contact. The singular nature of the contact potential requires a specialized nonlinear solver and an adaptive time stepping scheme to ensure reliable convergence of implicit dynamic calculations. We illustrate the effectiveness of this numerical method by simulating several benchmark problems and the structural mechanics of the right atrioventricular (tricuspid) heart valve. Atrioventricular valve closure involves contact between every combination of shell surfaces, edges of shells, and cables, but our formulation handles all contact scenarios in a unified manner. We take advantage of this versatility to demonstrate the effects of chordal rupture on tricuspid valve coaptation behavior."
},
{
"pmid": "25580046",
"title": "Fluid-structure interaction analysis of bioprosthetic heart valves: Significance of arterial wall deformation.",
"abstract": "We propose a framework that combines variational immersed-boundary and arbitrary Lagrangian-Eulerian (ALE) methods for fluid-structure interaction (FSI) simulation of a bioprosthetic heart valve implanted in an artery that is allowed to deform in the model. We find that the variational immersed-boundary method for FSI remains robust and effective for heart valve analysis when the background fluid mesh undergoes deformations corresponding to the expansion and contraction of the elastic artery. Furthermore, the computations presented in this work show that the arterial wall deformation contributes significantly to the realism of the simulation results, leading to flow rates and valve motions that more closely resemble those observed in practice."
},
{
"pmid": "25541566",
"title": "An immersogeometric variational framework for fluid-structure interaction: application to bioprosthetic heart valves.",
"abstract": "In this paper, we develop a geometrically flexible technique for computational fluid-structure interaction (FSI). The motivating application is the simulation of tri-leaflet bioprosthetic heart valve function over the complete cardiac cycle. Due to the complex motion of the heart valve leaflets, the fluid domain undergoes large deformations, including changes of topology. The proposed method directly analyzes a spline-based surface representation of the structure by immersing it into a non-boundary-fitted discretization of the surrounding fluid domain. This places our method within an emerging class of computational techniques that aim to capture geometry on non-boundary-fitted analysis meshes. We introduce the term \"immersogeometric analysis\" to identify this paradigm. The framework starts with an augmented Lagrangian formulation for FSI that enforces kinematic constraints with a combination of Lagrange multipliers and penalty forces. For immersed volumetric objects, we formally eliminate the multiplier field by substituting a fluid-structure interface traction, arriving at Nitsche's method for enforcing Dirichlet boundary conditions on object surfaces. For immersed thin shell structures modeled geometrically as surfaces, the tractions from opposite sides cancel due to the continuity of the background fluid solution space, leaving a penalty method. Application to a bioprosthetic heart valve, where there is a large pressure jump across the leaflets, reveals shortcomings of the penalty approach. To counteract steep pressure gradients through the structure without the conditioning problems that accompany strong penalty forces, we resurrect the Lagrange multiplier field. Further, since the fluid discretization is not tailored to the structure geometry, there is a significant error in the approximation of pressure discontinuities across the shell. This error becomes especially troublesome in residual-based stabilized methods for incompressible flow, leading to problematic compressibility at practical levels of refinement. We modify existing stabilized methods to improve performance. To evaluate the accuracy of the proposed methods, we test them on benchmark problems and compare the results with those of established boundary-fitted techniques. Finally, we simulate the coupling of the bioprosthetic heart valve and the surrounding blood flow under physiological conditions, demonstrating the effectiveness of the proposed techniques in practical computations."
},
{
"pmid": "28117445",
"title": "Dermatologist-level classification of skin cancer with deep neural networks.",
"abstract": "Skin cancer, the most common human malignancy, is primarily diagnosed visually, beginning with an initial clinical screening and followed potentially by dermoscopic analysis, a biopsy and histopathological examination. Automated classification of skin lesions using images is a challenging task owing to the fine-grained variability in the appearance of skin lesions. Deep convolutional neural networks (CNNs) show potential for general and highly variable tasks across many fine-grained object categories. Here we demonstrate classification of skin lesions using a single CNN, trained end-to-end from images directly, using only pixels and disease labels as inputs. We train a CNN using a dataset of 129,450 clinical images-two orders of magnitude larger than previous datasets-consisting of 2,032 different diseases. We test its performance against 21 board-certified dermatologists on biopsy-proven clinical images with two critical binary classification use cases: keratinocyte carcinomas versus benign seborrheic keratoses; and malignant melanomas versus benign nevi. The first case represents the identification of the most common cancers, the second represents the identification of the deadliest skin cancer. The CNN achieves performance on par with all tested experts across both tasks, demonstrating an artificial intelligence capable of classifying skin cancer with a level of competence comparable to dermatologists. Outfitted with deep neural networks, mobile devices can potentially extend the reach of dermatologists outside of the clinic. It is projected that 6.3 billion smartphone subscriptions will exist by the year 2021 (ref. 13) and can therefore potentially provide low-cost universal access to vital diagnostic care."
},
{
"pmid": "29301111",
"title": "A deep learning framework for causal shape transformation.",
"abstract": "Recurrent neural network (RNN) and Long Short-term Memory (LSTM) networks are the common go-to architecture for exploiting sequential information where the output is dependent on a sequence of inputs. However, in most considered problems, the dependencies typically lie in the latent domain which may not be suitable for applications involving the prediction of a step-wise transformation sequence that is dependent on the previous states only in the visible domain with a known terminal state. We propose a hybrid architecture of convolution neural networks (CNN) and stacked autoencoders (SAE) to learn a sequence of causal actions that nonlinearly transform an input visual pattern or distribution into a target visual pattern or distribution with the same support and demonstrated its practicality in a real-world engineering problem involving the physics of fluids. We solved a high-dimensional one-to-many inverse mapping problem concerning microfluidic flow sculpting, where the use of deep learning methods as an inverse map is very seldom explored. This work serves as a fruitful use-case to applied scientists and engineers in how deep learning can be beneficial as a solution for high-dimensional physical problems, and potentially opening doors to impactful advance in fields such as material sciences and medical biology where multistep topological transformations is a key element."
},
{
"pmid": "26392645",
"title": "Dynamic and fluid-structure interaction simulations of bioprosthetic heart valves using parametric design with T-splines and Fung-type material models.",
"abstract": "This paper builds on a recently developed immersogeometric fluid-structure interaction (FSI) methodology for bioprosthetic heart valve (BHV) modeling and simulation. It enhances the proposed framework in the areas of geometry design and constitutive modeling. With these enhancements, BHV FSI simulations may be performed with greater levels of automation, robustness and physical realism. In addition, the paper presents a comparison between FSI analysis and standalone structural dynamics simulation driven by prescribed transvalvular pressure, the latter being a more common modeling choice for this class of problems. The FSI computation achieved better physiological realism in predicting the valve leaflet deformation than its standalone structural dynamics counterpart."
},
{
"pmid": "10788818",
"title": "Body surface area as a predictor of aortic and pulmonary valve diameter.",
"abstract": "BACKGROUND\nPredicting cardiac valve size from noncardiac anatomic measurements would benefit pediatric cardiologists, adult cardiologists, and cardiac surgeons in a number of decision-making situations. Previous studies correlating valve size with body size have been generated with the use of fixed autopsy specimens, angiography, and echocardiography, but primarily in the young. This study examines the relation of body surface area to measurements of the left ventricular-aortic junction (aortic valve anulus diameter) and the right ventricular-pulmonary trunk junction (pulmonary valve anulus diameter) in 6801 hearts across a wide spectrum of ages.\n\n\nMETHODS\nFrom June 1985 to October 1998, cardiac valves from 6801 donated hearts were analyzed morphologically. Donor age was newborn to 59 years (mean 31 +/- 17 years; median 32 years). Calculated body surface areas ranged from 0.18 to 3.55 m(2). Aortic (n = 4636) and pulmonary valve diameters (n = 5480) were measured from enucleated valves suitable for allograft transplantation. Mean valve sizes were computed for ranges in body surface area in 0.1-m(2) increments.\n\n\nRESULTS\nFor adult men (age >/= 17 years), the mean aortic valve diameter was 23.1 +/- 2.0 mm (n = 2214) and the mean pulmonary valve diameter was 26.2 +/- 2.3 mm (n = 2589). For adult women, the mean aortic valve diameter was 21.0 +/- 1.8 mm (n = 1156) and the mean pulmonary valve diameter was 23.9 +/- 2.2 mm (n = 1408). The mean indexed aortic valve area was 2.02 +/- 0.52 cm(2)/m(2) and the pulmonary valve area 2.65 +/- 0.52 cm(2)/m(2). Between 82% and 85% of the variability was explained by the size of the patient. Regression equations were developed both overall and separately for men and women, although the additional contribution of sex above that of body size was less than 1%.\n\n\nCONCLUSIONS\nAortic and pulmonary valve diameters are closely related to body size. Thus, body surface area, when used in conjunction with other clinically accepted evaluations, is a useful tool for estimating normal aortic and pulmonary valve size."
},
{
"pmid": "24451180",
"title": "Echocardiographic reference ranges for normal cardiac chamber size: results from the NORRE study.",
"abstract": "AIMS\nAvailability of normative reference values for cardiac chamber quantitation is a prerequisite for accurate clinical application of echocardiography. In this study, we report normal reference ranges for cardiac chambers size obtained in a large group of healthy volunteers accounting for gender and age. Echocardiographic data were acquired using state-of-the-art cardiac ultrasound equipment following chamber quantitation protocols approved by the European Association of Cardiovascular Imaging.\n\n\nMETHODS\nA total of 734 (mean age: 45.8 ± 13.3 years) healthy volunteers (320 men and 414 women) were enrolled at 22 collaborating institutions of the Normal Reference Ranges for Echocardiography (NORRE) study. A comprehensive echocardiographic examination was performed on all subjects following pre-defined protocols. There were no gender differences in age or cholesterol levels. Compared with men, women had significantly smaller body surface areas, and lower blood pressure. Quality of echocardiographic data sets was good to excellent in the majority of patients. Upper and lower reference limits were higher in men than in women. The reference values varied with age. These age-related changes persisted for most parameters after normalization for the body surface area.\n\n\nCONCLUSION\nThe NORRE study provides useful two-dimensional echocardiographic reference ranges for cardiac chamber quantification. These data highlight the need for body size normalization that should be performed together with age-and gender-specific assessment for the most echocardiographic parameters."
},
{
"pmid": "28939354",
"title": "A deep learning approach to estimate chemically-treated collagenous tissue nonlinear anisotropic stress-strain responses from microscopy images.",
"abstract": "Biological collagenous tissues comprised of networks of collagen fibers are suitable for a broad spectrum of medical applications owing to their attractive mechanical properties. In this study, we developed a noninvasive approach to estimate collagenous tissue elastic properties directly from microscopy images using Machine Learning (ML) techniques. Glutaraldehyde-treated bovine pericardium (GLBP) tissue, widely used in the fabrication of bioprosthetic heart valves and vascular patches, was chosen to develop a representative application. A Deep Learning model was designed and trained to process second harmonic generation (SHG) images of collagen networks in GLBP tissue samples, and directly predict the tissue elastic mechanical properties. The trained model is capable of identifying the overall tissue stiffness with a classification accuracy of 84%, and predicting the nonlinear anisotropic stress-strain curves with average regression errors of 0.021 and 0.031. Thus, this study demonstrates the feasibility and great potential of using the Deep Learning approach for fast and noninvasive assessment of collagenous tissue elastic properties from microstructural images.\n\n\nSTATEMENT OF SIGNIFICANCE\nIn this study, we developed, to our best knowledge, the first Deep Learning-based approach to estimate the elastic properties of collagenous tissues directly from noninvasive second harmonic generation images. The success of this study holds promise for the use of Machine Learning techniques to noninvasively and efficiently estimate the mechanical properties of many structure-based biological materials, and it also enables many potential applications such as serving as a quality control tool to select tissue for the manufacturing of medical devices (e.g. bioprosthetic heart valves)."
},
{
"pmid": "31160830",
"title": "Estimation of in vivo constitutive parameters of the aortic wall using a machine learning approach.",
"abstract": "The patient-specific biomechanical analysis of the aorta requires the quantification of the in vivo mechanical properties of individual patients. Current inverse approaches have attempted to estimate the nonlinear, anisotropic material parameters from in vivo image data using certain optimization schemes. However, since such inverse methods are dependent on iterative nonlinear optimization, these methods are highly computation-intensive. A potential paradigm-changing solution to the bottleneck associated with patient-specific computational modeling is to incorporate machine learning (ML) algorithms to expedite the procedure of in vivo material parameter identification. In this paper, we developed an ML-based approach to estimate the material parameters from three-dimensional aorta geometries obtained at two different blood pressure (i.e., systolic and diastolic) levels. The nonlinear relationship between the two loaded shapes and the constitutive parameters are established by an ML-model, which was trained and tested using finite element (FE) simulation datasets. Cross-validations were used to adjust the ML-model structure on a training/validation dataset. The accuracy of the ML-model was examined using a testing dataset."
}
] |
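The final reference entry above outlines an ML surrogate that maps FE-simulated aortic geometries, loaded at two pressure levels, to constitutive parameters, with cross-validation used for model selection. A minimal sketch of that kind of workflow is given below; the array sizes, the synthetic stand-in data, and the choice of a scikit-learn MLPRegressor are illustrative assumptions, not details taken from the cited study.

```python
# Hypothetical sketch: a surrogate regression from loaded-shape descriptors to
# constitutive parameters, trained on FE-simulation data (all names, sizes and
# the random stand-in data are assumptions for illustration only).
import numpy as np
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_runs, n_shape_features, n_params = 500, 60, 2    # assumed dataset dimensions

# Stand-ins: descriptors of the diastolic + systolic geometries from each FE
# run, and the material parameters that generated them.
X = rng.normal(size=(n_runs, n_shape_features))
y = rng.uniform(0.1, 2.0, size=(n_runs, n_params))

surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0),
)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
cv_scores = cross_val_score(surrogate, X_train, y_train, cv=5)  # model selection
surrogate.fit(X_train, y_train)
# With random stand-in data the scores are meaningless; only the workflow
# (cross-validated selection, then a held-out accuracy check) is illustrated.
print(f"mean CV R^2: {cv_scores.mean():.3f}, held-out R^2: {surrogate.score(X_test, y_test):.3f}")
```

In practice, X would hold shape descriptors extracted from the registered diastolic and systolic geometries of each FE run, and the trained surrogate replaces the computationally intensive iterative inverse-FE optimization at prediction time.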
Frontiers in Neuroscience | 31849587 | PMC6901997 | 10.3389/fnins.2019.01275 | A Parallel Multiscale Filter Bank Convolutional Neural Networks for Motor Imagery EEG Classification | Objective Electroencephalogram (EEG) based brain–computer interfaces (BCI) in motor imagery (MI) have developed rapidly in recent years. A reliable feature extraction method is essential because of a low signal-to-noise ratio (SNR) and time-dependent covariates of EEG signals. Because of efficient application in various fields, deep learning has been adopted in EEG signal processing and has obtained competitive results compared with the traditional methods. However, designing and training an end-to-end network to fully extract potential features from EEG signals remains a challenge in MI. Approach In this study, we propose a parallel multiscale filter bank convolutional neural network (MSFBCNN) for MI classification. We introduce a layered end-to-end network structure, in which a feature-extraction network is used to extract temporal and spatial features. To enhance the transfer learning ability, we propose a network initialization and fine-tuning strategy to train an individual model for inter-subject classification on small datasets. We compare our MSFBCNN with the state-of-the-art approaches on open datasets. Results The proposed method has a higher accuracy than the baselines in intra-subject classification. In addition, the transfer learning experiments indicate that our network can build an individual model and obtain acceptable results in inter-subject classification. The results suggest that the proposed network has superior performance, robustness, and transfer learning ability. | Related Work According to the input styles of the networks, DL-based MI methods fall into two types: the feature input network and the raw signal input network. In the former input style, MI classification is accomplished in two stages. First, EEG signals are transformed into feature vectors by traditional feature-extraction approaches (such as spectrograms, wavelets, and spatial filtering). Next, these feature vectors are fed into the networks, and DL is adopted to train a model and classify the features. Kumar et al. (2017) used multilayer perceptrons (MLPs) to replace the traditional support vector machine classifier. Sakhavi et al. (2015) combined a CNN and an MLP as a new classifier to deal with multiclass MI-EEG tasks. To improve network performance, transfer learning and knowledge distillation were explored, in which a CNN was used as a specific 2D-input classifier (Sakhavi and Guan, 2017). Yang et al. (2015) adopted augmented-CSP and a CNN to discriminate MI-EEG signals, surpassing FBCSP with a novel feature map selection scheme. Tabar and Halici (2017) fed time-frequency features generated by the short-time Fourier transform into a CNN with stacked autoencoders and obtained a competitive accuracy. Bashivan et al. (2015) transformed the temporal EEG into topology-preserving multispectral images and trained a deep recurrent-convolutional network. Zhu et al. (2019) proposed a separated channel convolutional network to encode the multi-channel data; the encoded features are then concatenated and fed into a recognition network to perform the final MI recognition. The other input style feeds time-series EEG signals, i.e., the C (channel) × T (time point) matrices, into deep neural networks directly. Therefore, it is an end-to-end approach.
In such networks, the steps of feature extraction and classification are combined in a single end-to-end model with only minimal (or no) preprocessing. The DL model has to learn both an optimal intermediate representation and a classifier for EEG signals in a supervised manner. Several end-to-end models have been proposed and have obtained competitive performance on different tasks. As a lightweight network, EEGNet uses relatively few parameters to achieve considerable performance on various EEG classification tasks. Inspired by FBCSP, Schirrmeister et al. (2017) proposed a shallow CNN and a deeper CNN; both yielded higher accuracies than FBCSP. Dose et al. (2018) used a simplified CNN model to validate that a DL model was effective in transfer learning tasks on recordings from 109 subjects (Goldberger et al., 2000) without any preprocessing. Both input styles have their advantages and disadvantages. The two-stage approach is interpretable and robust, properties guaranteed by the handcrafted feature-extraction algorithms; it is therefore suitable for small training sets and outperforms the traditional methods. However, the feature input network loses some potentially useful information during handcrafted feature extraction, which limits its performance. In contrast, end-to-end models can learn useful features automatically from raw EEG data and achieve satisfactory results, but with small training datasets it is hard for end-to-end methods to train a satisfactory model. As the literature suggests, designing a feasible end-to-end deep neural architecture for MI-EEG classification remains a challenge. In this paper, to overcome the problem of an insufficient number of training samples and to improve the robustness of the network, we focus on the end-to-end style and propose a layered end-to-end CNN structure for MI-EEG signal classification. It is well known that an insufficient number of training samples tends to cause overfitting in large networks. A common solution is to reduce the scale of the network by dropout, network pruning, etc. These tricks work well for signals with salient features, such as images and videos: for such signals the network may struggle to learn the most general discriminative features from a small training set, so one can sacrifice network capacity to increase generality and robustness. However, extracting cerebral activity features from low-SNR EEG signals is very challenging, and a crude reduction of network connections may decrease the feature extraction capability of the network. Therefore, we propose a layered network structure that accomplishes the feature extraction task and the feature reduction task separately. For the feature extraction layer, we propose an MSFBCNN structure to extract sufficient potential features. For the feature reduction layer, we adopt a set of non-linear operators followed by a dropout strategy. In this way, the network is expected to be simplified without loss of feature extraction capacity, which fits the characteristics of EEG signals. | [
"17409474",
"20303409",
"25852511",
"23633412",
"31341093",
"18621580",
"12899258",
"18701380",
"18848844",
"29243122",
"18310804",
"26017442",
"17409472",
"16792278",
"27966546",
"25107852",
"24110155",
"22438708",
"11204034",
"28782865",
"16317224",
"25462637",
"17015237",
"22431526",
"27900952",
"30472579",
"29932424",
"17281471",
"15188883"
] | [
{
"pmid": "17409474",
"title": "A survey of signal processing algorithms in brain-computer interfaces based on electrical brain signals.",
"abstract": "Brain-computer interfaces (BCIs) aim at providing a non-muscular channel for sending commands to the external world using the electroencephalographic activity or other electrophysiological measures of the brain function. An essential factor in the successful operation of BCI systems is the methods used to process the brain signals. In the BCI literature, however, there is no comprehensive review of the signal processing techniques used. This work presents the first such comprehensive survey of all BCI designs using electrical signal recordings published prior to January 2006. Detailed results from this survey are presented and discussed. The following key research questions are addressed: (1) what are the key signal processing components of a BCI, (2) what signal processing algorithms have been used in BCIs and (3) which signal processing techniques have received more attention?"
},
{
"pmid": "20303409",
"title": "Neurophysiological predictor of SMR-based BCI performance.",
"abstract": "Brain-computer interfaces (BCIs) allow a user to control a computer application by brain activity as measured, e.g., by electroencephalography (EEG). After about 30years of BCI research, the success of control that is achieved by means of a BCI system still greatly varies between subjects. For about 20% of potential users the obtained accuracy does not reach the level criterion, meaning that BCI control is not accurate enough to control an application. The determination of factors that may serve to predict BCI performance, and the development of methods to quantify a predictor value from psychological and/or physiological data serve two purposes: a better understanding of the 'BCI-illiteracy phenomenon', and avoidance of a costly and eventually frustrating training procedure for participants who might not obtain BCI control. Furthermore, such predictors may lead to approaches to antagonize BCI illiteracy. Here, we propose a neurophysiological predictor of BCI performance which can be determined from a two minute recording of a 'relax with eyes open' condition using two Laplacian EEG channels. A correlation of r=0.53 between the proposed predictor and BCI feedback performance was obtained on a large data base with N=80 BCI-naive participants in their first session with the Berlin brain-computer interface (BBCI) system which operates on modulations of sensory motor rhythms (SMRs)."
},
{
"pmid": "25852511",
"title": "Altered baseline brain activity in experts measured by amplitude of low frequency fluctuations (ALFF): a resting state fMRI study using expertise model of acupuncturists.",
"abstract": "It is well established that expertise modulates evoked brain activity in response to specific stimuli. Recently, researchers have begun to investigate how expertise influences the resting brain. Among these studies, most focused on the connectivity features within/across regions, i.e., connectivity patterns/strength. However, little concern has been given to a more fundamental issue whether or not expertise modulates baseline brain activity. We investigated this question using amplitude of low-frequency (<0.08 Hz) fluctuation (ALFF) as the metric of brain activity and a novel expertise model, i.e., acupuncturists, due to their robust proficiency in tactile perception and emotion regulation. After the psychophysical and behavioral expertise screening procedure, 23 acupuncturists and 23 matched non-acupuncturists (NA) were enrolled. Our results explicated higher ALFF for acupuncturists in the left ventral medial prefrontal cortex (VMPFC) and the contralateral hand representation of the primary somatosensory area (SI) (corrected for multiple comparisons). Additionally, ALFF of VMPFC was negatively correlated with the outcomes of the emotion regulation task (corrected for multiple comparisons). We suggest that our study may reveal a novel connection between the neuroplasticity mechanism and resting state activity, which would upgrade our understanding of the central mechanism of learning. Furthermore, by showing that expertise can affect the baseline brain activity as indicated by ALFF, our findings may have profound implication for functional neuroimaging studies especially those involving expert models, in that difference in baseline brain activity may either smear the spatial pattern of activations for task data or introduce biased results into connectivity-based analysis for resting data."
},
{
"pmid": "23633412",
"title": "Expertise modulates local regional homogeneity of spontaneous brain activity in the resting brain: an fMRI study using the model of skilled acupuncturists.",
"abstract": "Studies on training/expertise-related effects on human brain in context of neuroplasticity have revealed that plastic changes modulate not only task activations but also patterns and strength of internetworks and intranetworks functional connectivity in the resting state. Much has known about plastic changes in resting state on global level; however, how training/expertise-related effect affects patterns of local spontaneous activity in resting brain remains elusive. We investigated the homogeneity of local blood oxygen level-dependent fluctuations in the resting state using a regional homogeneity (ReHo) analysis among 16 acupuncturists and 16 matched nonacupuncturists (NA). To prove acupuncturists' expertise, we used a series of psychophysical tests. Our results demonstrated that, acupuncturists significantly outperformed NA in tactile-motor and emotional regulation domain and the acupuncturist group showed increased coherence in local BOLD signal fluctuations in the left primary motor cortex (MI), the left primary somatosensory cortex (SI) and the left ventral medial prefrontal cortex/orbitofrontal cortex (VMPFC/OFC). Regression analysis displayed that, in the acupuncturists group, ReHo of VMPFC/OFC could predict behavioral outcomes, evidenced by negative correlation between unpleasantness ratings and ReHo of VMPFC/OFC and ReHo of SI and MI positively correlated with the duration of acupuncture practice. We suggest that expertise could modulate patterns of local resting state activity by increasing regional clustering strength, which is likely to contribute to advanced local information processing efficiency. Our study completes the understanding of neuroplasticity changes by adding the evidence of local resting state activity alterations, which is helpful for elucidating in what manner training effect extends beyond resting state."
},
{
"pmid": "31341093",
"title": "A novel hybrid deep learning scheme for four-class motor imagery classification.",
"abstract": "OBJECTIVE\nLearning the structures and unknown correlations of a motor imagery electroencephalogram (MI-EEG) signal is important for its classification. It is also a major challenge to obtain good classification accuracy from the increased number of classes and increased variability from different people. In this study, a four-class MI task is investigated.\n\n\nAPPROACH\nAn end-to-end novel hybrid deep learning scheme is developed to decode the MI task from EEG data. The proposed algorithm consists of two parts: a. A one-versus-rest filter bank common spatial pattern is adopted to preprocess and pre-extract the features of the four-class MI signal. b. A hybrid deep network based on the convolutional neural network and long-term short-term memory network is proposed to extract and learn the spatial and temporal features of the MI signal simultaneously.\n\n\nMAIN RESULTS\nThe main contribution of this paper is to propose a hybrid deep network framework to improve the classification accuracy of the four-class MI-EEG signal. The hybrid deep network is a subject-independent shared neural network, which means it can be trained by using the training data from all subjects to form one model.\n\n\nSIGNIFICANCE\nThe classification performance obtained by the proposed algorithm on brain-computer interface (BCI) competition IV dataset 2a in terms of accuracy is 83% and Cohen's kappa value is 0.80. Finally, the shared hybrid deep network is evaluated by every subject respectively, and the experimental results illustrate that the shared neural network has satisfactory accuracy. Thus, the proposed algorithm could be of great interest for real-life BCIs."
},
{
"pmid": "18621580",
"title": "A brain-actuated wheelchair: asynchronous and non-invasive Brain-computer interfaces for continuous control of robots.",
"abstract": "OBJECTIVE\nTo assess the feasibility and robustness of an asynchronous and non-invasive EEG-based Brain-Computer Interface (BCI) for continuous mental control of a wheelchair.\n\n\nMETHODS\nIn experiment 1 two subjects were asked to mentally drive both a real and a simulated wheelchair from a starting point to a goal along a pre-specified path. Here we only report experiments with the simulated wheelchair for which we have extensive data in a complex environment that allows a sound analysis. Each subject participated in five experimental sessions, each consisting of 10 trials. The time elapsed between two consecutive experimental sessions was variable (from 1h to 2months) to assess the system robustness over time. The pre-specified path was divided into seven stretches to assess the system robustness in different contexts. To further assess the performance of the brain-actuated wheelchair, subject 1 participated in a second experiment consisting of 10 trials where he was asked to drive the simulated wheelchair following 10 different complex and random paths never tried before.\n\n\nRESULTS\nIn experiment 1 the two subjects were able to reach 100% (subject 1) and 80% (subject 2) of the final goals along the pre-specified trajectory in their best sessions. Different performances were obtained over time and path stretches, what indicates that performance is time and context dependent. In experiment 2, subject 1 was able to reach the final goal in 80% of the trials.\n\n\nCONCLUSIONS\nThe results show that subjects can rapidly master our asynchronous EEG-based BCI to control a wheelchair. Also, they can autonomously operate the BCI over long periods of time without the need for adaptive algorithms externally tuned by a human operator to minimize the impact of EEG non-stationarities. This is possible because of two key components: first, the inclusion of a shared control system between the BCI system and the intelligent simulated wheelchair; second, the selection of stable user-specific EEG features that maximize the separability between the mental tasks.\n\n\nSIGNIFICANCE\nThese results show the feasibility of continuously controlling complex robotics devices using an asynchronous and non-invasive BCI."
},
{
"pmid": "12899258",
"title": "How many people are able to operate an EEG-based brain-computer interface (BCI)?",
"abstract": "Ninety-nine healthy people participated in a brain-computer interface (BCI) field study conducted at an exposition held in Graz, Austria. Each subject spent 20-30 min on a two-session BCI investigation. The first session consisted of 40 trials conducted without feedback. Then, a subject-specific classifier was set up to provide the subject with feedback, and the second session--40 trials in which the subject had to control a horizontal bar on a computer screen--was conducted. Subjects were instructed to imagine a right-hand movement or a foot movement after a cue stimulus depending on the direction of an arrow. Bipolar electrodes were mounted over the right-hand representation area and over the foot representation area. Classification results achieved with 1) an adaptive autoregressive model (39 subjects) and 2) band power estimation (60 subjects) are presented. Roughly 93% of the subjects were able to achieve classification accuracy above 60% after two sessions of training."
},
{
"pmid": "18701380",
"title": "Comparative analysis of spectral approaches to feature extraction for EEG-based motor imagery classification.",
"abstract": "The quantification of the spectral content of electroencephalogram (EEG) recordings has a substantial role in clinical and scientific applications. It is of particular relevance in the analysis of event-related brain oscillatory responses. This work is focused on the identification and quantification of relevant frequency patterns in motor imagery (MI) related EEGs utilized for brain-computer interface (BCI) purposes. The main objective of the paper is to perform comparative analysis of different approaches to spectral signal representation such as power spectral density (PSD) techniques, atomic decompositions, time-frequency (t-f) energy distributions, continuous and discrete wavelet approaches, from which band power features can be extracted and used in the framework of MI classification. The emphasis is on identifying discriminative properties of the feature sets representing EEG trials recorded during imagination of either left- or right-hand movement. Feature separability is quantified in the offline study using the classification accuracy (CA) rate obtained with linear and nonlinear classifiers. PSD approaches demonstrate the most consistent robustness and effectiveness in extracting the distinctive spectral patterns for accurately discriminating between left and right MI induced EEGs. This observation is based on an analysis of data recorded from eleven subjects over two sessions of BCI experiments. In addition, generalization capabilities of the classifiers reflected in their intersession performance are discussed in the paper."
},
{
"pmid": "18848844",
"title": "EEG-based motor imagery analysis using weighted wavelet transform features.",
"abstract": "In this study, an electroencephalogram (EEG) analysis system for single-trial classification of motor imagery (MI) data is proposed. Feature extraction in brain-computer interface (BCI) work is an important task that significantly affects the success of brain signal classification. The continuous wavelet transform (CWT) is applied together with Student's two-sample t-statistics for 2D time-scale feature extraction, where features are extracted from EEG signals recorded from subjects performing left and right MI. First, we utilize the CWT to construct a 2D time-scale feature, which yields a highly redundant representation of EEG signals in the time-frequency domain, from which we can obtain precise localization of event-related brain desynchronization and synchronization (ERD and ERS) components. We then weight the 2D time-scale feature with Student's two-sample t-statistics, representing a time-scale plot of discriminant information between left and right MI. These important characteristics, including precise localization and significant discriminative ability, substantially enhance the classification of mental tasks. Finally, a correlation coefficient is used to classify the MI data. Due to its simplicity, it will enable the performance of our proposed method to be clearly demonstrated. Compared to a conventional 2D time-frequency feature and three well-known time-frequency approaches, the experimental results show that the proposed method provides reliable 2D time-scale features for BCI classification."
},
{
"pmid": "29243122",
"title": "Aberrant baseline brain activity in psychogenic erectile dysfunction patients: a resting state fMRI study.",
"abstract": "Recent neuroimaging studies have elucidated many interesting and promising findings on sexuality regarding the neural underpinnings of both normal and abnormal sexual processes. Psychogenic erectile dysfunction (pED) consists of a major part of male sexual dysfunction in China, but the understanding of the central mechanism of pED is still in its infancy. It is commonly appreciated that pED is a functional disorder, which can be attributed predominantly or exclusively to psychological factors, such as anxiety, depression, loss of self-esteem, and psychosocial stresses. Most previous studies probed the central response in the brain of pED patients using sexual-related stimuli. However, little concern has been given to a more fundamental issue whether the baseline brain activity is altered in pED or not. With rs-fMRI data, the current study aimed to explain the central mechanism behind pED by investigating the alterations in baseline brain activity in patients with pED, as indexed by the amplitude of low-frequency (0.01-0.08 Hz) fluctuation (ALFF). After the psychological screening and urological examination procedure, 26 pED patients and 26 healthy matched controls were enrolled. Our results explicated significantly lower baseline brain activity in the right anterior insula and right orbitofrontal cortex for pED patients (multiple comparison corrected). Additionally, the voxel-wise correlation analysis showed that ALFF of the right anterior insula was correlated with the outcomes of erectile function (multiple comparison corrected). Our results implied there was impaired cognitive and motivational processing of sexual stimuli in pED patients. Our current findings may shed light on the neural pathology underlying pED. We hope that our study has provided a new angle looking into pED research by investigating resting state brain activity. Furthermore, we suggest that the current study may put forward a more subtle conception of insular influence on pED, which may help foster new specific, mechanistic insights."
},
{
"pmid": "18310804",
"title": "Brain-computer symbiosis.",
"abstract": "The theoretical groundwork of the 1930s and 1940s and the technical advance of computers in the following decades provided the basis for dramatic increases in human efficiency. While computers continue to evolve, and we can still expect increasing benefits from their use, the interface between humans and computers has begun to present a serious impediment to full realization of the potential payoff. This paper is about the theoretical and practical possibility that direct communication between the brain and the computer can be used to overcome this impediment by improving or augmenting conventional forms of human communication. It is about the opportunity that the limitations of our body's input and output capacities can be overcome using direct interaction with the brain, and it discusses the assumptions, possible limitations and implications of a technology that I anticipate will be a major source of pervasive changes in the coming decades."
},
{
"pmid": "26017442",
"title": "Deep learning.",
"abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech."
},
{
"pmid": "17409472",
"title": "A review of classification algorithms for EEG-based brain-computer interfaces.",
"abstract": "In this paper we review classification algorithms used to design brain-computer interface (BCI) systems based on electroencephalography (EEG). We briefly present the commonly employed algorithms and describe their critical properties. Based on the literature, we compare them in terms of performance and provide guidelines to choose the suitable classification algorithm(s) for a specific BCI."
},
{
"pmid": "16792278",
"title": "BCI Meeting 2005--workshop on BCI signal processing: feature extraction and translation.",
"abstract": "This paper describes the outcome of discussions held during the Third International BCI Meeting at a workshop charged with reviewing and evaluating the current state of and issues relevant to brain-computer interface (BCI) feature extraction and translation. The issues discussed include a taxonomy of methods and applications, time-frequency spatial analysis, optimization schemes, the role of insight in analysis, adaptation, and methods for quantifying BCI feedback."
},
{
"pmid": "27966546",
"title": "Noninvasive Electroencephalogram Based Control of a Robotic Arm for Reach and Grasp Tasks.",
"abstract": "Brain-computer interface (BCI) technologies aim to provide a bridge between the human brain and external devices. Prior research using non-invasive BCI to control virtual objects, such as computer cursors and virtual helicopters, and real-world objects, such as wheelchairs and quadcopters, has demonstrated the promise of BCI technologies. However, controlling a robotic arm to complete reach-and-grasp tasks efficiently using non-invasive BCI has yet to be shown. In this study, we found that a group of 13 human subjects could willingly modulate brain activity to control a robotic arm with high accuracy for performing tasks requiring multiple degrees of freedom by combination of two sequential low dimensional controls. Subjects were able to effectively control reaching of the robotic arm through modulation of their brain rhythms within the span of only a few training sessions and maintained the ability to control the robotic arm over multiple months. Our results demonstrate the viability of human operation of prosthetic limbs using non-invasive BCI technology."
},
{
"pmid": "25107852",
"title": "DARPA-funded efforts in the development of novel brain-computer interface technologies.",
"abstract": "The Defense Advanced Research Projects Agency (DARPA) has funded innovative scientific research and technology developments in the field of brain-computer interfaces (BCI) since the 1970s. This review highlights some of DARPA's major advances in the field of BCI, particularly those made in recent years. Two broad categories of DARPA programs are presented with respect to the ultimate goals of supporting the nation's warfighters: (1) BCI efforts aimed at restoring neural and/or behavioral function, and (2) BCI efforts aimed at improving human training and performance. The programs discussed are synergistic and complementary to one another, and, moreover, promote interdisciplinary collaborations among researchers, engineers, and clinicians. Finally, this review includes a summary of some of the remaining challenges for the field of BCI, as well as the goals of new DARPA efforts in this domain."
},
{
"pmid": "24110155",
"title": "Real-time modeling and 3D visualization of source dynamics and connectivity using wearable EEG.",
"abstract": "This report summarizes our recent efforts to deliver real-time data extraction, preprocessing, artifact rejection, source reconstruction, multivariate dynamical system analysis (including spectral Granger causality) and 3D visualization as well as classification within the open-source SIFT and BCILAB toolboxes. We report the application of such a pipeline to simulated data and real EEG data obtained from a novel wearable high-density (64-channel) dry EEG system."
},
{
"pmid": "22438708",
"title": "Brain computer interfaces, a review.",
"abstract": "A brain-computer interface (BCI) is a hardware and software communications system that permits cerebral activity alone to control computers or external devices. The immediate goal of BCI research is to provide communications capabilities to severely disabled people who are totally paralyzed or 'locked in' by neurological neuromuscular disorders, such as amyotrophic lateral sclerosis, brain stem stroke, or spinal cord injury. Here, we review the state-of-the-art of BCIs, looking at the different steps that form a standard BCI: signal acquisition, preprocessing or signal enhancement, feature extraction, classification and the control interface. We discuss their advantages, drawbacks, and latest advances, and we survey the numerous technologies reported in the scientific literature to design each step of a BCI. First, the review examines the neuroimaging modalities used in the signal acquisition step, each of which monitors a different functional brain activity such as electrical, magnetic or metabolic activity. Second, the review discusses different electrophysiological control signals that determine user intentions, which can be detected in brain activity. Third, the review includes some techniques used in the signal enhancement step to deal with the artifacts in the control signals and improve the performance. Fourth, the review studies some mathematic algorithms used in the feature extraction and classification steps which translate the information in the control signals into commands that operate a computer or other device. Finally, the review provides an overview of various BCI applications that control a range of devices."
},
{
"pmid": "11204034",
"title": "Optimal spatial filtering of single trial EEG during imagined hand movement.",
"abstract": "The development of an electroencephalograph (EEG)-based brain-computer interface (BCI) requires rapid and reliable discrimination of EEG patterns, e.g., associated with imaginary movement. One-sided hand movement imagination results in EEG changes located at contra- and ipsilateral central areas. We demonstrate that spatial filters for multichannel EEG effectively extract discriminatory information from two populations of single-trial EEG, recorded during left- and right-hand movement imagery. The best classification results for three subjects are 90.8%, 92.7%, and 99.7%. The spatial filters are estimated from a set of data by the method of common spatial patterns and reflect the specific activation of cortical areas. The method performs a weighting of the electrodes according to their importance for the classification task. The high recognition rates and computational simplicity make it a promising method for an EEG-based brain-computer interface."
},
{
"pmid": "28782865",
"title": "Deep learning with convolutional neural networks for EEG decoding and visualization.",
"abstract": "Deep learning with convolutional neural networks (deep ConvNets) has revolutionized computer vision through end-to-end learning, that is, learning from the raw data. There is increasing interest in using deep ConvNets for end-to-end EEG analysis, but a better understanding of how to design and train ConvNets for end-to-end EEG decoding and how to visualize the informative EEG features the ConvNets learn is still needed. Here, we studied deep ConvNets with a range of different architectures, designed for decoding imagined or executed tasks from raw EEG. Our results show that recent advances from the machine learning field, including batch normalization and exponential linear units, together with a cropped training strategy, boosted the deep ConvNets decoding performance, reaching at least as good performance as the widely used filter bank common spatial patterns (FBCSP) algorithm (mean decoding accuracies 82.1% FBCSP, 84.0% deep ConvNets). While FBCSP is designed to use spectral power modulations, the features used by ConvNets are not fixed a priori. Our novel methods for visualizing the learned features demonstrated that ConvNets indeed learned to use spectral power modulations in the alpha, beta, and high gamma frequencies, and proved useful for spatially mapping the learned features by revealing the topography of the causal contributions of features in different frequency bands to the decoding decision. Our study thus shows how to design and train ConvNets to decode task-related information from the raw EEG without handcrafted features and highlights the potential of deep ConvNets combined with advanced visualization techniques for EEG-based brain mapping. Hum Brain Mapp 38:5391-5420, 2017. © 2017 Wiley Periodicals, Inc."
},
{
"pmid": "16317224",
"title": "Characterization of four-class motor imagery EEG data for the BCI-competition 2005.",
"abstract": "To determine and compare the performance of different classifiers applied to four-class EEG data is the goal of this communication. The EEG data were recorded with 60 electrodes from five subjects performing four different motor-imagery tasks. The EEG signal was modeled by an adaptive autoregressive (AAR) process whose parameters were extracted by Kalman filtering. By these AAR parameters four classifiers were obtained, namely minimum distance analysis (MDA)--for single-channel analysis, and linear discriminant analysis (LDA), k-nearest-neighbor (kNN) classifiers as well as support vector machine (SVM) classifiers for multi-channel analysis. The performance of all four classifiers was quantified and evaluated by Cohen's kappa coefficient, an advantageous measure we introduced here to BCI research for the first time. The single-channel results gave rise to topographic maps that revealed the channels with the highest level of separability between classes for each subject. Our results of the multi-channel analysis indicate SVM as the most successful classifier, whereas kNN performed worst."
},
{
"pmid": "25462637",
"title": "Deep learning in neural networks: an overview.",
"abstract": "In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks."
},
{
"pmid": "17015237",
"title": "Brain-controlled interfaces: movement restoration with neural prosthetics.",
"abstract": "Brain-controlled interfaces are devices that capture brain transmissions involved in a subject's intention to act, with the potential to restore communication and movement to those who are immobilized. Current devices record electrical activity from the scalp, on the surface of the brain, and within the cerebral cortex. These signals are being translated to command signals driving prosthetic limbs and computer displays. Somatosensory feedback is being added to this control as generated behaviors become more complex. New technology to engineer the tissue-electrode interface, electrode design, and extraction algorithms to transform the recorded signal to movement will help translate exciting laboratory demonstrations to patient practice in the near future."
},
{
"pmid": "22431526",
"title": "A novel Bayesian framework for discriminative feature extraction in Brain-Computer Interfaces.",
"abstract": "As there has been a paradigm shift in the learning load from a human subject to a computer, machine learning has been considered as a useful tool for Brain-Computer Interfaces (BCIs). In this paper, we propose a novel Bayesian framework for discriminative feature extraction for motor imagery classification in an EEG-based BCI in which the class-discriminative frequency bands and the corresponding spatial filters are optimized by means of the probabilistic and information-theoretic approaches. In our framework, the problem of simultaneous spatiospectral filter optimization is formulated as the estimation of an unknown posterior probability density function (pdf) that represents the probability that a single-trial EEG of predefined mental tasks can be discriminated in a state. In order to estimate the posterior pdf, we propose a particle-based approximation method by extending a factored-sampling technique with a diffusion process. An information-theoretic observation model is also devised to measure discriminative power of features between classes. From the viewpoint of classifier design, the proposed method naturally allows us to construct a spectrally weighted label decision rule by linearly combining the outputs from multiple classifiers. We demonstrate the feasibility and effectiveness of the proposed method by analyzing the results and its success on three public databases."
},
{
"pmid": "27900952",
"title": "A novel deep learning approach for classification of EEG motor imagery signals.",
"abstract": "OBJECTIVE\nSignal classification is an important issue in brain computer interface (BCI) systems. Deep learning approaches have been used successfully in many recent studies to learn features and classify different types of data. However, the number of studies that employ these approaches on BCI applications is very limited. In this study we aim to use deep learning methods to improve classification performance of EEG motor imagery signals.\n\n\nAPPROACH\nIn this study we investigate convolutional neural networks (CNN) and stacked autoencoders (SAE) to classify EEG Motor Imagery signals. A new form of input is introduced to combine time, frequency and location information extracted from EEG signal and it is used in CNN having one 1D convolutional and one max-pooling layers. We also proposed a new deep network by combining CNN and SAE. In this network, the features that are extracted in CNN are classified through the deep network SAE.\n\n\nMAIN RESULTS\nThe classification performance obtained by the proposed method on BCI competition IV dataset 2b in terms of kappa value is 0.547. Our approach yields 9% improvement over the winner algorithm of the competition.\n\n\nSIGNIFICANCE\nOur results show that deep learning methods provide better classification performance compared to other state of art approaches. These methods can be applied successfully to BCI systems where the amount of data is large due to daily recording."
},
{
"pmid": "30472579",
"title": "Optimized deep neural network architecture for robust detection of epileptic seizures using EEG signals.",
"abstract": "OBJECTIVE\nAutomatic detection of epileptic seizures based on deep learning methods received much attention last year. However, the potential of deep neural networks in seizure detection has not been fully exploited in terms of the optimal design of the model architecture and the detection power of the time-series brain data. In this work, a deep neural network architecture is introduced to learn the temporal dependencies in Electroencephalogram (EEG) data for robust detection of epileptic seizures.\n\n\nMETHODS\nA deep Long Short-Term Memory (LSTM) network is first used to learn the high-level representations of different EEG patterns. Then, a Fully Connected (FC) layer is adopted to extract the most robust EEG features relevant to epileptic seizures. Finally, these features are supplied to a softmax layer to output predicted labels.\n\n\nRESULTS\nThe results on a benchmark clinical dataset reveal the prevalence of the proposed approach over the baseline techniques; achieving 100% classification accuracy, 100% sensitivity, and 100% specificity. Our approach is additionally shown to be robust in noisy and real-life conditions. It maintains high detection performance in the existence of common EEG artifacts (muscle activities and eye movement) as well as background noise.\n\n\nCONCLUSIONS\nWe demonstrate the clinical feasibility of our seizure detection approach achieving superior performance over the cutting-edge techniques in terms of seizure detection performance and robustness.\n\n\nSIGNIFICANCE\nOur seizure detection approach can contribute to accurate and robust detection of epileptic seizures in ideal and real-life situations."
},
{
"pmid": "29932424",
"title": "EEGNet: a compact convolutional neural network for EEG-based brain-computer interfaces.",
"abstract": "OBJECTIVE\nBrain-computer interfaces (BCI) enable direct communication with a computer, using neural activity as the control signal. This neural signal is generally chosen from a variety of well-studied electroencephalogram (EEG) signals. For a given BCI paradigm, feature extractors and classifiers are tailored to the distinct characteristics of its expected EEG control signal, limiting its application to that specific signal. Convolutional neural networks (CNNs), which have been used in computer vision and speech recognition to perform automatic feature extraction and classification, have successfully been applied to EEG-based BCIs; however, they have mainly been applied to single BCI paradigms and thus it remains unclear how these architectures generalize to other paradigms. Here, we ask if we can design a single CNN architecture to accurately classify EEG signals from different BCI paradigms, while simultaneously being as compact as possible.\n\n\nAPPROACH\nIn this work we introduce EEGNet, a compact convolutional neural network for EEG-based BCIs. We introduce the use of depthwise and separable convolutions to construct an EEG-specific model which encapsulates well-known EEG feature extraction concepts for BCI. We compare EEGNet, both for within-subject and cross-subject classification, to current state-of-the-art approaches across four BCI paradigms: P300 visual-evoked potentials, error-related negativity responses (ERN), movement-related cortical potentials (MRCP), and sensory motor rhythms (SMR).\n\n\nMAIN RESULTS\nWe show that EEGNet generalizes across paradigms better than, and achieves comparably high performance to, the reference algorithms when only limited training data is available across all tested paradigms. In addition, we demonstrate three different approaches to visualize the contents of a trained EEGNet model to enable interpretation of the learned features.\n\n\nSIGNIFICANCE\nOur results suggest that EEGNet is robust enough to learn a wide variety of interpretable features over a range of BCI tasks. Our models can be found at: https://github.com/vlawhern/arl-eegmodels."
},
{
"pmid": "17281471",
"title": "Common Spatial Pattern Method for Channel Selelction in Motor Imagery Based Brain-computer Interface.",
"abstract": "A brain-computer interface(BCI) based on motor imagery (MI) translates the subject's motor intention into a control signal through classifying the electroencephalogram (EEG) patterns of different imagination tasks, e.g. hand and foot movements. Characteristic EEG spatial patterns make MI tasks substantially discriminable. Multi-channel EEGs are usually necessary for spatial pattern identification and therefore MI-based BCI is still in the stage of laboratory demonstration, to some extent, due to the need for constanly troublesome recording preparation. This paper presents a method for channel reduction in MI-based BCI. Common spatial pattern (CSP) method was employed to analyze spatial patterns of imagined hand and foot movements. Significant channels were selelcted by searching the maximunms of spatial pattern vectors in scalp mappings. A classification algorithm was developed by means of combining linear discriminat analysis towards even-related desynchronization (ERD) and readiness potential (RP). The classification accuracies with four optimal channels were 93.45% and 91.88% for two subjects."
},
{
"pmid": "15188883",
"title": "BCI Competition 2003--Data set IV: an algorithm based on CSSD and FDA for classifying single-trial EEG.",
"abstract": "This paper presents an algorithm for classifying single-trial electroencephalogram (EEG) during the preparation of self-paced tapping. It combines common spatial subspace decomposition with Fisher discriminant analysis to extract features from multichannel EEG. Three features are obtained based on Bereitschaftspotential and event-related desynchronization. Finally, a perceptron neural network is trained as the classifier. This algorithm was applied to the data set (self-paced 1s) of \"BCI Competition 2003\" with a classification accuracy of 84% on the test set."
}
] |
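Several of the reference entries above (e.g., the CSP-based channel selection and the CSSD/FDA classifier) describe common spatial pattern filtering only verbally. The sketch below shows the standard two-class CSP recipe for orientation; it is not code from any of the cited papers, and the function names and the (n_trials, n_channels, n_samples) trial layout are assumptions made for this illustration.

```python
# Hedged sketch of two-class Common Spatial Patterns (CSP) for motor-imagery EEG.
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Spatial filters maximizing the variance ratio between the two classes."""
    def mean_cov(trials):
        # Average trace-normalized spatial covariance over the trials of one class.
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)

    cov_a, cov_b = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem cov_a w = lambda (cov_a + cov_b) w;
    # assumes the composite covariance is full rank (enough samples per trial).
    eigvals, eigvecs = eigh(cov_a, cov_a + cov_b)
    order = np.argsort(eigvals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])  # both ends of the spectrum
    return eigvecs[:, picks].T        # shape: (2 * n_pairs, n_channels)

def log_variance_features(trials, filters):
    """Classic CSP features: log of the normalized variance of the filtered signals."""
    feats = []
    for x in trials:
        z = filters @ x
        var = np.var(z, axis=1)
        feats.append(np.log(var / var.sum()))
    return np.array(feats)
```

The filters taken from both ends of the eigenvalue spectrum emphasize one class' variance while suppressing the other's, and the resulting log-variance features are what a downstream classifier such as LDA or an SVM would typically consume.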
Scientific Reports | 31827155 | PMC6906424 | 10.1038/s41598-019-55108-8 | Projection-to-Projection Translation for Hybrid X-ray and Magnetic Resonance Imaging | Hybrid X-ray and magnetic resonance (MR) imaging promises large potential in interventional medical imaging applications due to the broad variety of contrast of MRI combined with fast imaging of X-ray-based modalities. To fully utilize the potential of the vast amount of existing image enhancement techniques, the corresponding information from both modalities must be present in the same domain. For image-guided interventional procedures, X-ray fluoroscopy has proven to be the modality of choice. Synthesizing one modality from another in this case is an ill-posed problem due to ambiguous signal and overlapping structures in projective geometry. To take on these challenges, we present a learning-based solution to MR to X-ray projection-to-projection translation. We propose an image generator network that focuses on high representation capacity in higher resolution layers to allow for accurate synthesis of fine details in the projection images. Additionally, a weighting scheme in the loss computation that favors high-frequency structures is proposed to focus on the important details and contours in projection imaging. The proposed extensions prove valuable in generating X-ray projection images with natural appearance. Our approach achieves a deviation from the ground truth of only 6% and structural similarity measure of 0.913 ± 0.005. In particular the high frequency weighting assists in generating projection images with sharp appearance and reduces erroneously synthesized fine details. | Related WorkMR projection imagingCommon clinical X-ray systems used for image-guided interventions exhibit a cone-beam geometry with the according perspective distortion in the acquired images. These systems can acquire fluoroscopic sequences with a rate of up to 30 frames per second. In contrast, MR imaging is usually performed in a tomographic fashion. The acquisition of whole volumes and subsequent forward projection is, however, too slow to be applicable in interventional procedures. Fortunately, the direct acquisition of projection images is possible9,10. Yet, the resulting images are subject to a parallel-beam geometry and, therefore, incompatible with their cone-beam projected X-ray counterpart. To remedy this contradiction, recent research has addressed the problem of acquiring MR projections with the same perspective distortion as an X-ray system, without the detour of volumetric acquisition11. Most approaches rely on rebinning to convert the acquired projection rays from parallel to fan- or cone-beam geometry9,12. Unfortunately, this requires interpolation which reduces the resulting image’s quality. To avoid loss of resolution, Syben et al.10 proposed a neural network based algorithm to generate this perspective projection.Image-to-image translationProjection-to-projection translation suffers from the problems explained in Section 1, i.e., ambiguous signal and overlapping structures. However, from a machine learning point of view, it is essentially an image-to-image translation task. While we target the application of synthesizing 2D projection images instead of tomographic slice images, the preliminary work on pseudo-CT generation, mostly applied for radiotherapy treatment planning, is still relevant. 
Two main approaches to estimating 3D CT images from corresponding MR scans have prevailed so far: atlas-based and learning-based methods. Atlas-based methods as proposed in13,14 achieve good results; however, their dependence on accurate image registration and their often high computational complexity are undesirable in the interventional setting. In contrast, inference is fast in learning-based approaches, which makes them suitable for real-time applications. Being essentially a regression task, image-to-image translation can be solved with classical machine learning methods such as random forests5. The advent of convolutional neural networks (CNN) revolutionized the field of image synthesis and image processing in general, and numerous approaches to image synthesis using deep neural networks have been published since then15–17. Lately, generative adversarial networks (GAN)18 have proved valuable for this task. Isola et al.19 presented one of the first approaches to general-purpose image-to-image translation based on GANs, and others followed6,7,20–24. Applying this idea to medical image synthesis, Nie et al. enhanced the conditional GAN structure with an auto-context model and achieved good results in synthesizing tomographic CT images from their MR counterparts6. Furthermore, in7,25–27, the successful training of a GAN on unpaired MR and CT images was shown. As a result of these successes, GANs are now a frequently used tool in medical imaging research, extending beyond the realm of image-to-image translation28–32. Inspired by these advances in image synthesis, we seek a deep learning-based solution for generating X-ray projections from corresponding MR projections. In previous research33, we showed a proof of concept that, however, produced only mediocre images. We remedy this shortcoming in the present work and additionally provide a thorough analysis of the results on a more representative data set. (An illustrative sketch of the high-frequency loss weighting mentioned in this row's abstract follows the reference entries below.) | [
"11169837",
"26429262",
"23442772",
"29674235",
"29771238",
"24784377",
"29870364",
"22770690",
"24320447",
"25915956"
] | [
{
"pmid": "11169837",
"title": "A truly hybrid interventional MR/X-ray system: feasibility demonstration.",
"abstract": "A system enabling both x-ray fluoroscopy and MRI in a single exam, without requiring patient repositioning, would be a powerful tool for image-guided interventions. We studied the technical issues related to acquisition of x-ray images inside an open MRI system (GE Signa SP). The system includes a flat-panel x-ray detector (GE Medical Systems) placed under the patient bed, a fixed-anode x-ray tube overhead with the anode-cathode axis aligned with the main magnetic field and a high-frequency x-ray generator (Lunar Corp.). New challenges investigated related to: 1) deflection and defocusing of the electron beam of the x-ray tube; 2) proper functioning of the flat panel; 3) effects on B0 field homogeneity; and 4) additional RF noise in the MR images. We have acquired high-quality x-ray and MR images without repositioning the object using our hybrid system, which demonstrates the feasibility of this new configuration. Further work is required to ensure that the highest possible image quality is achieved with both MR and x-ray modalities."
},
{
"pmid": "26429262",
"title": "Vision 20/20: Simultaneous CT-MRI--Next chapter of multimodality imaging.",
"abstract": "Multimodality imaging systems such as positron emission tomography-computed tomography (PET-CT) and MRI-PET are widely available, but a simultaneous CT-MRI instrument has not been developed. Synergies between independent modalities, e.g., CT, MRI, and PET/SPECT can be realized with image registration, but such postprocessing suffers from registration errors that can be avoided with synchronized data acquisition. The clinical potential of simultaneous CT-MRI is significant, especially in cardiovascular and oncologic applications where studies of the vulnerable plaque, response to cancer therapy, and kinetic and dynamic mechanisms of targeted agents are limited by current imaging technologies. The rationale, feasibility, and realization of simultaneous CT-MRI are described in this perspective paper. The enabling technologies include interior tomography, unique gantry designs, open magnet and RF sequences, and source and detector adaptation. Based on the experience with PET-CT, PET-MRI, and MRI-LINAC instrumentation where hardware innovation and performance optimization were instrumental to construct commercial systems, the authors provide top-level concepts for simultaneous CT-MRI to meet clinical requirements and new challenges. Simultaneous CT-MRI fills a major gap of modality coupling and represents a key step toward the so-called \"omnitomography\" defined as the integration of all relevant imaging modalities for systems biology and precision medicine."
},
{
"pmid": "23442772",
"title": "Magnetic resonance-based attenuation correction for PET/MR hybrid imaging using continuous valued attenuation maps.",
"abstract": "OBJECTIVES\nAttenuation correction of positron emission tomographic (PET) data is critical in providing accurate and quantitative PET volumes. Deriving an attenuation map (μ-map) from magnetic resonance (MR) volumes is a challenge in PET/MR hybrid imaging. The difficulty lies in differentiating cortical bone from air from standard MR sequences because both these classes yield little to no MR signal and thus shows no distinguishable information. The objective of this contribution is 2-fold: (1) to generate and evaluate a continuous valued computed tomography (CT)-like attenuation map (μ-map) with continuous density values from dedicated MR sequences and (2) to compare its PET quantification accuracy with respect to a CT-based attenuation map as the criterion standard and other segmentation-based attenuation maps for studies of the head.\n\n\nMATERIALS AND METHODS\nThree-dimensional Dixon-volume interpolated breath-hold examination and ultrashort echo time sequences were acquired for each patient on a Siemens 3-T Biograph mMR PET/MR hybrid system and the corresponding patient CT on a Siemens Biograph 64. A pseudo-CT training was done using the epsilon-insensitive support vector regression ([Latin Small Letter Open E]-SVR) technique on 5 patients who had CT/MR/PET triplets, and the generated model was evaluated on 5 additional patients who were not included in the training process. Four μ-maps were compared, and 3 of them derived from CT: scaled CT (μ-map CT), 3-class segmented CT without cortical bone (μ-map no bone), 4-class segmented CT with cortical bone (μ-map bone), and 1 from MR sequences via [Latin Small Letter Open E]-SVR technique previously mentioned (ie, MR predicted [μ-map MR]). Positron emission tomographic volumes with each of the previously mentioned μ-maps were reconstructed, and relative difference images were calculated with respect to μ-map CT as the criterion standard.\n\n\nRESULTS\nFor PET quantification, the proposed method yields a mean (SD) absolute error of 2.40% (3.69%) and 2.16% (1.77%) for the complete brain and the regions close to the cortical bone, respectively. In contrast, PET using μ-map no bone yielded 10.15% (3.31%) and 11.03 (2.26%) for the same, although PET using μ-map bone resulted in errors of 3.96% (3.71%) and 4.22% (3.91%). Furthermore, it is shown that the model can be extended to predict pseudo-CTs for other anatomical regions on the basis of only MR information.\n\n\nCONCLUSIONS\nIn this study, the generation of continuous valued attenuation maps from MR sequences is demonstrated and its effect on PET quantification is evaluated in comparison with segmentation-based μ-maps. A less-than-2-minute acquisition time makes the proposed approach promising for a clinical application for studies of the head. However, further experiments are required to validate and evaluate this technique for attenuation correction in other regions of the body."
},
{
"pmid": "29674235",
"title": "Deep embedding convolutional neural network for synthesizing CT image from T1-Weighted MR image.",
"abstract": "Recently, more and more attention is drawn to the field of medical image synthesis across modalities. Among them, the synthesis of computed tomography (CT) image from T1-weighted magnetic resonance (MR) image is of great importance, although the mapping between them is highly complex due to large gaps of appearances of the two modalities. In this work, we aim to tackle this MR-to-CT synthesis task by a novel deep embedding convolutional neural network (DECNN). Specifically, we generate the feature maps from MR images, and then transform these feature maps forward through convolutional layers in the network. We can further compute a tentative CT synthesis from the midway of the flow of feature maps, and then embed this tentative CT synthesis result back to the feature maps. This embedding operation results in better feature maps, which are further transformed forward in DECNN. After repeating this embedding procedure for several times in the network, we can eventually synthesize a final CT image in the end of the DECNN. We have validated our proposed method on both brain and prostate imaging datasets, by also comparing with the state-of-the-art methods. Experimental results suggest that our DECNN (with repeated embedding operations) demonstrates its superior performances, in terms of both the perceptive quality of the synthesized CT image and the run-time cost for synthesizing a CT image."
},
{
"pmid": "29771238",
"title": "On the direct acquisition of beam's-eye-view images in MRI for integration with external beam radiotherapy.",
"abstract": "The recent interest in the integration of external beam radiotherapy with a magnetic resonance (MR) imaging unit offers the potential for real-time adaptive tumour tracking during radiation treatment. The tracking of large tumours which follow a rapid trajectory may best be served by the generation of a projection image from the perspective of the beam source, or 'beam's eye view' (BEV). This type of image projection represents the path of the radiation beam, thus enabling rapid compensations for target translations, rotations and deformations, as well time-dependent critical structure avoidance. MR units have been traditionally incapable of this type of imaging except through lengthy 3D acquisitions and ray tracing procedures. This work investigates some changes to the traditional MR scanner architecture that would permit the direct acquisition of a BEV image suitable for integration with external beam radiotherapy. Based on the theory presented in this work, a phantom was imaged with nonlinear encoding-gradient field patterns to demonstrate the technique. The phantom was constructed with agarose gel tubes spaced two cm apart at their base and oriented to converge towards an imaginary beam source 100 cm away. A corresponding virtual phantom was also created and subjected to the same encoding technique as in the physical demonstration, allowing the method to be tested without hardware limitations. The experimentally acquired and simulated images indicate the feasibility of the technique, showing a substantial amount of blur reduction in a diverging phantom compared to the conventional imaging geometry, particularly with the nonlinear gradients ideally implemented. The theory is developed to demonstrate that the method can be adapted in a number of different configurations to accommodate all proposed integration schemes for MR units and radiotherapy sources. Depending on the configuration, the implementation of this technique will require between two and four additional nonlinear encoding coils."
},
{
"pmid": "24784377",
"title": "MRI-based treatment planning with pseudo CT generated through atlas registration.",
"abstract": "PURPOSE\nTo evaluate the feasibility and accuracy of magnetic resonance imaging (MRI)-based treatment planning using pseudo CTs generated through atlas registration.\n\n\nMETHODS\nA pseudo CT, providing electron density information for dose calculation, was generated by deforming atlas CT images previously acquired on other patients. The authors tested 4 schemes of synthesizing a pseudo CT from single or multiple deformed atlas images: use of a single arbitrarily selected atlas, arithmetic mean process using 6 atlases, and pattern recognition with Gaussian process (PRGP) using 6 or 12 atlases. The required deformation for atlas CT images was derived from a nonlinear registration of conjugated atlas MR images to that of the patient of interest. The contrasts of atlas MR images were adjusted by histogram matching to reduce the effect of different sets of acquisition parameters. For comparison, the authors also tested a simple scheme assigning the Hounsfield unit of water to the entire patient volume. All pseudo CT generating schemes were applied to 14 patients with common pediatric brain tumors. The image similarity of real patient-specific CT and pseudo CTs constructed by different schemes was compared. Differences in computation times were also calculated. The real CT in the treatment planning system was replaced with the pseudo CT, and the dose distribution was recalculated to determine the difference.\n\n\nRESULTS\nThe atlas approach generally performed better than assigning a bulk CT number to the entire patient volume. Comparing atlas-based schemes, those using multiple atlases outperformed the single atlas scheme. For multiple atlas schemes, the pseudo CTs were similar to the real CTs (correlation coefficient, 0.787-0.819). The calculated dose distribution was in close agreement with the original dose. Nearly the entire patient volume (98.3%-98.7%) satisfied the criteria of chi-evaluation (<2% maximum dose and 2 mm range). The dose to 95% of the volume and the percentage of volume receiving at least 95% of the prescription dose in the planning target volume differed from the original values by less than 2% of the prescription dose (root-mean-square, RMS < 1%). The PRGP scheme did not perform better than the arithmetic mean process with the same number of atlases. Increasing the number of atlases from 6 to 12 often resulted in improvements, but statistical significance was not always found.\n\n\nCONCLUSIONS\nMRI-based treatment planning with pseudo CTs generated through atlas registration is feasible for pediatric brain tumor patients. The doses calculated from pseudo CTs agreed well with those from real CTs, showing dosimetric accuracy within 2% for the PTV when multiple atlases were used. The arithmetic mean process may be a reasonable choice over PRGP for the synthesis scheme considering performance and computational costs."
},
{
"pmid": "29870364",
"title": "Low-Dose CT Image Denoising Using a Generative Adversarial Network With Wasserstein Distance and Perceptual Loss.",
"abstract": "The continuous development and extensive use of computed tomography (CT) in medical practice has raised a public concern over the associated radiation dose to the patient. Reducing the radiation dose may lead to increased noise and artifacts, which can adversely affect the radiologists' judgment and confidence. Hence, advanced image reconstruction from low-dose CT data is needed to improve the diagnostic performance, which is a challenging problem due to its ill-posed nature. Over the past years, various low-dose CT methods have produced impressive results. However, most of the algorithms developed for this application, including the recently popularized deep learning techniques, aim for minimizing the mean-squared error (MSE) between a denoised CT image and the ground truth under generic penalties. Although the peak signal-to-noise ratio is improved, MSE- or weighted-MSE-based methods can compromise the visibility of important structural details after aggressive denoising. This paper introduces a new CT image denoising method based on the generative adversarial network (GAN) with Wasserstein distance and perceptual similarity. The Wasserstein distance is a key concept of the optimal transport theory and promises to improve the performance of GAN. The perceptual loss suppresses noise by comparing the perceptual features of a denoised output against those of the ground truth in an established feature space, while the GAN focuses more on migrating the data noise distribution from strong to weak statistically. Therefore, our proposed method transfers our knowledge of visual perception to the image denoising task and is capable of not only reducing the image noise level but also trying to keep the critical information at the same time. Promising results have been obtained in our experiments with clinical CT images."
},
{
"pmid": "22770690",
"title": "3D Slicer as an image computing platform for the Quantitative Imaging Network.",
"abstract": "Quantitative analysis has tremendous but mostly unrealized potential in healthcare to support objective and accurate interpretation of the clinical imaging. In 2008, the National Cancer Institute began building the Quantitative Imaging Network (QIN) initiative with the goal of advancing quantitative imaging in the context of personalized therapy and evaluation of treatment response. Computerized analysis is an important component contributing to reproducibility and efficiency of the quantitative imaging techniques. The success of quantitative imaging is contingent on robust analysis methods and software tools to bring these methods from bench to bedside. 3D Slicer is a free open-source software application for medical image computing. As a clinical research tool, 3D Slicer is similar to a radiology workstation that supports versatile visualizations but also provides advanced functionality such as automated segmentation and registration for a variety of application domains. Unlike a typical radiology workstation, 3D Slicer is free and is not tied to specific hardware. As a programming platform, 3D Slicer facilitates translation and evaluation of the new quantitative methods by allowing the biomedical researcher to focus on the implementation of the algorithm and providing abstractions for the common tasks of data communication, visualization and user interface development. Compared to other tools that provide aspects of this functionality, 3D Slicer is fully open source and can be readily extended and redistributed. In addition, 3D Slicer is designed to facilitate the development of new functionality in the form of 3D Slicer extensions. In this paper, we present an overview of 3D Slicer as a platform for prototyping, development and evaluation of image analysis tools for clinical research applications. To illustrate the utility of the platform in the scope of QIN, we discuss several use cases of 3D Slicer by the existing QIN teams, and we elaborate on the future directions that can further facilitate development and validation of imaging biomarkers using 3D Slicer."
},
{
"pmid": "24320447",
"title": "CONRAD--a software framework for cone-beam imaging in radiology.",
"abstract": "PURPOSE\nIn the community of x-ray imaging, there is a multitude of tools and applications that are used in scientific practice. Many of these tools are proprietary and can only be used within a certain lab. Often the same algorithm is implemented multiple times by different groups in order to enable comparison. In an effort to tackle this problem, the authors created CONRAD, a software framework that provides many of the tools that are required to simulate basic processes in x-ray imaging and perform image reconstruction with consideration of nonlinear physical effects.\n\n\nMETHODS\nCONRAD is a Java-based state-of-the-art software platform with extensive documentation. It is based on platform-independent technologies. Special libraries offer access to hardware acceleration such as OpenCL. There is an easy-to-use interface for parallel processing. The software package includes different simulation tools that are able to generate up to 4D projection and volume data and respective vector motion fields. Well known reconstruction algorithms such as FBP, DBP, and ART are included. All algorithms in the package are referenced to a scientific source.\n\n\nRESULTS\nA total of 13 different phantoms and 30 processing steps have already been integrated into the platform at the time of writing. The platform comprises 74.000 nonblank lines of code out of which 19% are used for documentation. The software package is available for download at http://conrad.stanford.edu. To demonstrate the use of the package, the authors reconstructed images from two different scanners, a table top system and a clinical C-arm system. Runtimes were evaluated using the RabbitCT platform and demonstrate state-of-the-art runtimes with 2.5 s for the 256 problem size and 12.4 s for the 512 problem size.\n\n\nCONCLUSIONS\nAs a common software framework, CONRAD enables the medical physics community to share algorithms and develop new ideas. In particular this offers new opportunities for scientific collaboration and quantitative performance comparison between the methods of different groups."
},
{
"pmid": "25915956",
"title": "Epipolar Consistency in Transmission Imaging.",
"abstract": "This paper presents the derivation of the Epipolar Consistency Conditions (ECC) between two X-ray images from the Beer-Lambert law of X-ray attenuation and the Epipolar Geometry of two pinhole cameras, using Grangeat's theorem. We motivate the use of Oriented Projective Geometry to express redundant line integrals in projection images and define a consistency metric, which can be used, for instance, to estimate patient motion directly from a set of X-ray images. We describe in detail the mathematical tools to implement an algorithm to compute the Epipolar Consistency Metric and investigate its properties with detailed random studies on both artificial and real FD-CT data. A set of six reference projections of the CT scan of a fish were used to evaluate accuracy and precision of compensating for random disturbances of the ground truth projection matrix using an optimization of the consistency metric. In addition, we use three X-ray images of a pumpkin to prove applicability to real data. We conclude, that the metric might have potential in applications related to the estimation of projection geometry. By expression of redundancy between two arbitrary projection views, we in fact support any device or acquisition trajectory which uses a cone-beam geometry. We discuss certain geometric situations, where the ECC provide the ability to correct 3D motion, without the need for 3D reconstruction."
}
] |
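The row above (Projection-to-Projection Translation for Hybrid X-ray and Magnetic Resonance Imaging) motivates a loss weighting that favors high-frequency structures in the synthesized X-ray projections. The sketch below illustrates that general idea only; it is not the authors' implementation, the exact weighting scheme is not given in this excerpt, and the function names (high_frequency_weight, weighted_l1) and parameters (alpha, sigma) are assumptions made for this example.

```python
# Hedged sketch: an edge-weighted L1 reconstruction term of the kind described above,
# where pixels with strong gradients in the target X-ray projection get a larger weight.
import numpy as np
from scipy.ndimage import gaussian_filter, sobel

def high_frequency_weight(target, alpha=4.0, sigma=1.0):
    """Per-pixel weights in [1, 1 + alpha], large where the target has strong edges."""
    smoothed = gaussian_filter(target, sigma)          # suppress noise before differentiation
    grad_mag = np.hypot(sobel(smoothed, axis=0), sobel(smoothed, axis=1))
    grad_mag /= grad_mag.max() + 1e-8                  # normalize to [0, 1]
    return 1.0 + alpha * grad_mag

def weighted_l1(prediction, target, alpha=4.0, sigma=1.0):
    """Mean absolute error with high-frequency weighting derived from the target."""
    weights = high_frequency_weight(target, alpha, sigma)
    return float(np.mean(weights * np.abs(prediction - target)))
```

In a conditional GAN of the kind discussed in the related-work text, such a term would replace the plain L1 reconstruction loss while the adversarial loss is left unchanged.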
BMC Medical Informatics and Decision Making | 31830986 | PMC6907109 | 10.1186/s12911-019-0987-5 | A low-cost vision system based on the analysis of motor features for recognition and severity rating of Parkinson’s Disease | BackgroundAssessment and rating of Parkinson’s Disease (PD) are commonly based on the medical observation of several clinical manifestations, including the analysis of motor activities. In particular, medical specialists refer to the MDS-UPDRS (Movement Disorder Society – sponsored revision of Unified Parkinson’s Disease Rating Scale) that is the most widely used clinical scale for PD rating. However, clinical scales rely on the observation of some subtle motor phenomena that are either difficult to capture with human eyes or could be misclassified. This limitation motivated several researchers to develop intelligent systems based on machine learning algorithms able to automatically recognize the PD. Nevertheless, most of the previous studies investigated the classification between healthy subjects and PD patients without considering the automatic rating of different levels of severity.MethodsIn this context, we implemented a simple and low-cost clinical tool that can extract postural and kinematic features with the Microsoft Kinect v2 sensor in order to classify and rate PD. Thirty participants were enrolled for the purpose of the present study: sixteen PD patients rated according to MDS-UPDRS and fourteen healthy paired subjects. In order to investigate the motor abilities of the upper and lower body, we acquired and analyzed three main motor tasks: (1) gait, (2) finger tapping, and (3) foot tapping. After preliminary feature selection, different classifiers based on Support Vector Machine (SVM) and Artificial Neural Networks (ANN) were trained and evaluated for the best solution.ResultsConcerning the gait analysis, results showed that the ANN classifier performed the best by reaching 89.4% of accuracy with only nine features in diagnosis PD and 95.0% of accuracy with only six features in rating PD severity. Regarding the finger and foot tapping analysis, results showed that an SVM using the extracted features was able to classify healthy subjects versus PD patients with great performances by reaching 87.1% of accuracy. The results of the classification between mild and moderate PD patients indicated that the foot tapping features were the most representative ones to discriminate (81.0% of accuracy).ConclusionsThe results of this study have shown how a low-cost vision-based system can automatically detect subtle phenomena featuring the PD. Our findings suggest that the proposed tool can support medical specialists in the assessment and rating of PD patients in a real clinical scenario. | Related worksIn the last years, machine learning (ML) techniques have been used and compared for PD classification [9], e.g. Support Vector Machine (SVM), Linear Discriminant Analysis (LDA), Artificial Neural Network (ANN), Decision Tree (DT), Naïve Bayes. Most of the published studies investigate two-group classifications, i.e. PD patients vs healthy subjects of control (HC), with promising results obtained [10]. Few works, indeed, presented multiclass classification among patients at different disease stages [9–11].Researchers have also applied ML to classify PD patients and HC by extracting features related to motor abilities. 
Most studies that analyze either lower-limb or upper-limb motor abilities focus on a single exercise or a single symptom [12–31]. Different technologies have been exploited to capture the analyzed movements; the most widely used are optoelectronic systems, wearable sensors such as accelerometers and gyroscopes, and camera-based systems [32]. Objective and precise assessments of motor tasks are usually performed with large optoelectronic equipment (e.g., 3D-camera-based systems, instrumented walkways) that requires heavy installation and a large space to conduct the experiments [33]. Earlier efforts to develop clinic-based gait assessment tools for patients with PD have appeared in the literature over the past two decades. Muro-de-la-Herran et al. [34] and Tao et al. [35] reviewed the use of wearable sensors, such as accelerometers, gyroscopes, magnetoresistive sensors, flexible goniometers, electromagnetic tracking systems, and force sensors, in gait analysis (including both kinematics and kinetics) and reported that they have the potential to play an important role in various clinical applications. Among the proposed wearable sensors, inertial measurement units (IMUs) have been widely used, although several key limitations should be considered before adopting them as clinic-based tools; for example, gyroscope-based assessments suffer from a drifting effect [36]. Systems based on low-cost cameras may represent a valid solution that overcomes both the high cost and encumbrance of optoelectronic systems and the above-mentioned limitation of IMU-based systems. Since the release of the Microsoft Kinect SDK, the Kinect v2 sensor has been widely used in PD-related research. Several projects focused on rehabilitation and proposed experimental ways of monitoring patients' activities [36–39]. Most of the cited works compared the Kinect v2 sensor against gold standards, such as optoelectronic systems, in order to test and quantify its accuracy. Following recent trends in intelligent systems for personalized medicine [40–46], there is a clear need for new, low-cost, and accessible technologies that facilitate in-clinic and at-home assessment of motor alterations throughout the progression of PD [47]. In this context, we have proposed a low-cost camera-based system able to recognize and rate PD patients in a completely non-invasive manner. The main novel contributions with respect to the state of the art presented above are:
(1) differently from already published studies, we also considered the classification of PD severity; (2) we used the MS Kinect v2 system to investigate three motor exercises: gait, finger tapping and foot tapping; (3) we evaluated a large set of features extracted from the kinematic data (spatio-temporal parameters, frequency and postural variables); (4) differently from previous studies on gait analysis for PD classification [36, 37, 39, 48], we also considered postural oscillations and the kinematics of upper body parts (trunk, neck and arms); (5) we developed and compared two classifiers (SVM and ANN) able to assess and rate the movement impairment of PD patients using a specific set of features extracted from the recorded movements. Our main goal is to design and test a mobile, low-cost decision support system (DSS) that can be easily used both in specialized hospitals and at home, thus implementing recent telemedicine paradigms. The system aims to detect the early symptoms of PD and to provide a tool able to monitor, assess and rate the disease in a non-invasive manner from the early stages. | [
"28931491",
"29131880",
"26578041",
"6067254",
"30826265",
"19103505",
"22231198",
"26302523",
"24235277",
"23195495",
"19932995",
"21335305",
"24661464",
"17278588",
"23939408",
"26179817",
"18074362",
"20438759",
"24556672",
"22438763",
"28408157",
"26861323",
"26499251",
"26002604",
"29297304",
"30323198",
"21641850",
"27571158",
"9214783",
"19054734",
"29858068"
] | [
{
"pmid": "28931491",
"title": "Global, regional, and national burden of neurological disorders during 1990-2015: a systematic analysis for the Global Burden of Disease Study 2015.",
"abstract": "BACKGROUND\nComparable data on the global and country-specific burden of neurological disorders and their trends are crucial for health-care planning and resource allocation. The Global Burden of Diseases, Injuries, and Risk Factors (GBD) Study provides such information but does not routinely aggregate results that are of interest to clinicians specialising in neurological conditions. In this systematic analysis, we quantified the global disease burden due to neurological disorders in 2015 and its relationship with country development level.\n\n\nMETHODS\nWe estimated global and country-specific prevalence, mortality, disability-adjusted life-years (DALYs), years of life lost (YLLs), and years lived with disability (YLDs) for various neurological disorders that in the GBD classification have been previously spread across multiple disease groupings. The more inclusive grouping of neurological disorders included stroke, meningitis, encephalitis, tetanus, Alzheimer's disease and other dementias, Parkinson's disease, epilepsy, multiple sclerosis, motor neuron disease, migraine, tension-type headache, medication overuse headache, brain and nervous system cancers, and a residual category of other neurological disorders. We also analysed results based on the Socio-demographic Index (SDI), a compound measure of income per capita, education, and fertility, to identify patterns associated with development and how countries fare against expected outcomes relative to their level of development.\n\n\nFINDINGS\nNeurological disorders ranked as the leading cause group of DALYs in 2015 (250·7 [95% uncertainty interval (UI) 229·1 to 274·7] million, comprising 10·2% of global DALYs) and the second-leading cause group of deaths (9·4 [9·1 to 9·7] million], comprising 16·8% of global deaths). The most prevalent neurological disorders were tension-type headache (1505·9 [UI 1337·3 to 1681·6 million cases]), migraine (958·8 [872·1 to 1055·6] million), medication overuse headache (58·5 [50·8 to 67·4 million]), and Alzheimer's disease and other dementias (46·0 [40·2 to 52·7 million]). Between 1990 and 2015, the number of deaths from neurological disorders increased by 36·7%, and the number of DALYs by 7·4%. These increases occurred despite decreases in age-standardised rates of death and DALYs of 26·1% and 29·7%, respectively; stroke and communicable neurological disorders were responsible for most of these decreases. Communicable neurological disorders were the largest cause of DALYs in countries with low SDI. Stroke rates were highest at middle levels of SDI and lowest at the highest SDI. Most of the changes in DALY rates of neurological disorders with development were driven by changes in YLLs.\n\n\nINTERPRETATION\nNeurological disorders are an important cause of disability and death worldwide. Globally, the burden of neurological disorders has increased substantially over the past 25 years because of expanding population numbers and ageing, despite substantial decreases in mortality rates from stroke and communicable neurological disorders. The number of patients who will need care by clinicians with expertise in neurological conditions will continue to grow in coming decades. Policy makers and health-care providers should be aware of these trends to provide adequate services.\n\n\nFUNDING\nBill & Melinda Gates Foundation."
},
{
"pmid": "26578041",
"title": "Minimal clinically important difference on the Motor Examination part of MDS-UPDRS.",
"abstract": "BACKGROUND\nRecent studies increasingly utilize the Movement Disorders Society Sponsored Unified Parkinson's Disease Rating Scale (MDS-UPDRS). However, the minimal clinically important difference (MCID) has not been fully established for MDS-UPDRS yet.\n\n\nOBJECTIVE\nTo assess the MCID thresholds for MDS-UPDRS Motor Examination (Part III).\n\n\nMETHODS\n728 paired investigations of 260 patients were included. At each visit both MDS-UPDRS and Clinician-reported Global Impression-Improvement (CGI-I) scales were assessed. MDS-UPDRS Motor Examination (ME) score changes associated with CGI-I score 4 (no change) were compared with MDS-UPDRS ME score changes associated with CGI-I score 3 (minimal improvement) and CGI-I score 5 (minimal worsening). Both anchor- and distribution-based techniques were utilized to determine the magnitude of MCID.\n\n\nRESULTS\nThe MCID estimates for MDS-UPDRS ME were asymmetric: -3.25 points for detecting minimal, but clinically pertinent, improvement and 4.63 points for observing minimal, but clinically pertinent, worsening.\n\n\nCONCLUSIONS\nMCID is the smallest change of scores that are clinically meaningful to patients. These MCID estimates may allow the judgement of a numeric change in MDS-UPDRS ME on its clinical importance."
},
{
"pmid": "30826265",
"title": "Upper limb motor pre-clinical assessment in Parkinson's disease using machine learning.",
"abstract": "INTRODUCTION\nParkinson's disease (PD) is a common neurodegenerative disorder characterized by disabling motor and non-motor symptoms. For example, idiopathic hyposmia (IH), which is a reduced olfactory sensitivity, is typical in >95% of PD patients and is a preclinical marker for the pathology.\n\n\nMETHODS\nIn this work, a wearable inertial device, named SensHand V1, was used to acquire motion data from the upper limbs during the performance of six tasks selected by MDS-UPDRS III. Three groups of people were enrolled, including 30 healthy subjects, 30 IH people, and 30 PD patients. Forty-eight parameters per side were computed by spatiotemporal and frequency data analysis. A feature array was selected as the most significant to discriminate among the different classes both in two-group and three-group classification. Multiple analyses were performed comparing three supervised learning algorithms, Support Vector Machine (SVM), Random Forest (RF), and Naïve Bayes, on three different datasets.\n\n\nRESULTS\nExcellent results were obtained for healthy vs. patients classification (F-Measure 0.95 for RF and 0.97 for SVM), and good results were achieved by including subjects with hyposmia as a separate group (0.79 accuracy, 0.80 precision with RF) within a three-group classification. Overall, RF classifiers were the best approach for this application.\n\n\nCONCLUSION\nThe system is suitable to support an objective PD diagnosis. Further, combining motion analysis with a validated olfactory screening test, a two-step non-invasive, low-cost procedure can be defined to appropriately analyze people at risk for PD development, helping clinicians to identify also subtle changes in motor performance that characterize PD onset."
},
{
"pmid": "19103505",
"title": "Opening velocity, a novel parameter, for finger tapping test in patients with Parkinson's disease.",
"abstract": "OBJECTIVES\nA new system consisting of an accelerometer and touch sensor was developed to find objective parameters for the finger tapping (FT) test in patients with Parkinson's disease (PD).\n\n\nMETHODS\nWe recruited sixteen patients with PD and thirty-two age-matched healthy volunteers (HVs). By using this new system, various parameters related to velocity, amplitude, rhythm and number in the FT test were measured in patients with PD and examined in comparison with those of HVs on the basis of the Unified Parkinson's Disease Rating Scale (UPDRS) FT score.\n\n\nRESULTS\nThe new system allowed us to measure fourteen parameters of FT movement very easily, and a radar chart showed obvious differences in most of these parameters between HVs and patients with PD. Principal component analysis showed that fourteen parameters were classified into three components: (1) both mean and standard deviation (SD) of both amplitude and velocity, (2) number of FT for 60s and mean FT interval, and (3) SD of FT interval. The first (velocity- and amplitude-related parameters) and third (rhythm-related parameters) components contributed to discrimination of PD from HVs. Maximum opening velocity (MoV) was the best of these parameters because of its sensitivity and association with the UPDRS FT score.\n\n\nCONCLUSIONS\nA novel system for the FT test, which is compact, simple and efficient, has been developed. Velocity- and amplitude-related parameters were indicated to be valuable for evaluation of the FT test in patients with PD. In particular, we first propose that MoV is a novel marker for the FT test."
},
{
"pmid": "22231198",
"title": "Assessment of tremor activity in the Parkinson's disease using a set of wearable sensors.",
"abstract": "Tremor is the most common motor disorder of Parkinson's disease (PD) and consequently its detection plays a crucial role in the management and treatment of PD patients. The current diagnosis procedure is based on subject-dependent clinical assessment, which has a difficulty in capturing subtle tremor features. In this paper, an automated method for both resting and action/postural tremor assessment is proposed using a set of accelerometers mounted on different patient's body segments. The estimation of tremor type (resting/action postural) and severity is based on features extracted from the acquired signals and hidden Markov models. The method is evaluated using data collected from 23 subjects (18 PD patients and 5 control subjects). The obtained results verified that the proposed method successfully: 1) quantifies tremor severity with 87 % accuracy, 2) discriminates resting from postural tremor, and 3) discriminates tremor from other Parkinsonian motor symptoms during daily activities."
},
{
"pmid": "26302523",
"title": "A Smartphone-Based Tool for Assessing Parkinsonian Hand Tremor.",
"abstract": "The aim of this study is to propose a practical smartphone-based tool to accurately assess upper limb tremor in Parkinson's disease (PD) patients. The tool uses signals from the phone's accelerometer and gyroscope (as the phone is held or mounted on a subject's hand) to compute a set of metrics which can be used to quantify a patient's tremor symptoms. In a small-scale clinical study with 25 PD patients and 20 age-matched healthy volunteers, we combined our metrics with machine learning techniques to correctly classify 82% of the patients and 90% of the healthy volunteers, which is high compared to similar studies. The proposed method could be effective in assisting physicians in the clinic, or to remotely evaluate the patient's condition and communicate the results to the physician. Our tool is low cost, platform independent, noninvasive, and requires no expertise to use. It is also well matched to the standard clinical examination for PD and can keep the patient \"connected\" to his physician on a daily basis. Finally, it can facilitate the creation of anonymous profiles for PD patients, aiding further research on the effectiveness of medication or other overlooked aspects of patients' lives."
},
{
"pmid": "24235277",
"title": "Automatic identification and classification of freezing of gait episodes in Parkinson's disease patients.",
"abstract": "Alternation of walking pattern decreases quality of life and may result in falls and injuries. Freezing of gait (FOG) in Parkinson's disease (PD) patients occurs occasionally and intermittently, appearing in a random, inexplicable manner. In order to detect typical disturbances during walking, we designed an expert system for automatic classification of various gait patterns. The proposed method is based on processing of data obtained from an inertial sensor mounted on shank. The algorithm separates normal from abnormal gait using Pearson's correlation and describes each stride by duration, shank displacement, and spectral components. A rule-based data processing classifies strides as normal, short (short(+)) or very short (short(-)) strides, FOG with tremor (FOG(+)) or FOG with complete motor block (FOG(-)). The algorithm also distinguishes between straight and turning strides. In 12 PD patients, FOG(+) and FOG(-) were identified correctly in 100% of strides, while normal strides were recognized in 95% of cases. Short(+) and short(-) strides were identified in about 84% and 78%. Turning strides were correctly identified in 88% of cases. The proposed method may be used as an expert system for detailed stride classification, providing warning for severe FOG episodes and near-fall situations."
},
{
"pmid": "23195495",
"title": "Automatic detection of freezing of gait events in patients with Parkinson's disease.",
"abstract": "The aim of this study is to detect freezing of gait (FoG) events in patients suffering from Parkinson's disease (PD) using signals received from wearable sensors (six accelerometers and two gyroscopes) placed on the patients' body. For this purpose, an automated methodology has been developed which consists of four stages. In the first stage, missing values due to signal loss or degradation are replaced and then (second stage) low frequency components of the raw signal are removed. In the third stage, the entropy of the raw signal is calculated. Finally (fourth stage), four classification algorithms have been tested (Naïve Bayes, Random Forests, Decision Trees and Random Tree) in order to detect the FoG events. The methodology has been evaluated using several different configurations of sensors in order to conclude to the set of sensors which can produce optimal FoG episode detection. Signals recorded from five healthy subjects, five patients with PD who presented the symptom of FoG and six patients who suffered from PD but they do not present FoG events. The signals included 93 FoG events with 405.6s total duration. The results indicate that the proposed methodology is able to detect FoG events with 81.94% sensitivity, 98.74% specificity, 96.11% accuracy and 98.6% area under curve (AUC) using the signals from all sensors and the Random Forests classification algorithm."
},
{
"pmid": "19932995",
"title": "Accurate telemonitoring of Parkinson's disease progression by noninvasive speech tests.",
"abstract": "Tracking Parkinson's disease (PD) symptom progression often uses the unified Parkinson's disease rating scale (UPDRS) that requires the patient's presence in clinic, and time-consuming physical examinations by trained medical staff. Thus, symptom monitoring is costly and logistically inconvenient for patient and clinical staff alike, also hindering recruitment for future large-scale clinical trials. Here, for the first time, we demonstrate rapid, remote replication of UPDRS assessment with clinically useful accuracy (about 7.5 UPDRS points difference from the clinicians' estimates), using only simple, self-administered, and noninvasive speech tests. We characterize speech with signal processing algorithms, extracting clinically useful features of average PD progression. Subsequently, we select the most parsimonious model with a robust feature selection algorithm, and statistically map the selected subset of features to UPDRS using linear and nonlinear regression techniques that include classical least squares and nonparametric classification and regression trees. We verify our findings on the largest database of PD speech in existence (approximately 6000 recordings from 42 PD patients, recruited to a six-month, multicenter trial). These findings support the feasibility of frequent, remote, and accurate UPDRS tracking. This technology could play a key part in telemonitoring frameworks that enable large-scale clinical trials into novel PD treatments."
},
{
"pmid": "21335305",
"title": "Hilbert-Huang-based tremor removal to assess postural properties from accelerometers.",
"abstract": "Tremor is one of the symptoms of several disorders of the central and peripheral nervous system, such as Parkinson's disease (PD). The impairment of postural control is another symptom of PD. The conventional method of posture analysis uses force plates, but accelerometers can be a valid and reliable alternative. Both these measurement techniques are sensitive to tremor. Tremor affects postural measures and may thus lead to misleading results or interpretations. Linear low-pass filters (LPFs) are commonly employed for tremor removal. In this study, an alternative method, based on Hilbert-Huang transformation (HHT), is proposed. We examined 20 PD subjects, with and without tremor, and 20 control subjects. We compared the effectiveness of LPF and HHT-based filtering on a set of postural parameters extracted from acceleration signals. HHT has the advantage of providing a filter, which with no a priori knowledge, efficiently manages the nonlinear, nonstationary interference due to tremor, and beyond tremor, gives descriptive measures of postural function. Some of the differences found using LPF can instead be ascribed to inefficient noise/tremor suppression. Filter order and cutoff frequency are indeed critical when subjects exhibit a tremorous behavior, in which case LPF parameters should be chosen very carefully."
},
{
"pmid": "24661464",
"title": "Clinician versus machine: reliability and responsiveness of motor endpoints in Parkinson's disease.",
"abstract": "BACKGROUND\nEnhancing the reliability and responsiveness of motor assessments required to demonstrate therapeutic efficacy is a priority for Parkinson's disease (PD) clinical trials. The objective of this study is to determine the reliability and responsiveness of a portable kinematic system for quantifying PD motor deficits as compared to clinical ratings.\n\n\nMETHODS\nEighteen PD patients with subthalamic nucleus deep-brain stimulation (DBS) performed three tasks for evaluating resting tremor, postural tremor, and finger-tapping speed, amplitude, and rhythm while wearing a wireless motion-sensor unit (Kinesia) on the more-affected index finger. These tasks were repeated three times with DBS turned off and at each of 10 different stimulation amplitudes chosen to yield small changes in treatment response. Each task performance was video-recorded for subsequent clinician rating in blinded, randomized order. Test-retest reliability was calculated as intraclass correlation (ICC) and sensitivity was calculated as minimal detectable change (MDC) for each DBS amplitude.\n\n\nRESULTS\nICCs for Kinesia were significantly higher than those for clinician ratings of finger-tapping speed (p < 0.0001), amplitude (p < 0.0001), and rhythm (p < 0.05), but were not significantly different for evaluations of resting or postural tremor. Similarly, Kinesia scores yielded a lower MDC as compared with clinician scores across all finger-tapping subscores (p < 0.0001), but did not differ significantly for resting and postural tremor.\n\n\nCONCLUSIONS\nThe Kinesia portable kinematic system can provide greater test-retest reliability and sensitivity to change than conventional clinical ratings for measuring bradykinesia, hypokinesia, and dysrhythmia in PD patients."
},
{
"pmid": "17278588",
"title": "Quantification of tremor and bradykinesia in Parkinson's disease using a novel ambulatory monitoring system.",
"abstract": "An ambulatory system for quantification of tremor and bradykinesia in patients with Parkinson's disease (PD) is presented. To record movements of the upper extremities, a sensing unit, which included miniature gyroscopes, was fixed to each of the forearms. An algorithm to detect and quantify tremor and another algorithm to quantify bradykinesia have been proposed and validated. Two clinical studies have been performed. In the first study, 10 PD patients and 10 control subjects participated in a 45-min protocol of 17 typical daily activities. The algorithm for tremor detection showed an overall sensitivity of 99.5% and a specificity of 94.2% in comparison to a video reference. The estimated tremor amplitude showed a high correlation to the Unified Parkinson's Disease Rating Scale (UPDRS) tremor subscore (e.g., r = 0.87, p < 0.001 for the roll axis). There was a high and significant correlation between the estimated bradykinesia related parameters estimated for the whole period of measurement and the respective UPDRS subscore (e.g., r = -0.83, p < 0.001 for the roll axis). In the second study, movements of the upper extremities of 11 PD patients were recorded for periods of 3-5 hr. The patients were moving freely during the measurements. The effects of the selection of the window size used to calculate tremor and bradykinesia related parameters on the correlation between UPDRS and these parameters were studied. By selecting a window similar to the period of the first study, similar correlations were obtained. Moreover, one of the bradykinesia related parameters showed significant correlation (r = -0.74, p < 0.01) to UPDRS with window sizes as short as 5 min. Our study provides evidence that objective, accurate and simultaneous assessment of tremor and bradykinesia can be achieved in free moving PD patients during their daily activities."
},
{
"pmid": "23939408",
"title": "Automated assessment of bradykinesia and dyskinesia in Parkinson's disease.",
"abstract": "There is a need for objective measures of dyskinesia and bradykinesia of Parkinson's disease (PD) that are continuous throughout the day and related to levodopa dosing. The output of an algorithm that calculates dyskinesia and bradykinesia scores every two minutes over 10 days (PKG: Global Kinetics Corporation) was compared with conventional rating scales for PD in PD subjects. The algorithm recognises bradykinesia as movements made with lower acceleration and amplitude and with longer intervals between movement. Similarly the algorithm recognises dyskinesia as having movements of normal amplitude and acceleration but with shorter periods without movement. The distribution of the bradykinesia and dyskinesia scores from PD subjects differed from that of normal subjects. The algorithm predicted the clinical dyskinesia rating scale AIMS with a 95% margin of error of 3.2 units compared with the inter-rater 95% limits of agreement from 3 neurologists of -3.4 to +4.3 units. Similarly the algorithm predicted the UPDRS III score with a margin of error similar to the inter-rater limits of agreement. Improvement in scores in response to changes in medication could be assessed statistically in individual patients. This algorithm provides objective, continuous and automated assessment of the clinical features of bradykinesia and dyskinesia in PD."
},
{
"pmid": "26179817",
"title": "Dyskinesia detection and monitoring by a single sensor in patients with Parkinson's disease.",
"abstract": "OBJECTIVE\nIn current clinical practice, assessment of levodopa-induced dyskinesias (LIDs) in Parkinson's disease (PD) is based on semiquantitative scales or patients' diaries. We aimed to assess the feasibility, clinical validity, and usability of a waist-worn inertial sensor for discriminating between LIDs and physiological sway in both supervised and unsupervised settings.\n\n\nMETHODS\nForty-six PD patients on L-dopa therapy, 18 de novo PD patients, and 18 healthy controls were enrolled. Patients underwent clinical assessment of motor signs and dyskinesias and kinetic-dynamic L-dopa monitoring, tracked by serial measurements of plasma drug concentrations and motor and postural tests.\n\n\nRESULTS\nA subset of features was selected, which showed excellent reliability. Sensitivity and specificity of the selected features for dyskinesia recognition were assessed in both supervised and unsupervised settings with an accuracy of 95% and 86%, respectively.\n\n\nCONCLUSIONS\nOur preliminary findings suggest that it is feasible to design a reliable sensor-based application for dyskinesia monitoring at home."
},
{
"pmid": "18074362",
"title": "Validity of spiral analysis in early Parkinson's disease.",
"abstract": "Spiral analysis is an objective, easy to administer noninvasive test that has been proposed to measure motor dysfunction in Parkinson disease (PD). We compared overall Unified Parkinson Disease Rating Scale Part III scores to selected indices derived from spiral analysis in seventy-four patients with early PD (mean duration of disease 2.4 +/- 1.7 years, mean age 61.5 +/- 9.7 years). Of the spiral indices, degree of severity, first order zero crossing, second order smoothness, and mean speed were best correlated with the total motor Unified Parkinson's Disease Rating Scale (UPDRS) score (all P < 0.01), and these indices showed a gradient across worsening tertiles of UPDRS (P < 0.05). Spiral indices also correlated with UPDRS ratings for the worst side and worst arm scores as well. The domains of bradykinesia, rigidity, and action tremor were correlated with first order crossing, second order smoothness, and mean speed, whereas rest tremor was most highly correlated with degree of severity. This suggests that spiral analysis may supplement motor assessment in PD, although further analysis of spiral metrics, a larger sample and longitudinal data should be evaluated."
},
{
"pmid": "20438759",
"title": "A new computer method for assessing drawing impairment in Parkinson's disease.",
"abstract": "A test battery, consisting of self-assessments and motor tests (tapping and spiral drawing tasks) was used on 9482 test occasions by 62 patients with advanced Parkinson's disease (PD) in a telemedicine setting. On each test occasion, three Archimedes spirals were traced. A new computer method, using wavelet transforms and principal component analysis processed the spiral drawings to generate a spiral score. In a web interface, two PD specialists rated drawing impairment in spiral drawings from three random test occasions per patient, using a modification of the Bain & Findley 10-category scale. A standardised manual rating was defined as the mean of the two raters' assessments. Bland-Altman analysis was used to evaluate agreement between the spiral score and the standardised manual rating. Another selection of spiral drawings was used to estimate the Spearman rank correlations between the raters (r=0.87), and between the mean rating and the spiral score (r=0.89). The 95% confidence interval for the method's prediction errors was +/-1.5 scale units, which was similar to the differences between the human raters. In conclusion, the method could assess PD-related drawing impairments well comparable to trained raters."
},
{
"pmid": "24556672",
"title": "Gait analysis methods: an overview of wearable and non-wearable systems, highlighting clinical applications.",
"abstract": "This article presents a review of the methods used in recognition and analysis of the human gait from three different approaches: image processing, floor sensors and sensors placed on the body. Progress in new technologies has led the development of a series of devices and techniques which allow for objective evaluation, making measurements more efficient and effective and providing specialists with reliable information. Firstly, an introduction of the key gait parameters and semi-subjective methods is presented. Secondly, technologies and studies on the different objective methods are reviewed. Finally, based on the latest research, the characteristics of each method are discussed. 40% of the reviewed articles published in late 2012 and 2013 were related to non-wearable systems, 37.5% presented inertial sensor-based systems, and the remaining 22.5% corresponded to other wearable systems. An increasing number of research works demonstrate that various parameters such as precision, conformability, usability or transportability have indicated that the portable systems based on body sensors are promising methods for gait analysis."
},
{
"pmid": "22438763",
"title": "Gait analysis using wearable sensors.",
"abstract": "Gait analysis using wearable sensors is an inexpensive, convenient, and efficient manner of providing useful information for multiple health-related applications. As a clinical tool applied in the rehabilitation and diagnosis of medical conditions and sport activities, gait analysis using wearable sensors shows great prospects. The current paper reviews available wearable sensors and ambulatory gait analysis methods based on the various wearable sensors. After an introduction of the gait phases, the principles and features of wearable sensors used in gait analysis are provided. The gait analysis methods based on wearable sensors is divided into gait kinematics, gait kinetics, and electromyography. Studies on the current methods are reviewed, and applications in sports, rehabilitation, and clinical diagnosis are summarized separately. With the development of sensor technology and the analysis method, gait analysis using wearable sensors is expected to play an increasingly important role in clinical applications."
},
{
"pmid": "28408157",
"title": "Microsoft Kinect can distinguish differences in over-ground gait between older persons with and without Parkinson's disease.",
"abstract": "Gait patterns differ between healthy elders and those with Parkinson's disease (PD). A simple, low-cost clinical tool that can evaluate kinematic differences between these populations would be invaluable diagnostically, since gait analysis in a clinical setting is impractical due to cost and technical expertise. This study investigated the between-group differences between the Kinect and a 3D movement analysis system (BTS) and reported the validity and reliability of the Kinect v2 sensor for gait analysis. Nineteen subjects participated, eleven without (C) and eight with PD (PD). Outcome measures included spatiotemporal parameters and kinematics. Ankle range of motion for C was significantly less during ankle swing compared to PD (p=0.04) for the Kinect. Both systems showed significant differences for stride length (BTS (C 1.24±0.16, PD=1.01±0.17, p=0.009), Kinect (C=1.24±0.17, PD=1.00±0.18, p=0.009)), gait velocity (BTS (C=1.06±0.14, PD=0.83±0.15, p=0.01), Kinect (C=1.06±0.15, PD=0.83±0.16, p=0.01)), and swing velocity (BTS (C=2.50±0.27, PD=2.12±0.36, p=0.02), Kinect (C=2.32±0.25, PD=1.95±0.31, p=0.01)) between groups. Agreement (RangeICC =0.93-0.99) and consistency (RangeICC =0.94-0.99) were excellent between systems for stride length, stance duration, swing duration, gait velocity, and swing velocity. The Kinect v2 was sensitive enough to detect between-group differences and consistently produced results similar to those of the BTS system."
},
{
"pmid": "26861323",
"title": "Validity of the Kinect for Gait Assessment: A Focused Review.",
"abstract": "Gait analysis may enhance clinical practice. However, its use is limited due to the need for expensive equipment which is not always available in clinical settings. Recent evidence suggests that Microsoft Kinect may provide a low cost gait analysis method. The purpose of this report is to critically evaluate the literature describing the concurrent validity of using the Kinect as a gait analysis instrument. An online search of PubMed, CINAHL, and ProQuest databases was performed. Included were studies in which walking was assessed with the Kinect and another gold standard device, and consisted of at least one numerical finding of spatiotemporal or kinematic measures. Our search identified 366 papers, from which 12 relevant studies were retrieved. The results demonstrate that the Kinect is valid only for some spatiotemporal gait parameters. Although the kinematic parameters measured by the Kinect followed the trend of the joint trajectories, they showed poor validity and large errors. In conclusion, the Kinect may have the potential to be used as a tool for measuring spatiotemporal aspects of gait, yet standardized methods should be established, and future examinations with both healthy subjects and clinical participants are required in order to integrate the Kinect as a clinical gait analysis tool."
},
{
"pmid": "26499251",
"title": "Motion tracking and gait feature estimation for recognising Parkinson's disease using MS Kinect.",
"abstract": "BACKGROUND\nAnalysis of gait features provides important information during the treatment of neurological disorders, including Parkinson's disease. It is also used to observe the effects of medication and rehabilitation. The methodology presented in this paper enables the detection of selected gait attributes by Microsoft (MS) Kinect image and depth sensors to track movements in three-dimensional space.\n\n\nMETHODS\nThe experimental part of the paper is devoted to the study of three sets of individuals: 18 patients with Parkinson's disease, 18 healthy aged-matched individuals, and 15 students. The methodological part of the paper includes the use of digital signal-processing methods for rejecting gross data-acquisition errors, segmenting video frames, and extracting gait features. The proposed algorithm describes methods for estimating the leg length, normalised average stride length (SL), and gait velocity (GV) of the individuals in the given sets using MS Kinect data.\n\n\nRESULTS\nThe main objective of this work involves the recognition of selected gait disorders in both the clinical and everyday settings. The results obtained include an evaluation of leg lengths, with a mean difference of 0.004 m in the complete set of 51 individuals studied, and of the gait features of patients with Parkinson's disease (SL: 0.38 m, GV: 0.61 m/s) and an age-matched reference set (SL: 0.54 m, GV: 0.81 m/s). Combining both features allowed for the use of neural networks to classify and evaluate the selectivity, specificity, and accuracy. The achieved accuracy was 97.2 %, which suggests the potential use of MS Kinect image and depth sensors for these applications.\n\n\nCONCLUSIONS\nDiscussion points include the possibility of using the MS Kinect sensors as inexpensive replacements for complex multi-camera systems and treadmill walking in gait-feature detection for the recognition of selected gait disorders."
},
{
"pmid": "26002604",
"title": "Accuracy of the Microsoft Kinect for measuring gait parameters during treadmill walking.",
"abstract": "The measurement of gait parameters normally requires motion tracking systems combined with force plates, which limits the measurement to laboratory settings. In some recent studies, the possibility of using the portable, low cost, and marker-less Microsoft Kinect sensor to measure gait parameters on over-ground walking has been examined. The current study further examined the accuracy level of the Kinect sensor for assessment of various gait parameters during treadmill walking under different walking speeds. Twenty healthy participants walked on the treadmill and their full body kinematics data were measured by a Kinect sensor and a motion tracking system, concurrently. Spatiotemporal gait parameters and knee and hip joint angles were extracted from the two devices and were compared. The results showed that the accuracy levels when using the Kinect sensor varied across the gait parameters. Average heel strike frame errors were 0.18 and 0.30 frames for the right and left foot, respectively, while average toe off frame errors were -2.25 and -2.61 frames, respectively, across all participants and all walking speeds. The temporal gait parameters based purely on heel strike have less error than the temporal gait parameters based on toe off. The Kinect sensor can follow the trend of the joint trajectories for the knee and hip joints, though there was substantial error in magnitudes. The walking speed was also found to significantly affect the identified timing of toe off. The results of the study suggest that the Kinect sensor may be used as an alternative device to measure some gait parameters for treadmill walking, depending on the desired accuracy level."
},
{
"pmid": "29297304",
"title": "Novel human microbe-disease association prediction using network consistency projection.",
"abstract": "BACKGROUND\nAccumulating biological and clinical reports have indicated that imbalance of microbial community is closely associated with occurrence and development of various complex human diseases. Identifying potential microbe-disease associations, which could provide better understanding of disease pathology and further boost disease diagnostic and prognostic, has attracted more and more attention. However, hardly any computational models have been developed for large scale microbe-disease association prediction.\n\n\nRESULTS\nIn this article, based on the assumption that microbes with similar functions tend to share similar association or non-association patterns with similar diseases and vice versa, we proposed the model of Network Consistency Projection for Human Microbe-Disease Association prediction (NCPHMDA) by integrating known microbe-disease associations and Gaussian interaction profile kernel similarity for microbes and diseases. NCPHMDA yielded outstanding AUCs of 0.9039, 0.7953 and average AUC of 0.8918 in global leave-one-out cross validation, local leave-one-out cross validation and 5-fold cross validation, respectively. Furthermore, colon cancer, asthma and type 2 diabetes were taken as independent case studies, where 9, 9 and 8 out of the top 10 predicted microbes were successfully confirmed by recent published clinical literature.\n\n\nCONCLUSION\nNCPHMDA is a non-parametric universal network-based method which can simultaneously predict associated microbes for investigated diseases but does not require negative samples. It is anticipated that NCPHMDA would become an effective biological resource for clinical experimental guidance."
},
{
"pmid": "30323198",
"title": "Recurrent Neural Network for Predicting Transcription Factor Binding Sites.",
"abstract": "It is well known that DNA sequences contain a certain number of transcription factor (TF) binding sites, and only part of them are identified through biological experiments. However, these experiments are expensive and time-consuming. To overcome these problems, some computational methods, based on k-mer features or convolutional neural networks, have been proposed to identify TF binding sites from DNA sequences. Although these methods have good performance, the context information that relates to TF binding sites is still lacking. Research indicates that standard recurrent neural networks (RNN) and their variants have better performance on time-series data compared with other models. In this study, we propose a model, named KEGRU, to identify TF binding sites by combining a bidirectional Gated Recurrent Unit (GRU) network with k-mer embedding. Firstly, DNA sequences are divided into k-mer sequences with a specified length and stride window. Then, we treat each k-mer as a word and pre-train a word representation model through the word2vec algorithm. Thirdly, we construct a deep bidirectional GRU model for feature learning and classification. Experimental results have shown that our method has better performance compared with some state-of-the-art methods. Additional experiments about the embedding strategy show that k-mer embedding is helpful for enhancing model performance. The robustness of KEGRU is proved by experiments with different k-mer lengths, stride windows and embedding vector dimensions."
},
{
"pmid": "21641850",
"title": "An exploration of familial associations in spinal posture defined using a clinical grouping method.",
"abstract": "The primary aim of this study was to examine familial associations in spinal posture, defined using postural angles and a clinical classification method. A secondary aim was to investigate the reliability of clinical postural classification. Postural angles were calculated from sagittal photographs, while two experienced clinicians made use of standing sagittal images to classify participants into one of four postural groups (sway, flat, hyperlordotic, neutral). Parent-child associations in postural angles and postural groups were evaluated using Pearson's correlation and Fisher's exact test, respectively. Inter-rater reliability was expressed using percentage agreement and Kappa coefficients (K). Daughters whose father or mother had a hyperlordotic posture were 4.0 or 3.5 times, respectively, more likely to have a hyperlordotic posture than daughters whose parents did not have a hyperlordotic posture. Participants in the hyperlordotic group had a significantly higher body mass index than members of the other postural groups (p < 0.03). Percentage agreement between clinicians was 63.5% (K = 0.48). These results provide preliminary evidence of a familial association in the hyperlordotic posture and support the use of postural classification."
},
{
"pmid": "27571158",
"title": "Pisa syndrome in Parkinson's disease and parkinsonism: clinical features, pathophysiology, and treatment.",
"abstract": "Pisa syndrome is defined as a reversible lateral bending of the trunk with a tendency to lean to one side. It is a frequent and often disabling complication of Parkinson's disease, and has also been described in several atypical forms of parkinsonism and in neurodegenerative and psychiatric disorders after drug exposure and surgical procedures. Although no consistent diagnostic criteria for Pisa syndrome are available, most investigations have adopted an arbitrary cutoff of at least 10° of lateral flexion for the diagnosis of the syndrome. Pathophysiological mechanisms underlying Pisa syndrome have not been fully explained. One hypothesis emphasises central mechanisms, whereby Pisa syndrome is thought to be caused by alterations in sensory-motor integration pathways; by contrast, a peripheral hypothesis emphasises the role of anatomical changes in the musculoskeletal system. Furthermore, several drugs are reported to induce Pisa syndrome, including antiparkinsonian drugs. As Pisa syndrome might be reversible, clinicians need to be able to recognise this condition early to enable prompt management. Nevertheless, further research is needed to determine optimum treatment strategies."
},
{
"pmid": "9214783",
"title": "Fetal ECG extraction from single-channel maternal ECG using singular value decomposition.",
"abstract": "The extraction of fetal electrocardiogram (ECG) from the composite maternal ECG signal obtained from the abdominal lead is discussed. The proposed method employs singular value decomposition (SVD) and analysis based on the singular value ratio (SVR) spectrum. The maternal ECG (M-ECG) and the fetal ECG (F-ECG) components are identified in terms of the SV-decomposed modes of the appropriately configured data matrices, and elimination of the M-ECG and determination of F-ECG are achieved through selective separation of the SV-decomposed components. The unique feature of the method is that only one composite maternal ECG signal is required to determine the F-ECG component. The method is numerically robust and computationally efficient."
},
{
"pmid": "19054734",
"title": "A constructive hybrid structure optimization methodology for radial basis probabilistic neural networks.",
"abstract": "In this paper, a novel heuristic structure optimization methodology for radial basis probabilistic neural networks (RBPNNs) is proposed. First, a minimum volume covering hyperspheres (MVCH) algorithm is proposed to select the initial hidden-layer centers of the RBPNN, and then the recursive orthogonal least square algorithm (ROLSA) combined with the particle swarm optimization (PSO) algorithm is adopted to further optimize the initial structure of the RBPNN. The proposed algorithms are evaluated through eight benchmark classification problems and two real-world application problems, a plant species identification task involving 50 plant species and a palmprint recognition task. Experimental results show that our proposed algorithm is feasible and efficient for the structure optimization of the RBPNN. The RBPNN achieves higher recognition rates and better classification efficiency than multilayer perceptron networks (MLPNs) and radial basis function neural networks (RBFNNs) in both tasks. Moreover, the experimental results illustrated that the generalization performance of the optimized RBPNN in the plant species identification task was markedly better than that of the optimized RBFNN."
},
{
"pmid": "29858068",
"title": "A Deep Learning Framework for Robust and Accurate Prediction of ncRNA-Protein Interactions Using Evolutionary Information.",
"abstract": "The interactions between non-coding RNAs (ncRNAs) and proteins play an important role in many biological processes, and their biological functions are primarily achieved by binding with a variety of proteins. High-throughput biological techniques are used to identify protein molecules bound with specific ncRNA, but they are usually expensive and time consuming. Deep learning provides a powerful solution to computationally predict RNA-protein interactions. In this work, we propose the RPI-SAN model by using the deep-learning stacked auto-encoder network to mine the hidden high-level features from RNA and protein sequences and feed them into a random forest (RF) model to predict ncRNA binding proteins. Stacked assembling is further used to improve the accuracy of the proposed method. Four benchmark datasets, including RPI2241, RPI488, RPI1807, and NPInter v2.0, were employed for the unbiased evaluation of five established prediction tools: RPI-Pred, IPMiner, RPISeq-RF, lncPro, and RPI-SAN. The experimental results show that our RPI-SAN model achieves much better performance than other methods, with accuracies of 90.77%, 89.7%, 96.1%, and 99.33%, respectively. It is anticipated that RPI-SAN can be used as an effective computational tool for future biomedical researches and can accurately predict the potential ncRNA-protein interacted pairs, which provides reliable guidance for biological research."
}
] |
BMC Medical Informatics and Decision Making | 31830960 | PMC6907113 | 10.1186/s12911-019-0986-6 | An adaptive term proximity based rocchio's model for clinical decision support retrieval | Background: In order to better help doctors make decisions in the clinical setting, research is necessary to connect electronic health records (EHR) with the biomedical literature. Pseudo Relevance Feedback (PRF) is a classical query modification technique that has been shown to be effective in many retrieval models and is thus suitable for handling the terse language and clinical jargon in EHRs. Previous work has introduced a set of constraints (axioms) for the traditional PRF model. However, most methods do not jointly consider two factors in the feedback documents: the importance degree of a candidate term and the co-occurrence relationship between a candidate term and a query term. Intuitively, terms that have a higher co-occurrence degree with a query term are more likely to be related to the query topic. Methods: In this paper, we incorporate the original HAL model into Rocchio's model and propose a new concept of term proximity feedback weight. A HAL-based Rocchio's model for query expansion, called HRoc, is proposed. Meanwhile, we design three normalization methods to better incorporate proximity information into query expansion. Finally, we introduce an adaptive parameter to replace the fixed sliding-window length of the HAL model, so that the window size can be selected according to document length. Results: Based on the 2016 TREC Clinical Decision Support dataset, experimental results demonstrate that the proposed HRoc and HRoc_AP models are superior to other advanced models, such as the PRoc2 and TF-PRF methods, on various evaluation metrics. Compared with the PRoc2 and TF-PRF models, the MAP of our model increases by 8.5% and 12.24% respectively, while the F1 score of our model increases by 7.86% and 9.88% respectively. Conclusions: The proposed HRoc model can effectively enhance the precision and recall of information retrieval and achieves more precise results than other models. Furthermore, after introducing the self-adaptive parameter, the advanced HRoc_AP model uses fewer hyper-parameters than other models while achieving equivalent performance, which greatly improves the efficiency and applicability of the model and thus helps clinicians retrieve clinical support documents effectively. | The CDS track complements the previous TREC tasks inspired by biomedicine [1], specifically the genomics and medical records tracks. The CDS track has been heavily inspired by the TREC genomics [19] and medical records [20] tracks and the medical case-based retrieval track of Image-CLEF [21]. All of them have shown great interest in medical ad-hoc retrieval. Since there is no reusable set of medical records, the works in [22, 23] proposed that real medical records be represented in an idealized form as short case reports. For a given case report, participants then retrieve full-text biomedical articles and answer questions related to several types of clinical information needs. The 2016 CDS track focuses on query expansion modeling for topics drawn from actual patient notes [24–26]. In the information retrieval (IR) process, original queries may lack some important term information.
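For reference, the classical Rocchio feedback update that the PRF methods discussed below build on can be written as follows. This is the standard textbook formulation (with alpha, beta and gamma as tuning weights and D_r, D_nr denoting the relevant and non-relevant feedback document sets); it is given here only as background and is not a formula taken from this paper:

$$\vec{q}_{new} = \alpha\,\vec{q}_{orig} + \frac{\beta}{|D_r|}\sum_{\vec{d}\in D_r}\vec{d} - \frac{\gamma}{|D_{nr}|}\sum_{\vec{d}\in D_{nr}}\vec{d}$$

In pseudo relevance feedback, the top-ranked documents of the initial retrieval are simply assumed to be relevant, so the non-relevant term is usually dropped and only the positive feedback component is used.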
The PRF method is a common and effective technique for achieving better retrieval performance [2, 7, 8, 27], in which the semantic relationship between the added terms and the original query terms is considered, including the relations defined in Rocchio's model, and it brings better results. Subsequently, many other relevance feedback techniques and algorithms were proposed, and most of them were derived under the Rocchio framework. For example, a famous and successful automatic PRF algorithm was proposed in the Okapi system by Robertson et al. [28]. A proximity-based feedback framework (called PRoc) was proposed by Miao et al., which includes different proximity measures to estimate the correlation and importance of candidate terms [29]. Ye and Huang proposed a unified model (TF-PRF) to capture the local saliency of related candidate terms in feedback documents [30]. These two models are strong baselines and are used for comparison in our experiments. In addition, many other competitive approaches have obtained significant improvements in retrieval effectiveness [4, 31]. Since they are less related to our research methods, we do not introduce them in detail. Recently, plenty of work has studied how to integrate term proximity and other relationships into existing IR models. In [32], the authors introduced a pseudo term into the Dirichlet language model, where the approximate centrality of query terms is used as a parameter. Lv and Zhao integrated location and proximity information into the language model from a second perspective [33]. These relations play an important role in the IR field. Mbarek et al. have also obtained significant improvements in retrieval effectiveness [34]. Rasolofo et al. used proximity measurement in combination with the Okapi probabilistic model [17]. Peng et al. incorporated term dependency into the DFR framework [35]. Metzler et al. developed a general and formal framework for modeling term dependency through Markov random fields, together with a new training method that directly maximizes mean average precision rather than the likelihood of the training data [36]. Zhao et al. used triangle kernel functions in information retrieval applications [37]. In this paper, we propose three HAL-based co-occurrence PRF models, in which we integrate the proximity-based weight information of a term into the traditional PRF model, the Rocchio model. In our method, we estimate the weights of candidate expansion terms by considering the distance between each candidate expansion term and the query terms. In addition, we introduce three normalization methods for the new concept of proximity-based term weighting, as well as an adaptive function that makes the D value adjust dynamically according to the length of the document. | [
"27219127"
] | [
{
"pmid": "27219127",
"title": "MIMIC-III, a freely accessible critical care database.",
"abstract": "MIMIC-III ('Medical Information Mart for Intensive Care') is a large, single-center database comprising information relating to patients admitted to critical care units at a large tertiary care hospital. Data includes vital signs, medications, laboratory measurements, observations and notes charted by care providers, fluid balance, procedure codes, diagnostic codes, imaging reports, hospital length of stay, survival data, and more. The database supports applications including academic and industrial research, quality improvement initiatives, and higher education coursework."
}
] |
International Journal of Health Geographics | 31829212 | PMC6907256 | 10.1186/s12942-019-0193-9 | Comparative analysis of computer-vision and BLE technology based indoor navigation systems for people with visual impairments | Background: A considerable number of indoor navigation systems have been proposed to inform people with visual impairments (VI) about their surroundings. These systems leverage several technologies, such as computer vision, Bluetooth low energy (BLE), and other techniques, to estimate the position of a user in indoor areas. Computer-vision based systems use several techniques, including matching pictures, classifying captured images, and recognizing visual objects or visual markers. BLE-based systems utilize BLE beacons attached in the indoor areas as the source of the radio frequency signal used to localize the position of the user. Methods: In this paper, we examine the performance and usability of two computer-vision based systems and a BLE-based system. The first system, called CamNav, uses a trained deep learning model to recognize locations, and the second system, called QRNav, utilizes visual markers (QR codes) to determine locations. A field test with 10 blindfolded users has been conducted using the three navigation systems. Results: The results obtained from the navigation experiments and the feedback from the blindfolded users show that the QRNav and CamNav systems are more efficient than the BLE-based system in terms of accuracy and usability. The error that occurred in the BLE-based application is more than 30% compared to the computer-vision based systems, CamNav and QRNav. Conclusions: The developed navigation systems are able to provide reliable assistance to the participants during real-time experiments. Some of the participants needed minimal external assistance while moving through the junctions in the corridor areas. Computer vision technology demonstrated its superiority over BLE technology in assistive systems for people with visual impairments. | Related work: A considerable number of assistive systems designed to augment visually impaired people with indoor navigation services have been developed in the last decades. These systems utilize different types of technologies for guiding people with VI in indoor areas. Since indoor positioning is an essential task in navigation, in this section we first discuss the various technologies and approaches utilized for positioning the user in indoor areas. Later, we discuss and compare various existing navigation systems developed for people with VI. Overview of indoor positioning approaches: Due to the inaccuracy of traditional GPS-based approaches in indoor areas, high-sensitivity GPS and GPS pseudolites [13] have been utilized for positioning the user in indoor areas. High-sensitivity GPS and GPS pseudolites displayed acceptable accuracy in indoor positioning, but their implementation cost is high. Apart from GPS-based approaches, various technologies have been leveraged for the development of the positioning module in indoor navigation systems. Figure 1 illustrates the hierarchical classification of common indoor positioning approaches utilized in indoor navigation systems. Fig. 1: Hierarchical classification of indoor positioning approaches
Computer vision-based approaches make use of traditional cameras, omnidirectional cameras, 3D cameras, or built-in smartphone cameras to extract visual imagery from indoor environments. Diverse image processing algorithms, such as the Scale Invariant Feature Transform (SIFT) [14], Speeded Up Robust Features (SURF) [15], and Gist features [16], have been utilized for extracting features from the captured imagery, followed by matching against the query images. Together with these feature extraction methods, conventional vision-based positioning approaches utilize clustering and matching algorithms. In addition to the conventional approaches, deep learning-based computer vision solutions have been developed in the last 5 years. Deep learning models are composed of multiple processing layers that learn features of the data without an explicit feature engineering process [17], which has made deep learning-based approaches stand out among object detection and classification methods. Apart from identifying indoor locations by matching images, egomotion-based position estimation methods have also been employed in computer-vision based positioning approaches [18]. Egomotion estimation is the technique of estimating the position of the camera with respect to its surrounding environment. Pedestrian dead reckoning (PDR) approaches estimate the position of the user based on their known past positions. PDR methods utilize data acquired from the accelerometer, gyroscope, and magnetometer to compute the position of the user. Traditional PDR algorithms compute the position of the user by integrating the step length, the number of steps traveled, and the heading angle of the user [19, 20]; a minimal form of this step update is sketched below. It has been observed that conventional PDR approaches suffer from position errors due to drift [21], the varying step lengths of users, and sensor bias. In order to compensate for the errors generated in traditional PDR approaches, most of the latest PDR-based systems combine other positioning technologies, such as BLE or Wi-Fi, with PDR or introduce novel sensor data fusion methodologies. RFID, Wi-Fi, Ultra-Wide Band (UWB), Bluetooth, and Visible Light Communication (VLC) are the popular communication-technology based approaches utilized for the indoor positioning task. RFID systems comprise an RFID reader and RFID tags pasted on objects. There exist two different types of RFID tags, active and passive. Passive tags do not require an external power supply, and many of the recent RFID-based systems prefer passive RFID tags over active tags. Time of arrival (TOA), time difference of arrival (TDOA), angle of arrival (AOA), and received signal strength (RSS) are the popular methods used in RFID-based systems for position estimation [22]. Indoor environments can contain different types of static objects, such as walls and shelves, which can cause non-line-of-sight scenarios. In this context, except for RSS-based position estimation, the rest of the methods fail to compute the position of the user with minimal errors. The popular RSS-based positioning approaches are trilateration and fingerprinting [23]. At present, most indoor areas are equipped with Wi-Fi routers that provide seamless internet access to individuals, private groups, or public groups. This existing Wi-Fi infrastructure can be utilized to localize the user and to provide navigation aid for the users.
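As a worked illustration of the PDR step update referred to above (a standard dead-reckoning formulation, not taken from any specific cited system), the position after step k+1 is obtained from the previous position, the estimated step length l_k, and the heading angle theta_k:

$$x_{k+1} = x_k + l_k\sin\theta_k,\qquad y_{k+1} = y_k + l_k\cos\theta_k$$

Because errors in l_k and theta_k accumulate with every step, drift and sensor bias grow over time, which is why the PDR-based systems mentioned above increasingly fuse BLE or Wi-Fi fixes or other sensor data.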
The Wi-Fi access points are used as the source for transmitting signals to the receiving device (a mobile phone or other small hardware), and the receiving device utilizes the received signal to estimate the position of the user. Despite the fact that Wi-Fi routers are costly compared to other RF signal transmitting devices, Wi-Fi based positioning methods have gained popularity over other methods in recent years because of the availability of Wi-Fi routers in indoor areas. Wi-Fi-based indoor positioning systems make use of RSS fingerprinting, triangulation, or trilateration methods for positioning [24]. Bluetooth based systems have displayed similar or better accuracy in indoor positioning compared with Wi-Fi based systems. They make use of Bluetooth low energy (BLE) beacons installed in indoor environments to track the positions of users using lateration, proximity sensing approaches, or RSSI fingerprinting [25]. In BLE systems, the RSSI fingerprinting method has displayed better positioning accuracy compared with all other methodologies [26]. In order to preserve the efficiency of BLE indoor positioning systems, the data from the BLE beacons should be collected within a range of 3 meters. In recent systems, a smartphone is mostly used as the receiver for both Bluetooth and Wi-Fi signals. Existing LED and fluorescent lamps in indoor areas can be utilized for developing low-cost indoor positioning solutions, as these LEDs and fluorescent lamps are becoming ubiquitous in indoor environments. Visible light communication (VLC) based indoor positioning methods use the light signals emitted by fluorescent lamps or LEDs to estimate the position of the user. A smartphone camera or a dedicated independent photo detector is used to detect and receive the light signals from the lamps. RSS and AOA are the most popular measuring approaches used in VLC based positioning methods [27]. UWB is also a radio technology, which utilizes very low energy levels for short-range, high-bandwidth communications. UWB based positioning systems can provide centimeter-level accuracy, which is far better than Wi-Fi based or Bluetooth based methods. UWB uses TOA, AOA, TDOA, and RSS based methodologies for position estimation [28]. Table 1 illustrates a comparison of indoor positioning approaches; a minimal sketch of RSS-based distance estimation and trilateration is given after the table.
Table 1: Comparison of indoor positioning approaches
Indoor positioning technology | Infrastructure | Hardware | Popular measurement methods | Popular techniques | Accuracy
Computer vision | Dedicated infrastructure not required | Camera or inbuilt camera of smartphone | Pattern recognition | Scene analysis | Low to medium
Motion detection | Dedicated infrastructure not required | Inertial sensor or inbuilt sensors of smartphone | Tracking | Dead reckoning | Medium
Wi-Fi | Utilizes existing infrastructure of building | Wi-Fi access points and smartphone | RSS | Fingerprinting and trilateration | Low to medium
Bluetooth | Dedicated infrastructure required | BLE beacons and smartphone | RSS and proximity | Fingerprinting and trilateration | Medium
RFID | Dedicated infrastructure required | RFID tags and RFID tag readers | RSS and proximity | Fingerprinting | Medium
VLC | Dedicated infrastructure not required | LED lights and photo detector | RSS and AOA | Trilateration and triangulation | Medium to high
UWB | Dedicated infrastructure required | UWB tags and UWB tag reader | TOA, TDOA | Trilateration | High
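The following is a minimal sketch of the RSS-based lateration idea discussed above: RSSI readings are converted to distances with the log-distance path-loss model, and the position is then estimated by least-squares trilateration. It is a generic illustration written in Python; the beacon coordinates, the transmit power at 1 m (tx_power_dbm) and the path-loss exponent (n) are assumed values, and none of the cited systems is claimed to use exactly this code.

```python
# Sketch only: log-distance path-loss + least-squares trilateration for Wi-Fi/BLE RSSI.
import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, n=2.5):
    """Log-distance path-loss model: d = 10 ** ((tx_power - rssi) / (10 * n))."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

def trilaterate(beacons, distances):
    """Least-squares 2-D position from >= 3 beacon positions [(x, y), ...] and ranges."""
    (x0, y0), d0 = beacons[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(beacons[1:], distances[1:]):
        # Subtracting the first circle equation linearises the system of circle equations.
        A.append([2 * (xi - x0), 2 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos  # estimated (x, y)

# Hypothetical example: three beacons placed along a corridor
beacons = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
rssi = [-62.0, -71.0, -69.0]
print(trilaterate(beacons, [rssi_to_distance(r) for r in rssi]))
```

Fingerprinting-based systems avoid the path-loss assumptions entirely by matching live RSSI vectors against a pre-recorded radio map, which is one reason they tend to be more robust in cluttered indoor spaces.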
Indoor navigation solutions for people with visual impairments: Computer vision is one of the popular technologies used for the development of assistive systems for people with visual impairments. Computer vision based systems adopt two types of approach, tag based and non-tag based, to provide safe navigation for people with VI in indoor environments. Tag based systems utilize visual markers or codes attached in the indoor areas together with tag or marker readers for guiding the user. Non-tag based systems use natural imagery of indoor areas, or imagery of static objects or texts found on doors or walls, to guide people with VI in indoor areas. Moreover, three-dimensional imagery is also utilized in the development of wayfinding or navigation systems for people with VI. Tian et al. [29] presented a navigation system for people with VI that is composed of text recognition and door detection modules. The text recognition module employs mean shift-based clustering for classifying the text, and Tesseract together with OmniPage optical character recognition (OCR) to identify the content of the text. The detection of doors is done by employing a Canny edge detector. An indoor localization method has been proposed in [30]; it is based on the integration of an edge detection mechanism with text recognition. A Canny edge detector [31] is used to spot the edges in captured images. However, the use of edge detection may fail in settings that have a limited number of edges, resulting in incorrect place recognition. An Android operating system based navigation system employs Google Glass to aid people with VI in navigating indoor areas [32]. It uses a Canny edge detector and the Hough line transform for the corner and object detection tasks. In order to estimate the distance from walls, a floor detection algorithm has been used. An indoor navigation system based on an image subtraction method for spotting objects has been proposed in [33]. The histogram back-projection algorithm is used for constructing the color histograms of detected objects, and the tracking of the user is achieved by utilizing the continuously adaptive mean shift (CAMShift) algorithm. A door detection method for helping people with VI to access unknown indoor areas was proposed in [34]. A miniature camera mounted on the head was used to acquire the scenes in front of the user, and a computer module was included to process the captured images and provide audio feedback to the user. The door detection is based on a "generic geometric door model" built on stable edge and corner features. A computer vision module for helping blind people to access indoor environments was developed in [35]. An "image optimization algorithm" and a "two-layer disparity image segmentation" were used to detect objects in indoor environments. The proposed approach examines the depth information at 1 to 2 meters to guarantee the safe walking of the users. Lee and Medioni proposed an RGB-D camera-based system [36] for indoor navigation. Sparse features and dense point clouds are used to estimate camera motion. In addition, a real-time corner-based motion estimator algorithm was employed to estimate the position and orientation of objects in the navigation path. ISANA [37] consists of a Google Tango mobile device and a smart cane with a keypad and two vibration motors to provide guided navigation for people with VI.
The Google tango device is included with an RGB-D camera, a wide-angle camera, and 9 axes inertial measurement unit (IMU). The RGB-D camera was used to capture the depth data to identify the obstacles in the navigation path. The position of the user is estimated by merging visual data along with data from the IMU. Along with voice feedback, ISANA will provide haptic feedback to the user about the path and obstacles in noisy environments. The service offered by Microsoft Kinect device is extended to develop the assistive navigation system for people with VI [38]. The infrared sensor of Kinect device was integrated with RGB camera to assist the blind user. RGB camera is utilized to extract the imageries and infrared sensor is employed for getting the depth information to estimate the distance from obstacles. Corner detection algorithm was used to detect the obstacles.A low cost assistive system for guiding blinds are proposed in [39]. The proposed system utilizes android phone and QR codes pasted in the floor to guide the user in indoor areas. ‘Zxing’ library was used for encoding and decoding the QR code. Zxing library for QR codes detection worked well under low light conditions. ‘Ebsar’ [40] utilizes a google glass, Android smartphone and QR codes attached to each location for guiding the visually impaired users. In addition to that, Ebsar utilizes the compass and accelerometer of the Android smartphone for tracking the movement of the user. The Ebsar provides instruction to the users in Arabic language. Gude et al. [41] developed an indoor navigation system for people with VI that utilizes bar code namely Semacode for identifying the location and guiding the user. The proposed system consists of two video cameras, one attached on the user’s cane to detect the tags in the ground and other one attached on the glass of the user to detect tags placed above the ground level. The system provides output to the user via a tactile Braille device mounted the cane.The digital signs (tags) based on Data Matrix 2-D bar codes are utilized to guide the people with VI [42]. Each tag encodes a 16-bit hexadecimal number. To provide robust segmentation of tags from the surroundings, the 2D matrix is embedded in a unique circle-and-square background. To read the digital signs, a tag reader which comprises a camera, lenses, IR illuminator, a computer on module was utilized. etc. of. To enhance the salience of the tags for image capture, tags were printed on 3M infrared (IR) retro-reflective material. Zeb et al. [43] used fiducial markers (AR tool kit markers) printed on a paper and attached on the ceiling of each room for guiding the people with VI. Since the markers are pasted on the ceiling, the user has to walk with a web camera facing towards the ceiling. Up on successful marker recognition, the audio associated with the recognized marker stored in the database will be played to the user. A wearable system for locating blinds utilized custom colored markers to estimate the location of the user [44]. The prototype of the system consists of a wearable glass with a camera mounted on it and a mobile phone. In order to improve the detection time of markers, markers are designed as the QR code included inside a color circle. Along with these markers, multiple micro ultrasonic sensors were included in the system to detect the obstacles in the path of the user and thereby ensuring their safety. Rahul Raj et al. [45] proposed an indoor navigation system using QR codes and smartphone. 
The QR code is augmented with two information. First one is the location information which provides latitude, longitude, and altitude. The second one is the web URL for downloading the floor map with respect to the obtained location information. The authors address that usage of a smartphone with a low-quality camera for capturing the image and fast movement of user can reduce the QR code detection rate.A smart handheld device [46] utilized visual codes attached to the indoor environment and data from smartphone sensors for assisting the blinds. The smartphone camera will capture the scenes in front of the user and search for visual codes in the images. Color pattern detection using HSV and YCbCr color space method were utilized for detecting the visual code or markers in the captured image. An Augmented Reality library called “ArUco is used as an encoding technique to construct the visual markers. Rituerto et al. [47] proposed a sign based indoor navigation system for people with visual impairments. The position of the user is estimated by combining data from inertial sensors of smartphone and detected markers. Moreover, the existing signs in indoor environments were also utilized for positioning the user. ArUco marker library was used to create the markers. The system will provide assistive information to the user via a text to speech module.Nowadays BLE beacons based systems are becoming popular for indoor positioning and navigation application due to its low cost, and easiness to deploy as well as integrate with mobile devices. BLE beacons based systems are used in many airports, railways stations around the world for navigation and wayfinding applications. In the last 5 years, assistive systems developed for people with VI also relied on BLE technology to guide users in indoor areas. To best our knowledge there are few BLE beacons based indoor navigation systems proposed for people with VI. Most of the systems just require a smartphone and an Android or iOS application to provide reliable navigation service to the user. Lateration, RSSI fingerprinting and proximity sensing methods are the commonly used approaches for localizing the user.NavCog [48] is a smartphone based navigation system for people with VI in indoor environment. The NavCog utilizes BLE beacons installed in the indoor environment and motion sensors of the smartphone to estimate the position of the user. RSSI fingerprint matching method was used for computing the position of user. Along with position estimation, the NavCog can provide information about the nearby point of interests, stairs, etc. to the users. StaNavi [49] is a similar system like NavCog, uses smartphone compass and BLE beacons to guide the people with VI. StaNavi was developed to operate in large railway stations. The BLE localization method utilized a proximity detection approach to compute the position of the user. A cloud-based server was also used in StaNavi to provide information about the navigation route. GuideBeacon [50] also uses smartphone compass and low-cost BLE beacons to assist the people with VI in indoor environment. GudieBeacon implemented a proximity detection algorithm to identify the nearest beacons and thereby estimate the position of the user. The system speaks out the navigation instructions using Google text to speech library. Along with audio feedback, the GuideBeacon provide haptic and tactile feedback to the user.Bilgi et al. [51] proposed a navigation system for people with VI and hearing impairments in indoor area. 
The proposed prototype consists of BLE beacons attached to the ceiling of indoor areas and a smartphone. The localization algorithm uses nearness to beacons, i.e., proximity detection, to compute the position of the user. Duarte et al. [52] proposed a system to guide people with VI through public indoor areas. The system, namely SmartNav, consists of a smartphone (Android application) and BLE beacons. The Google speech input API enables the user to give voice commands to the system, and the Android text-to-speech API is used for speech synthesis. SmartNav utilizes a multilateration approach to estimate the location of the user. Murata et al. [53] developed a smartphone-based localization system for blind users that utilizes a probabilistic localization algorithm to localize the user in a multi-storey building. The proposed algorithm uses data from the smartphone sensors and the RSS of BLE beacons. Moreover, they introduced novel methods to monitor the integrity of localization in real-world scenarios and to control the localization while travelling between floors using escalators or lifts. RSS fingerprinting-based localization in BLE beacon systems using a type-2 fuzzy logic method displayed better precision and accuracy compared to traditional RSS fingerprinting and other non-fuzzy methods such as proximity, trilateration, and centroid [54]. In addition to computer vision and BLE technology, RFID, Wi-Fi, and PDR technologies are widely utilized for the development of assistive wayfinding or navigation systems for people with VI. Moreover, hybrid systems that integrate more than one technology to guide blind people or people with VI have been proposed in recent years. Table 2 compares the discussed indoor navigation solutions for people with visual impairments.
Table 2. Comparison of discussed indoor positioning solutions for people with visual impairments
References | Technology | System | Techniques | Tested by | Test | Feedback | Accuracy | Remarks
Tian et al. [29] | Computer vision | Web camera and mini laptop | Text recognition and door detection | Blinds | Door detection and door signage recognition | Voice | Medium | (−) Motion blur and very large occlusions happen when subjects have sudden head movements
Lee and Medioni [36] | Computer vision | RGB-D camera, IMU, laptop | Camera motion estimation | Blinds and blindfolded | Pose estimation and mobility experiments | Tactile | Medium | (−) Inconsistency in maps
Manlises et al. [33] | Computer vision | Web camera and computer | CAMShift tracking | Blinds | Object recognition, navigation time | Voice | Medium | (−) Tested with 3 blind users only
Li et al. [37] | Computer vision | Tango mobile device | Obstacle detection, camera motion estimation by combining visual and inertial data | Blindfolded | Errors during navigation and navigation time | Voice and haptic | Good | Using a smart cane with the system reduced the errors
Kanwal et al. [38] | Computer vision | RGB camera and Kinect sensor | Corner detection using visual and inertial data | Blind and blindfolded | Obstacle avoidance and walking | Voice | Good | (−) Infrared sensor failed under strong sunlight conditions
Al-Khalifa et al. [40] | Computer vision and motion | Google Glass, Android smartphone, QR code | QR code recognition, IMU | Blinds | Errors during navigation and navigation time | Voice | Medium | (+) Easy to use
Legge et al. [42] | Computer vision | Digital tags, tag reader, smartphone | Digital tag recognition | Blinds and blindfolded | Tag detection, route finding | Voice | Medium | (+) System provided independent navigation
Zeb et al. [43] | Computer vision | Web cam, AR markers | AR marker recognition | Blinds | Normal walking | Voice | Medium | Low cost
Ahmetovic et al. [48] | BLE | Beacons and smartphone | Fingerprinting, IMU | Blinds | Events that hindered the navigation | Voice | Medium | (−) Cannot inform the users that they are going the wrong way
Kim et al. [49] | BLE | Beacons and smartphone | Proximity detection, IMU | Blinds | Navigation time, task completion, deviation during navigation | Tactile and voice | Good | (+) Test was carried out in a large area (busy railway station)
Cheraghi et al. [50] | BLE | Beacons and smartphone | Proximity detection, IMU | Blinds | Navigation time, navigation distance | Haptic and voice | Medium | Improvements are required to support varying pace of walking
Many of the navigation systems discussed in this section have adopted several evaluation strategies to show their performance, effectiveness, and usability in real time. Most of the systems considered navigation time as a parameter to show their efficiency. Some of the systems used the errors committed by the users during navigation to assess the effectiveness of the system. The most important and commonly used approach is conducting surveys and interviews with the people who participated in the evaluation of the system. The user feedback helps the authors to limit the usability issues of the proposed system.
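Several of the BLE systems above rely on proximity detection or lateration from RSSI. The following Python snippet is a minimal sketch of these two ideas, not taken from any of the cited systems: the beacon positions, transmit power (tx_power_dbm), and path-loss exponent (n) are assumed values, and a real deployment would calibrate them and smooth the raw RSSI readings.

```python
# Minimal sketch of BLE localization: proximity detection (nearest beacon) and
# a rough position estimate from RSSI using a log-distance path-loss model.
# Beacon coordinates and radio parameters below are illustrative assumptions.

BEACONS = {                      # beacon_id -> (x, y) position in metres (assumed map)
    "entrance": (0.0, 0.0),
    "corridor": (5.0, 0.0),
    "room_101": (5.0, 4.0),
}

def rssi_to_distance(rssi_dbm, tx_power_dbm=-59.0, n=2.0):
    """Log-distance path-loss model: d = 10 ** ((TxPower - RSSI) / (10 * n))."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * n))

def nearest_beacon(rssi_readings):
    """Proximity detection: the strongest (least negative) RSSI wins."""
    return max(rssi_readings, key=rssi_readings.get)

def estimate_position(rssi_readings):
    """Weighted-centroid estimate: beacons that appear closer get larger weights."""
    weights, xs, ys = [], [], []
    for beacon_id, rssi in rssi_readings.items():
        d = rssi_to_distance(rssi)
        w = 1.0 / max(d, 0.1)              # avoid division by zero at very short range
        x, y = BEACONS[beacon_id]
        weights.append(w)
        xs.append(w * x)
        ys.append(w * y)
    s = sum(weights)
    return (sum(xs) / s, sum(ys) / s)

readings = {"entrance": -78, "corridor": -63, "room_101": -70}   # example scan
print(nearest_beacon(readings))        # -> 'corridor'
print(estimate_position(readings))     # approximate (x, y) in metres
```

The weighted-centroid step stands in for full lateration here; fingerprinting approaches such as NavCog's instead match the observed RSSI vector against a pre-recorded radio map.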
"28692961",
"29513754",
"26017442",
"23630409",
"30774566",
"24116156"
] | [
{
"pmid": "28692961",
"title": "Places: A 10 Million Image Database for Scene Recognition.",
"abstract": "The rise of multi-million-item dataset initiatives has enabled data-hungry machine learning algorithms to reach near-human semantic classification performance at tasks such as visual object and scene recognition. Here we describe the Places Database, a repository of 10 million scene photographs, labeled with scene semantic categories, comprising a large and diverse list of the types of environments encountered in the world. Using the state-of-the-art Convolutional Neural Networks (CNNs), we provide scene classification CNNs (Places-CNNs) as baselines, that significantly outperform the previous approaches. Visualization of the CNNs trained on Places shows that object detectors emerge as an intermediate representation of scene classification. With its high-coverage and high-diversity of exemplars, the Places Database along with the Places-CNNs offer a novel resource to guide future progress on scene recognition problems."
},
{
"pmid": "29513754",
"title": "Using Bluetooth proximity sensing to determine where office workers spend time at work.",
"abstract": "BACKGROUND\nMost wearable devices that measure movement in workplaces cannot determine the context in which people spend time. This study examined the accuracy of Bluetooth sensing (10-second intervals) via the ActiGraph GT9X Link monitor to determine location in an office setting, using two simple, bespoke algorithms.\n\n\nMETHODS\nFor one work day (mean±SD 6.2±1.1 hours), 30 office workers (30% men, aged 38±11 years) simultaneously wore chest-mounted cameras (video recording) and Bluetooth-enabled monitors (initialised as receivers) on the wrist and thigh. Additional monitors (initialised as beacons) were placed in the entry, kitchen, photocopy room, corridors, and the wearer's office. Firstly, participant presence/absence at each location was predicted from the presence/absence of signals at that location (ignoring all other signals). Secondly, using the information gathered at multiple locations simultaneously, a simple heuristic model was used to predict at which location the participant was present. The Bluetooth-determined location for each algorithm was tested against the camera in terms of F-scores.\n\n\nRESULTS\nWhen considering locations individually, the accuracy obtained was excellent in the office (F-score = 0.98 and 0.97 for thigh and wrist positions) but poor in other locations (F-score = 0.04 to 0.36), stemming primarily from a high false positive rate. The multi-location algorithm exhibited high accuracy for the office location (F-score = 0.97 for both wear positions). It also improved the F-scores obtained in the remaining locations, but not always to levels indicating good accuracy (e.g., F-score for photocopy room ≈0.1 in both wear positions).\n\n\nCONCLUSIONS\nThe Bluetooth signalling function shows promise for determining where workers spend most of their time (i.e., their office). Placing beacons in multiple locations and using a rule-based decision model improved classification accuracy; however, for workplace locations visited infrequently or with considerable movement, accuracy was below desirable levels. Further development of algorithms is warranted."
},
{
"pmid": "26017442",
"title": "Deep learning.",
"abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech."
},
{
"pmid": "23630409",
"title": "Toward a Computer Vision-based Wayfinding Aid for Blind Persons to Access Unfamiliar Indoor Environments.",
"abstract": "Independent travel is a well known challenge for blind and visually impaired persons. In this paper, we propose a proof-of-concept computer vision-based wayfinding aid for blind people to independently access unfamiliar indoor environments. In order to find different rooms (e.g. an office, a lab, or a bathroom) and other building amenities (e.g. an exit or an elevator), we incorporate object detection with text recognition. First we develop a robust and efficient algorithm to detect doors, elevators, and cabinets based on their general geometric shape, by combining edges and corners. The algorithm is general enough to handle large intra-class variations of objects with different appearances among different indoor environments, as well as small inter-class differences between different objects such as doors and door-like cabinets. Next, in order to distinguish intra-class objects (e.g. an office door from a bathroom door), we extract and recognize text information associated with the detected objects. For text recognition, we first extract text regions from signs with multiple colors and possibly complex backgrounds, and then apply character localization and topological analysis to filter out background interference. The extracted text is recognized using off-the-shelf optical character recognition (OCR) software products. The object type, orientation, location, and text information are presented to the blind traveler as speech."
},
{
"pmid": "30774566",
"title": "Vision-based Mobile Indoor Assistive Navigation Aid for Blind People.",
"abstract": "This paper presents a new holistic vision-based mobile assistive navigation system to help blind and visually impaired people with indoor independent travel. The system detects dynamic obstacles and adjusts path planning in real-time to improve navigation safety. First, we develop an indoor map editor to parse geometric information from architectural models and generate a semantic map consisting of a global 2D traversable grid map layer and context-aware layers. By leveraging the visual positioning service (VPS) within the Google Tango device, we design a map alignment algorithm to bridge the visual area description file (ADF) and semantic map to achieve semantic localization. Using the on-board RGB-D camera, we develop an efficient obstacle detection and avoidance approach based on a time-stamped map Kalman filter (TSM-KF) algorithm. A multi-modal human-machine interface (HMI) is designed with speech-audio interaction and robust haptic interaction through an electronic SmartCane. Finally, field experiments by blindfolded and blind subjects demonstrate that the proposed system provides an effective tool to help blind individuals with indoor navigation and wayfinding."
},
{
"pmid": "24116156",
"title": "Indoor navigation by people with visual impairment using a digital sign system.",
"abstract": "There is a need for adaptive technology to enhance indoor wayfinding by visually-impaired people. To address this need, we have developed and tested a Digital Sign System. The hardware and software consist of digitally-encoded signs widely distributed throughout a building, a handheld sign-reader based on an infrared camera, image-processing software, and a talking digital map running on a mobile device. Four groups of subjects-blind, low vision, blindfolded sighted, and normally sighted controls-were evaluated on three navigation tasks. The results demonstrate that the technology can be used reliably in retrieving information from the signs during active mobility, in finding nearby points of interest, and following routes in a building from a starting location to a destination. The visually impaired subjects accurately and independently completed the navigation tasks, but took substantially longer than normally sighted controls. This fully functional prototype system demonstrates the feasibility of technology enabling independent indoor navigation by people with visual impairment."
}
] |
Scientific Reports | 31836728 | PMC6911101 | 10.1038/s41598-019-55320-6 | Unsupervised Pre-training of a Deep LSTM-based Stacked Autoencoder for Multivariate Time Series Forecasting Problems | Currently, most real-world time series datasets are multivariate and are rich in dynamical information of the underlying system. Such datasets are attracting much attention; therefore, the need for accurate modelling of such high-dimensional datasets is increasing. Recently, the deep architecture of the recurrent neural network (RNN) and its variant long short-term memory (LSTM) have been proven to be more accurate than traditional statistical methods in modelling time series data. Despite the reported advantages of the deep LSTM model, its performance in modelling multivariate time series (MTS) data has not been satisfactory, particularly when attempting to process highly non-linear and long-interval MTS datasets. The reason is that the supervised learning approach initializes the neurons randomly in such recurrent networks, disabling the neurons that ultimately must properly learn the latent features of the correlated variables included in the MTS dataset. In this paper, we propose a pre-trained LSTM-based stacked autoencoder (LSTM-SAE) approach in an unsupervised learning fashion to replace the random weight initialization strategy adopted in deep LSTM recurrent networks. For evaluation purposes, two different case studies that include real-world datasets are investigated, where the performance of the proposed approach compares favourably with the deep LSTM approach. In addition, the proposed approach outperforms several reference models investigating the same case studies. Overall, the experimental results clearly show that the unsupervised pre-training approach improves the performance of deep LSTM and leads to better and faster convergence than other models. | Related WorkIt is demonstrated that analysing UTS data is easy and common; however, analysing MTS data is complex due to the correlated signals involved29. The key challenge in MTS problems is to model such complex real-world data, as well as to learn the latent features automatically from the input correlated data3. For this reason, the machine learning community has afforded scant attention to the MTS forecasting problem versus much attention to forecasting problems using UTS data. This section shows an overview of state-of-the-art methods that adopt the unsupervised pre-training procedure to facilitate the overall learning process. Then, the section shows the recent achievements of state-of-the-art techniques that are implemented to solve the MTS forecasting application using same datasets utilized in this paper to assess the proposed model.State-of-the-art modelsIt is widely known that the conventional ANN does not provide a very good result when the function to be approximated is very complex, especially in regression problems30. There are various reasons for this, such as the neurons’ initialization or the overfitting or the function becoming stuck in local minima15. As such, different training approaches along with different network architectures for the conventional ANNs are introduced4,31,32. One of these approaches, as a trial towards a fine analysis for complex real-world data, is to develop robust features that are capable of capturing the relevant information from data. However, developing such domain-specific features for each task is expensive, time-consuming, and requires expertise in working with the data12. 
The alternative approach is to use an unsupervised feature learning strategy to learn the feature representation layers from unlabelled data, which was presented early on by Schmidhuber14,20. In fact, the latter approach has many advantages, such as the ability to learn from unlabelled data, which is far more available and abundant than labelled data. In addition, learning the features from unlabelled data in advance, which is called pre-training, is much better than learning them from hand-crafted features33,34.

Besides these advantages, the most important one is the ability to stack several feature representation layers to create deep architectures20, which are more capable of modelling the complex structures and correlated features included in MTS problems12,30. These unsupervised pre-training approaches alleviate the underfitting and overfitting problems that had restrained the modelling of complex neural systems for a period of time35. Most of the current unsupervised pre-training models are developed in a layer-wise fashion: simple models are trained first and are then stacked layer-by-layer to form a deep structure.

Hinton et al. developed a greedy layer-wise unsupervised learning algorithm for deep belief networks (DBNs), a generative model with many layers of hidden variables22. Hinton's approach consists of a stack of restricted Boltzmann machines (RBMs), where two consecutive learning steps are conducted: a pre-training step, which is an unsupervised learning step using the gradient of the network energy of the RBM, and a fine-tuning step, which is a supervised learning step that backpropagates the errors. Then, the learned feature activations of an RBM layer are used as the input layer to train the following RBM in the stack22.

Next, Bengio et al. presented a theoretical and experimental analysis of Hinton's approach, asserting that the greedy layer-wise pre-training approach helps to optimize the modelling of deep networks23. In addition, Bengio concluded that this approach can yield good generalization because it initializes the upper layers with better representations of relevant high-level abstractions. Moreover, Erhan et al. tried specifically to answer the following question: how does unsupervised pre-training work? They empirically showed the effectiveness of the pre-training step and how it guides the learning model towards basins of attraction of minima that support good generalization from the training dataset33. Furthermore, they showed evidence that the layer-wise pre-training procedure effectively acts as a regularizer. Sarikaya et al. applied Hinton's DBN model to the natural language understanding problem and compared it with three text classification algorithms, namely, support vector machines (SVM), boosting, and maximum entropy36. Using additional unlabelled data for DBN pre-training and combining DBN-based learned features with the original features provided significant gains for the DBN over the other models.

Le Paine et al. empirically investigated the impact of unsupervised pre-training in contrast to a number of recent advances, such as rectified linear units (ReLUs), data augmentation, dropout, and large labelled datasets34. Using CNNs, they tried to answer the question: when is unsupervised pre-training useful given these recent advances?
Their investigation, in an image recognition application, was based on three axes: first, developing an unsupervised method that incorporates ReLUs and recent unsupervised regularization techniques; second, analysing the benefits of unsupervised pre-training compared to data augmentation and dropout on a benchmark dataset while varying the ratio of unsupervised to supervised samples; and third, verifying their findings on benchmark datasets. They found that unsupervised pre-training is an effective way to improve performance, especially when the ratio of unsupervised to supervised samples is high34. Tang et al. presented a new pre-training approach based on knowledge transfer learning35. In contrast to the layer-wise approach, which trains the model components layer-by-layer, their approach trains the entire model as a whole. In their experiments on speech recognition, they could train complex RNNs as a student model using a weaker deep neural network (DNN) as a teacher model. Furthermore, their approach can be combined with layer-wise pre-training of a CNN to deliver additional gains.

Recently, Saikia et al. empirically analysed the effect of different unsupervised pre-training approaches in modelling regression problems using four different datasets30. They considered two methods separately, namely, DBN and stacked autoencoder (SA), and compared their performance with a standard ANN algorithm without pre-training. Liu et al. applied Hinton's model to stacked autoencoders (SAEs) to solve gearbox fault diagnosis37. The proposed method can directly extract salient features from frequency-domain signals and eliminates the exhaustive use of hand-crafted features. Meng et al. also used it to train a stacked denoising sparse autoencoder layer-by-layer38.

As we mentioned in this section, most of the previous work on unsupervised pre-training of NNs (or deep NNs) has focused on data compression20, dimensionality reduction20,27, classification20,28, and UTS forecasting20 problems. Importantly, time series forecasting with deep learning techniques is an interesting research area that needs to be studied as well19,26. Moreover, even the recent time series forecasting research in the literature has focused on UTS problems, and very few works are devoted to forecasting with MTS data. For example, Romeu et al. used a pre-trained autoencoder-based DNN for UTS forecasting of indoor temperature26. Kuremoto et al. used the earlier Hinton model for the unsupervised pre-training step and backpropagation for the fine-tuning step on an artificial UTS dataset39. Malhotra et al. extended an approach that uses a CNN with data from the audio and image domains to the time series domain and trained a multilayer RNN that can then be used as a pre-trained model to obtain representations for time series40. Their method overcomes the need for large amounts of training data in the problem domain. Wang et al. used Hinton's DBN model in a cyclic architecture instead of the single DBN model that was earlier presented by Hinton41. However, while they used MTS data in their experiments, their target was a classification problem and not a forecasting problem, which is our target in this paper. In addition, their model architecture is completely different from ours as presented here.

Overview of the reference models
Furthermore, given the complexity of MTS data described in the previous section, few works have been introduced to solve forecasting applications using MTS datasets.
Indeed, specific methodologies are required to handle such data, rather than traditional parametric methods, auto-regressive methods, and Gaussian models2,42. In the following, we briefly review previous contributions in the literature that address the same forecasting problems studied in this paper.

Wu et al. presented a hybrid prediction model that combines a natural computation algorithm, namely the artificial immune system (AIS), with regression trees (RT)43. The cells in the AIS represent the basic constituent elements of the model, and the RT forecasting sub-models are embedded in the AIS model to form the cells' pool. They applied their hybrid model to solve the bike sharing system problem, which we also address in this paper. Huang et al. developed a deep neural network model that integrates the CNN and LSTM architectures to solve the PM2.5 mass concentration forecasting problem44. In this simple combination, the CNN is used for feature extraction, and the LSTM is used to analyse the features extracted by the CNN and then predict the PM2.5 concentration of the next observation. The authors added a batch normalization step to improve the forecasting accuracy. The proposed hybrid approach outperformed the single CNN and LSTM models; a sketch of this kind of CNN-LSTM combination is given below.

Qi et al. proposed a hybrid model that integrates graph convolutional networks (GCN) and long short-term memory (LSTM) networks to model and forecast the spatio-temporal variation of PM2.5 mass concentrations45. The role of the GCN is to extract the spatial dependency between different stations, whereas the role of the LSTM is to capture the temporal dependency among observations at different times. In a subsequent section of our paper, we also treat the PM2.5 concentration problem. Cheng et al. developed three hybrid models that combine statistical learning approaches with machine learning approaches to forecast PM2.5 mass concentrations46. These models are wavelet-ANN, wavelet-ARIMA, and wavelet-SVM. Their experimental results showed that the proposed hybrid models can improve the prediction accuracy significantly compared with their single counterpart models. In addition, the wavelet-ARIMA model had the highest accuracy among the three hybrid models.

Last, Zhao et al. proposed a hybrid method that combines five modules to solve the PM2.5 mass concentration forecasting problem47. These five modules are the data preprocessing module, feature selection module, optimization module, forecasting module, and evaluation module. They used the complete ensemble empirical mode decomposition with adaptive noise and variational mode decomposition (CEEMDAN-VMD) to decompose, reconstruct, identify, and select the main features of the PM2.5 observations through the data preprocessing module. Then, they used the auto-correlation function (ACF) to extract the variables that have a relatively large correlation with the predictor, thereby selecting the input variables according to the order of the correlation coefficients. Next, the least squares support vector machine (LSSVM) is used to predict the future concentration values. Finally, the parameters of the LSSVM are optimized using the whale optimization algorithm (WOA).

It is clear that most of the aforementioned state-of-the-art methods used to solve the same MTS forecasting problems are hybrid methods that combine more than one model, in contrast to the model proposed in this paper, which is a standalone model.
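The snippet below is a minimal, illustrative Keras sketch of the CNN-LSTM combination described above (convolutional feature extraction followed by an LSTM that predicts the next PM2.5 value). It is not the configuration used by Huang et al.; the layer sizes, window length, and toy data are assumptions.

```python
# Minimal CNN-LSTM sketch: Conv1D extracts local patterns from a multivariate
# window, the LSTM models their temporal order, and a Dense head predicts the
# next value of the target series. Shapes and hyper-parameters are illustrative.
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, LSTM, Dense, BatchNormalization

n_steps, n_features = 24, 8                      # 24 past hours, 8 correlated variables
X = np.random.rand(500, n_steps, n_features)     # toy MTS windows
y = np.random.rand(500, 1)                       # next-step target (e.g., PM2.5)

model = Sequential([
    Conv1D(32, kernel_size=3, activation="relu", input_shape=(n_steps, n_features)),
    BatchNormalization(),                        # normalization step, as in the cited work
    MaxPooling1D(pool_size=2),
    LSTM(32),                                    # temporal modelling of the extracted features
    Dense(1),                                    # one-step-ahead forecast
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0))           # forecast for the first window
```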
In summary, the layer-wise architecture that was earlier presented by Hinton undoubtedly has a concrete theoretical foundation as well as empirical support. However, it is not easy to employ such a layer-wise structure to pre-train models without a clear multilayer structure35. In the following section, we propose a novel layer-wise structure for the LSTM-based autoencoder in a deep architecture fashion. We will show that the proposed approach outperforms our previous deep LSTM model as well as the other reported baseline approaches.
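To make the layer-wise idea concrete in the LSTM setting, the snippet below gives a minimal Keras sketch of greedy layer-wise pre-training of LSTM autoencoder blocks followed by supervised fine-tuning for one-step-ahead forecasting. It is an illustrative sketch under assumed layer sizes, epochs, and toy data, not the authors' exact LSTM-SAE implementation.

```python
# Greedy layer-wise pre-training of LSTM autoencoder blocks (unsupervised),
# then stacking the pre-trained encoders and fine-tuning on the forecasting
# target (supervised). All sizes, epochs, and data here are illustrative.
import numpy as np
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense, TimeDistributed

def pretrain_block(x, n_units, epochs=10):
    """Train one LSTM autoencoder block to reconstruct its own input sequence."""
    n_steps, n_feats = x.shape[1], x.shape[2]
    inp = Input(shape=(n_steps, n_feats))
    enc = LSTM(n_units, return_sequences=True)(inp)        # encoder hidden sequence
    dec = LSTM(n_units, return_sequences=True)(enc)        # decoder
    rec = TimeDistributed(Dense(n_feats))(dec)             # reconstruction of the input
    ae = Model(inp, rec)
    ae.compile(optimizer="adam", loss="mse")
    ae.fit(x, x, epochs=epochs, batch_size=64, verbose=0)  # unsupervised: target = input
    return Model(inp, enc)                                 # keep only the trained encoder

X = np.random.rand(1000, 24, 8)        # (samples, time steps, variables): toy MTS data
y = np.random.rand(1000, 1)            # next-step values of the target variable

# Pre-train each block on the representation produced by the previous block.
encoders, feats = [], X
for n_units in (64, 32):
    enc = pretrain_block(feats, n_units)
    encoders.append(enc)
    feats = enc.predict(feats, verbose=0)

# Fine-tuning: reuse the pre-trained encoder layers instead of random weights.
inp = Input(shape=X.shape[1:])
h = inp
for enc in encoders:
    h = enc.layers[-1](h)              # the trained LSTM layer of each encoder block
h = LSTM(16)(h)                        # summarize the final hidden sequence
out = Dense(1)(h)                      # one-step-ahead forecast
model = Model(inp, out)
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=64, verbose=0)
```

The key design choice illustrated here is that the supervised fine-tuning stage starts from encoder weights already shaped by sequence reconstruction, rather than from random initialization.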
"25462637",
"9377276",
"11032042",
"24637071",
"16764513",
"30743109",
"18267787"
] | [
{
"pmid": "25462637",
"title": "Deep learning in neural networks: an overview.",
"abstract": "In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks."
},
{
"pmid": "9377276",
"title": "Long short-term memory.",
"abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."
},
{
"pmid": "11032042",
"title": "Learning to forget: continual prediction with LSTM.",
"abstract": "Long short-term memory (LSTM; Hochreiter & Schmidhuber, 1997) can solve numerous tasks not solvable by previous learning algorithms for recurrent neural networks (RNNs). We identify a weakness of LSTM networks processing continual input streams that are not a priori segmented into subsequences with explicitly marked ends at which the network's internal state could be reset. Without resets, the state may grow indefinitely and eventually cause the network to break down. Our remedy is a novel, adaptive \"forget gate\" that enables an LSTM cell to learn to reset itself at appropriate times, thus releasing internal resources. We review illustrative benchmark problems on which standard LSTM outperforms other RNN algorithms. All algorithms (including LSTM) fail to solve continual versions of these problems. LSTM with forget gates, however, easily solves them, and in an elegant way."
},
{
"pmid": "24637071",
"title": "Solving the linear interval tolerance problem for weight initialization of neural networks.",
"abstract": "Determining good initial conditions for an algorithm used to train a neural network is considered a parameter estimation problem dealing with uncertainty about the initial weights. Interval analysis approaches model uncertainty in parameter estimation problems using intervals and formulating tolerance problems. Solving a tolerance problem is defining lower and upper bounds of the intervals so that the system functionality is guaranteed within predefined limits. The aim of this paper is to show how the problem of determining the initial weight intervals of a neural network can be defined in terms of solving a linear interval tolerance problem. The proposed linear interval tolerance approach copes with uncertainty about the initial weights without any previous knowledge or specific assumptions on the input data as required by approaches such as fuzzy sets or rough sets. The proposed method is tested on a number of well known benchmarks for neural networks trained with the back-propagation family of algorithms. Its efficiency is evaluated with regards to standard performance measures and the results obtained are compared against results of a number of well known and established initialization methods. These results provide credible evidence that the proposed method outperforms classical weight initialization methods."
},
{
"pmid": "16764513",
"title": "A fast learning algorithm for deep belief nets.",
"abstract": "We show how to use \"complementary priors\" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind."
},
{
"pmid": "30743109",
"title": "A hybrid model for spatiotemporal forecasting of PM2.5 based on graph convolutional neural network and long short-term memory.",
"abstract": "Increasing availability of data related to air quality from ground monitoring stations has provided the chance for data mining researchers to propose sophisticated models for predicting the concentrations of different air pollutants. In this paper, we proposed a hybrid model based on deep learning methods that integrates Graph Convolutional networks and Long Short-Term Memory networks (GC-LSTM) to model and forecast the spatiotemporal variation of PM2.5 concentrations. Specifically, historical observations on different stations are constructed as spatiotemporal graph series, and historical air quality variables, meteorological factors, spatial terms and temporal attributes are defined as graph signals. To evaluate the performance of the GC-LSTM, we compared our results with several state-of-the-art methods in different time intervals. Based on the results, our GC-LSTM model achieved the best performance for predictions. Moreover, evaluations of recall rate (68.45%), false alarm rate (4.65%) (both of threshold: 115 μg/m3) and correlation coefficient R2 (0.72) for 72-hour predictions also verify the feasibility of our proposed model. This methodology can be used for concentration forecasting of different air pollutants in future."
},
{
"pmid": "18267787",
"title": "Learning long-term dependencies with gradient descent is difficult.",
"abstract": "Recurrent neural networks can be used to map input sequences to output sequences, such as for recognition, production or prediction problems. However, practical difficulties have been reported in training recurrent neural networks to perform tasks in which the temporal contingencies present in the input/output sequences span long intervals. We show why gradient based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases. These results expose a trade-off between efficient learning by gradient descent and latching on information for long periods. Based on an understanding of this problem, alternatives to standard gradient descent are considered."
}
] |
JMIR Medical Informatics | 31553307 | PMC6911227 | 10.2196/14325 | Mapping ICD-10 and ICD-10-CM Codes to Phecodes: Workflow Development and Initial Evaluation | BackgroundThe phecode system was built upon the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) for phenome-wide association studies (PheWAS) using the electronic health record (EHR).ObjectiveThe goal of this paper was to develop and perform an initial evaluation of maps from the International Classification of Diseases, 10th Revision (ICD-10) and the International Classification of Diseases, 10th Revision, Clinical Modification (ICD-10-CM) codes to phecodes.MethodsWe mapped ICD-10 and ICD-10-CM codes to phecodes using a number of methods and resources, such as concept relationships and explicit mappings from the Centers for Medicare & Medicaid Services, the Unified Medical Language System, Observational Health Data Sciences and Informatics, Systematized Nomenclature of Medicine-Clinical Terms, and the National Library of Medicine. We assessed the coverage of the maps in two databases: Vanderbilt University Medical Center (VUMC) using ICD-10-CM and the UK Biobank (UKBB) using ICD-10. We assessed the fidelity of the ICD-10-CM map in comparison to the gold-standard ICD-9-CM phecode map by investigating phenotype reproducibility and conducting a PheWAS.ResultsWe mapped >75% of ICD-10 and ICD-10-CM codes to phecodes. Of the unique codes observed in the UKBB (ICD-10) and VUMC (ICD-10-CM) cohorts, >90% were mapped to phecodes. We observed 70-75% reproducibility for chronic diseases and <10% for an acute disease for phenotypes sourced from the ICD-10-CM phecode map. Using the ICD-9-CM and ICD-10-CM maps, we conducted a PheWAS with a Lipoprotein(a) genetic variant, rs10455872, which replicated two known genotype-phenotype associations with similar effect sizes: coronary atherosclerosis (ICD-9-CM: P<.001; odds ratio (OR) 1.60 [95% CI 1.43-1.80] vs ICD-10-CM: P<.001; OR 1.60 [95% CI 1.43-1.80]) and chronic ischemic heart disease (ICD-9-CM: P<.001; OR 1.56 [95% CI 1.35-1.79] vs ICD-10-CM: P<.001; OR 1.47 [95% CI 1.22-1.77]).ConclusionsThis study introduces the beta versions of ICD-10 and ICD-10-CM to phecode maps that enable researchers to leverage accumulated ICD-10 and ICD-10-CM data for PheWAS in the EHR. | Related WorkThe Clinical Classification Software (CCS) is another maintained system for aggregating ICD codes into clinically meaningful phenotypes. CCS was originally developed by the Agency for Healthcare Research and Quality (AHRQ) to cluster ICD-9-CM diagnosis and procedure codes to a smaller number of clinically meaningful categories [39]. CCS has been used for many purposes, such as measuring outcomes [40] and predicting future health care usage [41]. In a previous study, we showed that phecodes were better aligned with diseases mentioned in clinical practice and that were relevant to genomic studies than CCS for ICD-9-CM (CCS9) codes [20]. We found that phecodes outperform CCS9 codes, in part because CCS9 was not as granular as phecodes. Since CCS for ICD-10-CM (CCS10) is of similar granularity as CCS9 (283 versus 285 disease groups) [42], we believe that the phecode map would likely still better represent clinically meaningful phenotypes in genetic research. | [
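As a rough illustration of how such a code-to-phecode map is typically applied, the Python sketch below rolls ICD-10-CM codes up to phecodes with a lookup table and runs a simple PheWAS-style logistic regression for a single variant. File names, column names, and thresholds are hypothetical, and covariates and phecode exclusion ranges used in real PheWAS analyses are omitted for brevity.

```python
# Minimal PheWAS-style sketch: map ICD-10-CM codes to phecodes via a lookup
# table, define cases per phecode, and test association with one variant.
# All file and column names below are assumed, not a published interface.
import pandas as pd
import statsmodels.api as sm

dx = pd.read_csv("diagnoses.csv")                 # columns: person_id, icd10cm_code
icd_map = pd.read_csv("icd10cm_to_phecode.csv")   # columns: icd10cm_code, phecode
geno = pd.read_csv("genotypes.csv")               # columns: person_id, allele_count (0/1/2)

# Map each diagnosis code to its phecode (codes without a mapping are dropped).
phe = dx.merge(icd_map, on="icd10cm_code", how="inner")

# Define cases per phecode, e.g. requiring at least two code occurrences.
counts = phe.groupby(["person_id", "phecode"]).size().rename("n").reset_index()
cases = counts[counts["n"] >= 2]

results = []
for code, grp in cases.groupby("phecode"):
    df = geno.copy()
    df["case"] = df["person_id"].isin(grp["person_id"]).astype(int)
    if df["case"].sum() < 20:                     # skip underpowered phenotypes
        continue
    X = sm.add_constant(df[["allele_count"]])
    fit = sm.Logit(df["case"], X).fit(disp=0)     # logistic regression per phenotype
    results.append((code, fit.params["allele_count"], fit.pvalues["allele_count"]))

phewas = pd.DataFrame(results, columns=["phecode", "beta", "p_value"])
print(phewas.sort_values("p_value").head())       # strongest genotype-phenotype signals
```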
"27026615",
"29955180",
"30061737",
"26912863",
"25849893",
"25850054",
"26568383",
"29091937",
"27287392",
"24385893",
"24323995",
"29437585",
"24733291",
"20442144",
"27195309",
"24270849",
"28686612",
"25826379",
"16779043",
"26262116",
"14728568",
"20965889",
"30759150",
"29703846",
"18500243",
"30104761",
"30602428",
"30679510",
"29739741",
"29590070",
"26881369",
"26395541",
"30395248"
] | [
{
"pmid": "27026615",
"title": "PheKB: a catalog and workflow for creating electronic phenotype algorithms for transportability.",
"abstract": "OBJECTIVE\nHealth care generated data have become an important source for clinical and genomic research. Often, investigators create and iteratively refine phenotype algorithms to achieve high positive predictive values (PPVs) or sensitivity, thereby identifying valid cases and controls. These algorithms achieve the greatest utility when validated and shared by multiple health care systems.Materials and Methods We report the current status and impact of the Phenotype KnowledgeBase (PheKB, http://phekb.org), an online environment supporting the workflow of building, sharing, and validating electronic phenotype algorithms. We analyze the most frequent components used in algorithms and their performance at authoring institutions and secondary implementation sites.\n\n\nRESULTS\nAs of June 2015, PheKB contained 30 finalized phenotype algorithms and 62 algorithms in development spanning a range of traits and diseases. Phenotypes have had over 3500 unique views in a 6-month period and have been reused by other institutions. International Classification of Disease codes were the most frequently used component, followed by medications and natural language processing. Among algorithms with published performance data, the median PPV was nearly identical when evaluated at the authoring institutions (n = 44; case 96.0%, control 100%) compared to implementation sites (n = 40; case 97.5%, control 100%).\n\n\nDISCUSSION\nThese results demonstrate that a broad range of algorithms to mine electronic health record data from different health systems can be developed with high PPV, and algorithms developed at one site are generally transportable to others.\n\n\nCONCLUSION\nBy providing a central repository, PheKB enables improved development, transportability, and validity of algorithms for research-grade phenotypes using health care generated data."
},
{
"pmid": "29955180",
"title": "Using an atlas of gene regulation across 44 human tissues to inform complex disease- and trait-associated variation.",
"abstract": "We apply integrative approaches to expression quantitative loci (eQTLs) from 44 tissues from the Genotype-Tissue Expression project and genome-wide association study data. About 60% of known trait-associated loci are in linkage disequilibrium with a cis-eQTL, over half of which were not found in previous large-scale whole blood studies. Applying polygenic analyses to metabolic, cardiovascular, anthropometric, autoimmune, and neurodegenerative traits, we find that eQTLs are significantly enriched for trait associations in relevant pathogenic tissues and explain a substantial proportion of the heritability (40-80%). For most traits, tissue-shared eQTLs underlie a greater proportion of trait associations, although tissue-specific eQTLs have a greater contribution to some traits, such as blood pressure. By integrating information from biological pathways with eQTL target genes and applying a gene-based approach, we validate previously implicated causal genes and pathways, and propose new variant and gene associations for several complex traits, which we replicate in the UK BioBank and BioVU."
},
{
"pmid": "30061737",
"title": "Biobank-driven genomic discovery yields new insight into atrial fibrillation biology.",
"abstract": "To identify genetic variation underlying atrial fibrillation, the most common cardiac arrhythmia, we performed a genome-wide association study of >1,000,000 people, including 60,620 atrial fibrillation cases and 970,216 controls. We identified 142 independent risk variants at 111 loci and prioritized 151 functional candidate genes likely to be involved in atrial fibrillation. Many of the identified risk variants fall near genes where more deleterious mutations have been reported to cause serious heart defects in humans (GATA4, MYH6, NKX2-5, PITX2, TBX5)1, or near genes important for striated muscle function and integrity (for example, CFL2, MYH7, PKP2, RBM20, SGCG, SSPN). Pathway and functional enrichment analyses also suggested that many of the putative atrial fibrillation genes act via cardiac structural remodeling, potentially in the form of an 'atrial cardiomyopathy'2, either during fetal heart development or as a response to stress in the adult heart."
},
{
"pmid": "26912863",
"title": "The phenotypic legacy of admixture between modern humans and Neandertals.",
"abstract": "Many modern human genomes retain DNA inherited from interbreeding with archaic hominins, such as Neandertals, yet the influence of this admixture on human traits is largely unknown. We analyzed the contribution of common Neandertal variants to over 1000 electronic health record (EHR)-derived phenotypes in ~28,000 adults of European ancestry. We discovered and replicated associations of Neandertal alleles with neurological, psychiatric, immunological, and dermatological phenotypes. Neandertal alleles together explained a significant fraction of the variation in risk for depression and skin lesions resulting from sun exposure (actinic keratosis), and individual Neandertal alleles were significantly associated with specific human phenotypes, including hypercoagulation and tobacco use. Our results establish that archaic admixture influences disease risk in modern humans, provide hypotheses about the effects of hundreds of Neandertal haplotypes, and demonstrate the utility of EHR data in evolutionary analyses."
},
{
"pmid": "25849893",
"title": "TYK2 protein-coding variants protect against rheumatoid arthritis and autoimmunity, with no evidence of major pleiotropic effects on non-autoimmune complex traits.",
"abstract": "Despite the success of genome-wide association studies (GWAS) in detecting a large number of loci for complex phenotypes such as rheumatoid arthritis (RA) susceptibility, the lack of information on the causal genes leaves important challenges to interpret GWAS results in the context of the disease biology. Here, we genetically fine-map the RA risk locus at 19p13 to define causal variants, and explore the pleiotropic effects of these same variants in other complex traits. First, we combined Immunochip dense genotyping (n = 23,092 case/control samples), Exomechip genotyping (n = 18,409 case/control samples) and targeted exon-sequencing (n = 2,236 case/controls samples) to demonstrate that three protein-coding variants in TYK2 (tyrosine kinase 2) independently protect against RA: P1104A (rs34536443, OR = 0.66, P = 2.3 x 10(-21)), A928V (rs35018800, OR = 0.53, P = 1.2 x 10(-9)), and I684S (rs12720356, OR = 0.86, P = 4.6 x 10(-7)). Second, we show that the same three TYK2 variants protect against systemic lupus erythematosus (SLE, Pomnibus = 6 x 10(-18)), and provide suggestive evidence that two of the TYK2 variants (P1104A and A928V) may also protect against inflammatory bowel disease (IBD; P(omnibus) = 0.005). Finally, in a phenome-wide association study (PheWAS) assessing >500 phenotypes using electronic medical records (EMR) in >29,000 subjects, we found no convincing evidence for association of P1104A and A928V with complex phenotypes other than autoimmune diseases such as RA, SLE and IBD. Together, our results demonstrate the role of TYK2 in the pathogenesis of RA, SLE and IBD, and provide supporting evidence for TYK2 as a promising drug target for the treatment of autoimmune diseases."
},
{
"pmid": "26568383",
"title": "MR-PheWAS: hypothesis prioritization among potential causal effects of body mass index on many outcomes, using Mendelian randomization.",
"abstract": "Observational cohort studies can provide rich datasets with a diverse range of phenotypic variables. However, hypothesis-driven epidemiological analyses by definition only test particular hypotheses chosen by researchers. Furthermore, observational analyses may not provide robust evidence of causality, as they are susceptible to confounding, reverse causation and measurement error. Using body mass index (BMI) as an exemplar, we demonstrate a novel extension to the phenome-wide association study (pheWAS) approach, using automated screening with genotypic instruments to screen for causal associations amongst any number of phenotypic outcomes. We used a sample of 8,121 children from the ALSPAC dataset, and tested the linear association of a BMI-associated allele score with 172 phenotypic outcomes (with variable sample sizes). We also performed an instrumental variable analysis to estimate the causal effect of BMI on each phenotype. We found 21 of the 172 outcomes were associated with the allele score at an unadjusted p < 0.05 threshold, and use Bonferroni corrections, permutation testing and estimates of the false discovery rate to consider the strength of results given the number of tests performed. The most strongly associated outcomes included leptin, lipid profile, and blood pressure. We also found novel evidence of effects of BMI on a global self-worth score."
},
{
"pmid": "29091937",
"title": "Phenome-wide association study using research participants' self-reported data provides insight into the Th17 and IL-17 pathway.",
"abstract": "A phenome-wide association study of variants in genes in the Th17 and IL-17 pathway was performed using self-reported phenotypes and genetic data from 521,000 research participants of 23andMe. Results replicated known associations with similar effect sizes for autoimmune traits illustrating self-reported traits can be a surrogate for clinically assessed conditions. Novel associations controlling for a false discovery rate of 5% included the association of the variant encoding p.Ile684Ser in TYK2 with increased risk of tonsillectomy, strep throat occurrences and teen acne, the variant encoding p.Arg381Gln in IL23R with a decrease in dandruff frequency, the variant encoding p.Asp10Asn in TRAF3IP2 with risk of male-pattern balding, and the RORC regulatory variant (rs4845604) with protection from allergies. This approach enabled rapid assessment of association with a wide variety of traits and investigation of traits with limited reported associations to overlay meaningful phenotypic context on the range of conditions being considered for drugs targeting this pathway."
},
{
"pmid": "27287392",
"title": "Phenome-wide association study maps new diseases to the human major histocompatibility complex region.",
"abstract": "BACKGROUND\nOver 160 disease phenotypes have been mapped to the major histocompatibility complex (MHC) region on chromosome 6 by genome-wide association study (GWAS), suggesting that the MHC region as a whole may be involved in the aetiology of many phenotypes, including unstudied diseases. The phenome-wide association study (PheWAS), a powerful and complementary approach to GWAS, has demonstrated its ability to discover and rediscover genetic associations. The objective of this study is to comprehensively investigate the MHC region by PheWAS to identify new phenotypes mapped to this genetically important region.\n\n\nMETHODS\nIn the current study, we systematically explored the MHC region using PheWAS to associate 2692 MHC-linked variants (minor allele frequency ≥0.01) with 6221 phenotypes in a cohort of 7481 subjects from the Marshfield Clinic Personalized Medicine Research Project.\n\n\nRESULTS\nFindings showed that expected associations previously identified by GWAS could be identified by PheWAS (eg, psoriasis, ankylosing spondylitis, type I diabetes and coeliac disease) with some having strong cross-phenotype associations potentially driven by pleiotropic effects. Importantly, novel associations with eight diseases not previously assessed by GWAS (eg, lichen planus) were also identified and replicated in an independent population. Many of these associated diseases appear to be immune-related disorders. Further assessment of these diseases in 16 484 Marshfield Clinic twins suggests that some of these diseases, including lichen planus, may have genetic aetiologies.\n\n\nCONCLUSIONS\nThese results demonstrate that the PheWAS approach is a powerful and novel method to discover SNP-disease associations, and is ideal when characterising cross-phenotype associations, and further emphasise the importance of the MHC region in human health and disease."
},
{
"pmid": "24385893",
"title": "Phenome-wide association studies on a quantitative trait: application to TPMT enzyme activity and thiopurine therapy in pharmacogenomics.",
"abstract": "Phenome-Wide Association Studies (PheWAS) investigate whether genetic polymorphisms associated with a phenotype are also associated with other diagnoses. In this study, we have developed new methods to perform a PheWAS based on ICD-10 codes and biological test results, and to use a quantitative trait as the selection criterion. We tested our approach on thiopurine S-methyltransferase (TPMT) activity in patients treated by thiopurine drugs. We developed 2 aggregation methods for the ICD-10 codes: an ICD-10 hierarchy and a mapping to existing ICD-9-CM based PheWAS codes. Eleven biological test results were also analyzed using discretization algorithms. We applied these methods in patients having a TPMT activity assessment from the clinical data warehouse of a French academic hospital between January 2000 and July 2013. Data after initiation of thiopurine treatment were analyzed and patient groups were compared according to their TPMT activity level. A total of 442 patient records were analyzed representing 10,252 ICD-10 codes and 72,711 biological test results. The results from the ICD-9-CM based PheWAS codes and ICD-10 hierarchy codes were concordant. Cross-validation with the biological test results allowed us to validate the ICD phenotypes. Iron-deficiency anemia and diabetes mellitus were associated with a very high TPMT activity (p = 0.0004 and p = 0.0015, respectively). We describe here an original method to perform PheWAS on a quantitative trait using both ICD-10 diagnosis codes and biological test results to identify associated phenotypes. In the field of pharmacogenomics, PheWAS allow for the identification of new subgroups of patients who require personalized clinical and therapeutic management."
},
{
"pmid": "24323995",
"title": "Comorbidity clusters in autism spectrum disorders: an electronic health record time-series analysis.",
"abstract": "OBJECTIVE\nThe distinct trajectories of patients with autism spectrum disorders (ASDs) have not been extensively studied, particularly regarding clinical manifestations beyond the neurobehavioral criteria from the Diagnostic and Statistical Manual of Mental Disorders. The objective of this study was to investigate the patterns of co-occurrence of medical comorbidities in ASDs.\n\n\nMETHODS\nInternational Classification of Diseases, Ninth Revision codes from patients aged at least 15 years and a diagnosis of ASD were obtained from electronic medical records. These codes were aggregated by using phenotype-wide association studies categories and processed into 1350-dimensional vectors describing the counts of the most common categories in 6-month blocks between the ages of 0 to 15. Hierarchical clustering was used to identify subgroups with distinct courses.\n\n\nRESULTS\nFour subgroups were identified. The first was characterized by seizures (n = 120, subgroup prevalence 77.5%). The second (n = 197) was characterized by multisystem disorders including gastrointestinal disorders (prevalence 24.3%) and auditory disorders and infections (prevalence 87.8%), and the third was characterized by psychiatric disorders (n = 212, prevalence 33.0%). The last group (n = 4316) could not be further resolved. The prevalence of psychiatric disorders was uncorrelated with seizure activity (P = .17), but a significant correlation existed between gastrointestinal disorders and seizures (P < .001). The correlation results were replicated by using a second sample of 496 individuals from a different geographic region.\n\n\nCONCLUSIONS\nThree distinct patterns of medical trajectories were identified by unsupervised clustering of electronic health record diagnoses. These may point to distinct etiologies with different genetic and environmental contributions. Additional clinical and molecular characterizations will be required to further delineate these subgroups."
},
{
"pmid": "29437585",
"title": "MR-PheWAS: exploring the causal effect of SUA level on multiple disease outcomes by using genetic instruments in UK Biobank.",
"abstract": "OBJECTIVES\nWe aimed to investigate the role of serum uric acid (SUA) level in a broad spectrum of disease outcomes using data for 120 091 individuals from UK Biobank.\n\n\nMETHODS\nWe performed a phenome-wide association study (PheWAS) to identify disease outcomes associated with SUA genetic risk loci. We then implemented conventional Mendelianrandomisation (MR) analysis to investigate the causal relevance between SUA level and disease outcomes identified from PheWAS. We next applied MR Egger analysis to detect and account for potential pleiotropy, which conventional MR analysis might mistake for causality, and used the HEIDI (heterogeneity in dependent instruments) test to remove cross-phenotype associations that were likely due to genetic linkage.\n\n\nRESULTS\nOur PheWAS identified 25 disease groups/outcomes associated with SUA genetic risk loci after multiple testing correction (P<8.57e-05). Our conventional MR analysis implicated a causal role of SUA level in three disease groups: inflammatory polyarthropathies (OR=1.22, 95% CI 1.11 to 1.34), hypertensive disease (OR=1.08, 95% CI 1.03 to 1.14) and disorders of metabolism (OR=1.07, 95% CI 1.01 to 1.14); and four disease outcomes: gout (OR=4.88, 95% CI 3.91 to 6.09), essential hypertension (OR=1.08, 95% CI 1.03 to 1.14), myocardial infarction (OR=1.16, 95% CI 1.03 to 1.30) and coeliac disease (OR=1.41, 95% CI 1.05 to 1.89). After balancing pleiotropic effects in MR Egger analysis, only gout and its encompassing disease group of inflammatory polyarthropathies were considered to be causally associated with SUA level. Our analysis highlighted a locus (ATXN2/S2HB3) that may influence SUA level and multiple cardiovascular and autoimmune diseases via pleiotropy.\n\n\nCONCLUSIONS\nElevated SUA level is convincing to cause gout and inflammatory polyarthropathies, and might act as a marker for the wider range of diseases with which it associates. Our findings support further investigation on the clinical relevance of SUA level with cardiovascular, metabolic, autoimmune and respiratory diseases."
},
{
"pmid": "24733291",
"title": "R PheWAS: data analysis and plotting tools for phenome-wide association studies in the R environment.",
"abstract": "UNLABELLED\nPhenome-wide association studies (PheWAS) have been used to replicate known genetic associations and discover new phenotype associations for genetic variants. This PheWAS implementation allows users to translate ICD-9 codes to PheWAS case and control groups, perform analyses using these and/or other phenotypes with covariate adjustments and plot the results. We demonstrate the methods by replicating a PheWAS on rs3135388 (near HLA-DRB, associated with multiple sclerosis) and performing a novel PheWAS using an individual's maximum white blood cell count (WBC) as a continuous measure. Our results for rs3135388 replicate known associations with more significant results than the original study on the same dataset. Our PheWAS of WBC found expected results, including associations with infections, myeloproliferative diseases and associated conditions, such as anemia. These results demonstrate the performance of the improved classification scheme and the flexibility of PheWAS encapsulated in this package.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThis R package is freely available under the Gnu Public License (GPL-3) from http://phewascatalog.org. It is implemented in native R and is platform independent."
},
{
"pmid": "20442144",
"title": "International classification of diseases, 10th edition, clinical modification and procedure coding system: descriptive overview of the next generation HIPAA code sets.",
"abstract": "Described are the changes to ICD-10-CM and PCS and potential challenges regarding their use in the US for financial and administrative transaction coding under HIPAA in 2013. Using author constructed derivative databases for ICD-10-CM and PCS it was found that ICD-10-CM's overall term content is seven times larger than ICD-9-CM: only 3.2 times larger in those chapters describing disease or symptoms, but 14.1 times larger in injury and cause sections. A new multi-axial approach ICD-10-PCS increased size 18-fold from its prior version. New ICD-10-CM and PCS reflect a corresponding improvement in specificity and content. The forthcoming required national switch to these new administrative codes, coupled with nearly simultaneous widespread introduction of clinical systems and terminologies, requires substantial changes in US administrative systems. Through coordination of terminologies, the systems using them, and healthcare objectives, we can maximize the improvement achieved and engender beneficial data reuse for multiple purposes, with minimal transformations."
},
{
"pmid": "27195309",
"title": "Preparing for the ICD-10-CM Transition: Automated Methods for Translating ICD Codes in Clinical Phenotype Definitions.",
"abstract": "BACKGROUND\nThe national mandate for health systems to transition from ICD-9-CM to ICD-10-CM in October 2015 has an impact on research activities. Clinical phenotypes defined by ICD-9-CM codes need to be converted to ICD-10-CM, which has nearly four times more codes and a very different structure than ICD-9-CM.\n\n\nMETHODS\nWe used the Centers for Medicare & Medicaid Services (CMS) General Equivalent Maps (GEMs) to translate, using four different methods, condition-specific ICD-9-CM code sets used for pragmatic trials (n=32) into ICD-10-CM. We calculated the recall, precision, and F score of each method. We also used the ICD-9-CM and ICD-10-CM value sets defined for electronic quality measure as an additional evaluation of the mapping methods.\n\n\nRESULTS\nThe forward-backward mapping (FBM) method had higher precision, recall and F-score metrics than simple forward mapping (SFM). The more aggressive secondary (SM) and tertiary mapping (TM) methods resulted in higher recall but lower precision. For clinical phenotype definition, FBM was the best (F=0.67), but was close to SM (F=0.62) and TM (F=0.60), judging on the F-scores alone. The overall difference between the four methods was statistically significant (one-way ANOVA, F=5.749, p=0.001). However, pairwise comparisons between FBM, SM, and TM did not reach statistical significance. A similar trend was found for the quality measure value sets.\n\n\nDISCUSSION\nThe optimal method for using the GEMs depends on the relative importance of recall versus precision for a given use case. It appears that for clinically distinct and homogenous conditions, the recall of FBM is sufficient. The performance of all mapping methods was lower for heterogeneous conditions. Since code sets used for phenotype definition and quality measurement can be very similar, there is a possibility of cross-fertilization between the two activities.\n\n\nCONCLUSION\nDifferent mapping approaches yield different collections of ICD-10-CM codes. All methods require some level of human validation."
},
{
"pmid": "24270849",
"title": "Systematic comparison of phenome-wide association study of electronic medical record data and genome-wide association study data.",
"abstract": "Candidate gene and genome-wide association studies (GWAS) have identified genetic variants that modulate risk for human disease; many of these associations require further study to replicate the results. Here we report the first large-scale application of the phenome-wide association study (PheWAS) paradigm within electronic medical records (EMRs), an unbiased approach to replication and discovery that interrogates relationships between targeted genotypes and multiple phenotypes. We scanned for associations between 3,144 single-nucleotide polymorphisms (previously implicated by GWAS as mediators of human traits) and 1,358 EMR-derived phenotypes in 13,835 individuals of European ancestry. This PheWAS replicated 66% (51/77) of sufficiently powered prior GWAS associations and revealed 63 potentially pleiotropic associations with P < 4.6 × 10⁻⁶ (false discovery rate < 0.1); the strongest of these novel associations were replicated in an independent cohort (n = 7,406). These findings validate PheWAS as a tool to allow unbiased interrogation across multiple phenotypes in EMR-based cohorts and to enhance analysis of the genomic basis of human disease."
},
{
"pmid": "28686612",
"title": "Evaluating phecodes, clinical classification software, and ICD-9-CM codes for phenome-wide association studies in the electronic health record.",
"abstract": "OBJECTIVE\nTo compare three groupings of Electronic Health Record (EHR) billing codes for their ability to represent clinically meaningful phenotypes and to replicate known genetic associations. The three tested coding systems were the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes, the Agency for Healthcare Research and Quality Clinical Classification Software for ICD-9-CM (CCS), and manually curated \"phecodes\" designed to facilitate phenome-wide association studies (PheWAS) in EHRs.\n\n\nMETHODS AND MATERIALS\nWe selected 100 disease phenotypes and compared the ability of each coding system to accurately represent them without performing additional groupings. The 100 phenotypes included 25 randomly-chosen clinical phenotypes pursued in prior genome-wide association studies (GWAS) and another 75 common disease phenotypes mentioned across free-text problem lists from 189,289 individuals. We then evaluated the performance of each coding system to replicate known associations for 440 SNP-phenotype pairs.\n\n\nRESULTS\nOut of the 100 tested clinical phenotypes, phecodes exactly matched 83, compared to 53 for ICD-9-CM and 32 for CCS. ICD-9-CM codes were typically too detailed (requiring custom groupings) while CCS codes were often not granular enough. Among 440 tested known SNP-phenotype associations, use of phecodes replicated 153 SNP-phenotype pairs compared to 143 for ICD-9-CM and 139 for CCS. Phecodes also generally produced stronger odds ratios and lower p-values for known associations than ICD-9-CM and CCS. Finally, evaluation of several SNPs via PheWAS identified novel potential signals, some seen in only using the phecode approach. Among them, rs7318369 in PEPD was associated with gastrointestinal hemorrhage.\n\n\nCONCLUSION\nOur results suggest that the phecode groupings better align with clinical diseases mentioned in clinical practice or for genomic studies. ICD-9-CM, CCS, and phecode groupings all worked for PheWAS-type studies, though the phecode groupings produced superior results."
},
{
"pmid": "25826379",
"title": "UK biobank: an open access resource for identifying the causes of a wide range of complex diseases of middle and old age.",
"abstract": "Cathie Sudlow and colleagues describe the UK Biobank, a large population-based prospective study, established to allow investigation of the genetic and non-genetic determinants of the diseases of middle and old age."
},
{
"pmid": "16779043",
"title": "Utilizing the UMLS for semantic mapping between terminologies.",
"abstract": "An algorithm was derived to find candidate mappings between any two terminologies inside the UMLS, making use of synonymy, explicit mapping relations and hierarchical relationships among UMLS concepts. Using an existing set of mappings from SNOMED CT to ICD9CM as our gold standard, we managed to find candidate mappings for 86% of SNOMED CT terms, with recall of 42% and precision of 20%. Among the various methods used, mapping by UMLS synonymy was particularly accurate and could potentially be useful as a quality assurance tool in the creation of mapping sets or in the UMLS editing process. Other strengths and weaknesses of the algorithm are discussed."
},
{
"pmid": "26262116",
"title": "Observational Health Data Sciences and Informatics (OHDSI): Opportunities for Observational Researchers.",
"abstract": "The vision of creating accessible, reliable clinical evidence by accessing the clincial experience of hundreds of millions of patients across the globe is a reality. Observational Health Data Sciences and Informatics (OHDSI) has built on learnings from the Observational Medical Outcomes Partnership to turn methods research and insights into a suite of applications and exploration tools that move the field closer to the ultimate goal of generating evidence about all aspects of healthcare to serve the needs of patients, clinicians and all other decision-makers around the world."
},
{
"pmid": "14728568",
"title": "Supporting communication in an integrated patient record system.",
"abstract": "Over the past two years, Vanderbilt University Medical Center (VUMC) has developed and implemented an integrated user interface front end that gives users a homogeneous view that supports directly the clinical workflow. This system, known as StarPanel, is a Web-based front end that integrates seamlessly all the functions needed for busy outpatient clinics. The key design concept is the ability to support communication as the fundamental activity of a health care team."
},
{
"pmid": "20965889",
"title": "Lipoprotein(a) as a cardiovascular risk factor: current status.",
"abstract": "AIMS\nThe aims of the study were, first, to critically evaluate lipoprotein(a) [Lp(a)] as a cardiovascular risk factor and, second, to advise on screening for elevated plasma Lp(a), on desirable levels, and on therapeutic strategies.\n\n\nMETHODS AND RESULTS\nThe robust and specific association between elevated Lp(a) levels and increased cardiovascular disease (CVD)/coronary heart disease (CHD) risk, together with recent genetic findings, indicates that elevated Lp(a), like elevated LDL-cholesterol, is causally related to premature CVD/CHD. The association is continuous without a threshold or dependence on LDL- or non-HDL-cholesterol levels. Mechanistically, elevated Lp(a) levels may either induce a prothrombotic/anti-fibrinolytic effect as apolipoprotein(a) resembles both plasminogen and plasmin but has no fibrinolytic activity, or may accelerate atherosclerosis because, like LDL, the Lp(a) particle is cholesterol-rich, or both. We advise that Lp(a) be measured once, using an isoform-insensitive assay, in subjects at intermediate or high CVD/CHD risk with premature CVD, familial hypercholesterolaemia, a family history of premature CVD and/or elevated Lp(a), recurrent CVD despite statin treatment, ≥3% 10-year risk of fatal CVD according to European guidelines, and/or ≥10% 10-year risk of fatal + non-fatal CHD according to US guidelines. As a secondary priority after LDL-cholesterol reduction, we recommend a desirable level for Lp(a) <80th percentile (less than ∼50 mg/dL). Treatment should primarily be niacin 1-3 g/day, as a meta-analysis of randomized, controlled intervention trials demonstrates reduced CVD by niacin treatment. In extreme cases, LDL-apheresis is efficacious in removing Lp(a).\n\n\nCONCLUSION\nWe recommend screening for elevated Lp(a) in those at intermediate or high CVD/CHD risk, a desirable level <50 mg/dL as a function of global cardiovascular risk, and use of niacin for Lp(a) and CVD/CHD risk reduction."
},
{
"pmid": "30759150",
"title": "Using topic modeling via non-negative matrix factorization to identify relationships between genetic variants and disease phenotypes: A case study of Lipoprotein(a) (LPA).",
"abstract": "Genome-wide and phenome-wide association studies are commonly used to identify important relationships between genetic variants and phenotypes. Most studies have treated diseases as independent variables and suffered from the burden of multiple adjustment due to the large number of genetic variants and disease phenotypes. In this study, we used topic modeling via non-negative matrix factorization (NMF) for identifying associations between disease phenotypes and genetic variants. Topic modeling is an unsupervised machine learning approach that can be used to learn patterns from electronic health record data. We chose the single nucleotide polymorphism (SNP) rs10455872 in LPA as the predictor since it has been shown to be associated with increased risk of hyperlipidemia and cardiovascular diseases (CVD). Using data of 12,759 individuals with electronic health records (EHR) and linked DNA samples at Vanderbilt University Medical Center, we trained a topic model using NMF from 1,853 distinct phenotypes and identified six topics. We tested their associations with rs10455872 in LPA. Topics enriched for CVD and hyperlipidemia had positive correlations with rs10455872 (P < 0.001), replicating a previous finding. We also identified a negative correlation between LPA and a topic enriched for lung cancer (P < 0.001) which was not previously identified via phenome-wide scanning. We were able to replicate the top finding in a separate dataset. Our results demonstrate the applicability of topic modeling in exploring the relationship between genetic variants and clinical diseases."
},
{
"pmid": "29703846",
"title": "LPA Variants Are Associated With Residual Cardiovascular Risk in Patients Receiving Statins.",
"abstract": "BACKGROUND\nCoronary heart disease (CHD) is a leading cause of death globally. Although therapy with statins decreases circulating levels of low-density lipoprotein cholesterol and the incidence of CHD, additional events occur despite statin therapy in some individuals. The genetic determinants of this residual cardiovascular risk remain unknown.\n\n\nMETHODS\nWe performed a 2-stage genome-wide association study of CHD events during statin therapy. We first identified 3099 cases who experienced CHD events (defined as acute myocardial infarction or the need for coronary revascularization) during statin therapy and 7681 controls without CHD events during comparable intensity and duration of statin therapy from 4 sites in the Electronic Medical Records and Genomics Network. We then sought replication of candidate variants in another 160 cases and 1112 controls from a fifth Electronic Medical Records and Genomics site, which joined the network after the initial genome-wide association study. Finally, we performed a phenome-wide association study for other traits linked to the most significant locus.\n\n\nRESULTS\nThe meta-analysis identified 7 single nucleotide polymorphisms at a genome-wide level of significance within the LPA/PLG locus associated with CHD events on statin treatment. The most significant association was for an intronic single nucleotide polymorphism within LPA/PLG (rs10455872; minor allele frequency, 0.069; odds ratio, 1.58; 95% confidence interval, 1.35-1.86; P=2.6×10-10). In the replication cohort, rs10455872 was also associated with CHD events (odds ratio, 1.71; 95% confidence interval, 1.14-2.57; P=0.009). The association of this single nucleotide polymorphism with CHD events was independent of statin-induced change in low-density lipoprotein cholesterol (odds ratio, 1.62; 95% confidence interval, 1.17-2.24; P=0.004) and persisted in individuals with low-density lipoprotein cholesterol ≤70 mg/dL (odds ratio, 2.43; 95% confidence interval, 1.18-4.75; P=0.015). A phenome-wide association study supported the effect of this region on coronary heart disease and did not identify noncardiovascular phenotypes.\n\n\nCONCLUSIONS\nGenetic variations at the LPA locus are associated with CHD events during statin therapy independently of the extent of low-density lipoprotein cholesterol lowering. This finding provides support for exploring strategies targeting circulating concentrations of lipoprotein(a) to reduce CHD events in patients receiving statins."
},
{
"pmid": "18500243",
"title": "Development of a large-scale de-identified DNA biobank to enable personalized medicine.",
"abstract": "Our objective was to develop a DNA biobank linked to phenotypic data derived from an electronic medical record (EMR) system. An \"opt-out\" model was implemented after significant review and revision. The plan included (i) development and maintenance of a de-identified mirror image of the EMR, namely, the \"synthetic derivative\" (SD) and (ii) DNA extracted from discarded blood samples and linked to the SD. Surveys of patients indicated general acceptance of the concept, with only a minority ( approximately 5%) opposing it. As a result, mechanisms to facilitate opt-out included publicity and revision of a standard \"consent to treatment\" form. Algorithms for sample handling and procedures for de-identification were developed and validated in order to ensure acceptable error rates (<0.3 and <0.1%, respectively). The rate of sample accrual is 700-900 samples/week. The advantages of this approach are the rate of sample acquisition and the diversity of phenotypes based on EMRs."
},
{
"pmid": "30104761",
"title": "Efficiently controlling for case-control imbalance and sample relatedness in large-scale genetic association studies.",
"abstract": "In genome-wide association studies (GWAS) for thousands of phenotypes in large biobanks, most binary traits have substantially fewer cases than controls. Both of the widely used approaches, the linear mixed model and the recently proposed logistic mixed model, perform poorly; they produce large type I error rates when used to analyze unbalanced case-control phenotypes. Here we propose a scalable and accurate generalized mixed model association test that uses the saddlepoint approximation to calibrate the distribution of score test statistics. This method, SAIGE (Scalable and Accurate Implementation of GEneralized mixed model), provides accurate P values even when case-control ratios are extremely unbalanced. SAIGE uses state-of-art optimization strategies to reduce computational costs; hence, it is applicable to GWAS for thousands of phenotypes by large biobanks. Through the analysis of UK Biobank data of 408,961 samples from white British participants with European ancestry for > 1,400 binary phenotypes, we show that SAIGE can efficiently analyze large sample data, controlling for unbalanced case-control ratios and sample relatedness."
},
{
"pmid": "30602428",
"title": "Electronic Medical Record Context Signatures Improve Diagnostic Classification Using Medical Image Computing.",
"abstract": "Composite models that combine medical imaging with electronic medical records (EMR) improve predictive power when compared to traditional models that use imaging alone. The digitization of EMR provides potential access to a wealth of medical information, but presents new challenges in algorithm design and inference. Previous studies, such as Phenome Wide Association Study (PheWAS), have shown that EMR data can be used to investigate the relationship between genotypes and clinical conditions. Here, we introduce Phenome-Disease Association Study to extend the statistical capabilities of the PheWAS software through a custom Python package, which creates diagnostic EMR signatures to capture system-wide co-morbidities for a disease population within a given time interval. We investigate the effect of integrating these EMR signatures with radiological data to improve diagnostic classification in disease domains known to have confounding factors because of variable and complex clinical presentation. Specifically, we focus on two studies: First, a study of four major optic nerve related conditions; and second, a study of diabetes. Addition of EMR signature vectors to radiologically derived structural metrics improves the area under the curve (AUC) for diagnostic classification using elastic net regression, for diseases of the optic nerve. For glaucoma, the AUC improves from 0.71 to 0.83, for intrinsic optic nerve disease it increases from 0.72 to 0.91, for optic nerve edema it increases from 0.95 to 0.96, and for thyroid eye disease from 0.79 to 0.89. The EMR signatures recapitulate known comorbidities with diabetes, such as abnormal glucose, but do not significantly modulate image-derived features. In summary, EMR signatures present a scalable and readily applicable."
},
{
"pmid": "30679510",
"title": "Learning from Longitudinal Data in Electronic Health Record and Genetic Data to Improve Cardiovascular Event Prediction.",
"abstract": "Current approaches to predicting a cardiovascular disease (CVD) event rely on conventional risk factors and cross-sectional data. In this study, we applied machine learning and deep learning models to 10-year CVD event prediction by using longitudinal electronic health record (EHR) and genetic data. Our study cohort included 109, 490 individuals. In the first experiment, we extracted aggregated and longitudinal features from EHR. We applied logistic regression, random forests, gradient boosting trees, convolutional neural networks (CNN) and recurrent neural networks with long short-term memory (LSTM) units. In the second experiment, we applied a late-fusion approach to incorporate genetic features. We compared the performance with approaches currently utilized in routine clinical practice - American College of Cardiology and the American Heart Association (ACC/AHA) Pooled Cohort Risk Equation. Our results indicated that incorporating longitudinal feature lead to better event prediction. Combining genetic features through a late-fusion approach can further improve CVD prediction, underscoring the importance of integrating relevant genetic data whenever available."
},
{
"pmid": "29739741",
"title": "Public Opinions Toward Diseases: Infodemiological Study on News Media Data.",
"abstract": "BACKGROUND\nSociety always has limited resources to expend on health care, or anything else. What are the unmet medical needs? How do we allocate limited resources to maximize the health and welfare of the people? These challenging questions might be re-examined systematically within an infodemiological frame on a much larger scale, leveraging the latest advancement in information technology and data science.\n\n\nOBJECTIVE\nWe expanded our previous work by investigating news media data to reveal the coverage of different diseases and medical conditions, together with their sentiments and topics in news articles over three decades. We were motivated to do so since news media plays a significant role in politics and affects the public policy making.\n\n\nMETHODS\nWe analyzed over 3.5 million archive news articles from Reuters media during the periods of 1996/1997, 2008 and 2016, using summary statistics, sentiment analysis, and topic modeling. Summary statistics illustrated the coverage of various diseases and medical conditions during the last 3 decades. Sentiment analysis and topic modeling helped us automatically detect the sentiments of news articles (ie, positive versus negative) and topics (ie, a series of keywords) associated with each disease over time.\n\n\nRESULTS\nThe percentages of news articles mentioning diseases and medical conditions were 0.44%, 0.57% and 0.81% in the three time periods, suggesting that news media or the public has gradually increased its interests in medicine since 1996. Certain diseases such as other malignant neoplasm (34%), other infectious diseases (20%), and influenza (11%) represented the most covered diseases. Two hundred and twenty-six diseases and medical conditions (97.8%) were found to have neutral or negative sentiments in the news articles. Using topic modeling, we identified meaningful topics on these diseases and medical conditions. For instance, the smoking theme appeared in the news articles on other malignant neoplasm only during 1996/1997. The topic phrases HIV and Zika virus were linked to other infectious diseases during 1996/1997 and 2016, respectively.\n\n\nCONCLUSIONS\nThe multi-dimensional analysis of news media data allows the discovery of focus, sentiments and topics of news media in terms of diseases and medical conditions. These infodemiological discoveries could shed light on unmet medical needs and research priorities for future and provide guidance for the decision making in public policy."
},
{
"pmid": "29590070",
"title": "Phenotype risk scores identify patients with unrecognized Mendelian disease patterns.",
"abstract": "Genetic association studies often examine features independently, potentially missing subpopulations with multiple phenotypes that share a single cause. We describe an approach that aggregates phenotypes on the basis of patterns described by Mendelian diseases. We mapped the clinical features of 1204 Mendelian diseases into phenotypes captured from the electronic health record (EHR) and summarized this evidence as phenotype risk scores (PheRSs). In an initial validation, PheRS distinguished cases and controls of five Mendelian diseases. Applying PheRS to 21,701 genotyped individuals uncovered 18 associations between rare variants and phenotypes consistent with Mendelian diseases. In 16 patients, the rare genetic variants were associated with severe outcomes such as organ transplants. PheRS can augment rare-variant interpretation and may identify subsets of patients with distinct genetic causes for common diseases."
},
{
"pmid": "26881369",
"title": "In-Hospital Outcomes and Costs Among Patients Hospitalized During a Return Visit to the Emergency Department.",
"abstract": "IMPORTANCE\nUnscheduled short-term return visits to the emergency department (ED) are increasingly monitored as a hospital performance measure and have been proposed as a measure of the quality of emergency care.\n\n\nOBJECTIVE\nTo examine in-hospital clinical outcomes and resource use among patients who are hospitalized during an unscheduled return visit to the ED.\n\n\nDESIGN, SETTING, AND PARTICIPANTS\nRetrospective analysis of adult ED visits to acute care hospitals in Florida and New York in 2013 using data from the Healthcare Cost and Utilization Project. Patients with index ED visits were identified and followed up for return visits to the ED within 7, 14, and 30 days.\n\n\nEXPOSURES\nHospital admission occurring during an initial visit to the ED vs during a return visit to the ED.\n\n\nMAIN OUTCOMES AND MEASURES\nIn-hospital mortality, intensive care unit (ICU) admission, length of stay, and inpatient costs.\n\n\nRESULTS\nAmong the 9,036,483 index ED visits to 424 hospitals in the study sample, 1,758,359 patients were admitted to the hospital during the index ED visit. Of these patients, 149,214 (8.5%) had a return visit to the ED within 7 days of the index ED visit, 228,370 (13.0%) within 14 days, and 349,335 (19.9%) within 30 days, and 76,151 (51.0%), 122,040 (53.4%), and 190,768 (54.6%), respectively, were readmitted to the hospital. Among the 7,278,124 patients who were discharged during the index ED visit, 598,404 (8.2%) had a return visit to the ED within 7 days, 839,386 (11.5%) within 14 days, and 1,205,865 (16.6%) within 30 days. Of these patients, 86,012 (14.4%) were admitted to the hospital within 7 days, 121,587 (14.5%) within 14 days, and 173,279 (14.4%) within 30 days. The 86,012 patients discharged from the ED and admitted to the hospital during a return ED visit within 7 days had significantly lower rates of in-hospital mortality (1.85%) compared with the 1,609,145 patients who were admitted during the index ED visit without a return ED visit (2.48%) (odds ratio, 0.73 [95% CI, 0.69-0.78]), lower rates of ICU admission (23.3% vs 29.0%, respectively; odds ratio, 0.73 [95% CI, 0.71-0.76]), lower mean costs ($10,169 vs $10,799; difference, $629 [95% CI, $479-$781]), and longer lengths of stay (5.16 days vs 4.97 days; IRR, 1.04 [95% CI, 1.03-1.05]). Similar outcomes were observed for patients returning to the ED within 14 and 30 days of the index ED visit. In contrast, patients who returned to the ED after hospital discharge and were readmitted had higher rates of in-hospital mortality and ICU admission, longer lengths of stay, and higher costs during the repeat hospital admission compared with those admitted to the hospital during the index ED visit without a return ED visit.\n\n\nCONCLUSIONS AND RELEVANCE\nCompared with adult patients who were hospitalized during the index ED visit and did not have a return visit to the ED, patients who were initially discharged during an ED visit and admitted during a return visit to the ED had lower in-hospital mortality, ICU admission rates, and in-hospital costs and longer lengths of stay. These findings suggest that hospital admissions associated with return visits to the ED may not adequately capture deficits in the quality of care delivered during an ED visit."
},
{
"pmid": "26395541",
"title": "Online Prediction of Health Care Utilization in the Next Six Months Based on Electronic Health Record Information: A Cohort and Validation Study.",
"abstract": "BACKGROUND\nThe increasing rate of health care expenditures in the United States has placed a significant burden on the nation's economy. Predicting future health care utilization of patients can provide useful information to better understand and manage overall health care deliveries and clinical resource allocation.\n\n\nOBJECTIVE\nThis study developed an electronic medical record (EMR)-based online risk model predictive of resource utilization for patients in Maine in the next 6 months across all payers, all diseases, and all demographic groups.\n\n\nMETHODS\nIn the HealthInfoNet, Maine's health information exchange (HIE), a retrospective cohort of 1,273,114 patients was constructed with the preceding 12-month EMR. Each patient's next 6-month (between January 1, 2013 and June 30, 2013) health care resource utilization was retrospectively scored ranging from 0 to 100 and a decision tree-based predictive model was developed. Our model was later integrated in the Maine HIE population exploration system to allow a prospective validation analysis of 1,358,153 patients by forecasting their next 6-month risk of resource utilization between July 1, 2013 and December 31, 2013.\n\n\nRESULTS\nProspectively predicted risks, on either an individual level or a population (per 1000 patients) level, were consistent with the next 6-month resource utilization distributions and the clinical patterns at the population level. Results demonstrated the strong correlation between its care resource utilization and our risk scores, supporting the effectiveness of our model. With the online population risk monitoring enterprise dashboards, the effectiveness of the predictive algorithm has been validated by clinicians and caregivers in the State of Maine.\n\n\nCONCLUSIONS\nThe model and associated online applications were designed for tracking the evolving nature of total population risk, in a longitudinal manner, for health care resource utilization. It will enable more effective care management strategies driving improved patient outcomes."
},
{
"pmid": "30395248",
"title": "Effect of vocabulary mapping for conditions on phenotype cohorts.",
"abstract": "Objective\nTo study the effect on patient cohorts of mapping condition (diagnosis) codes from source billing vocabularies to a clinical vocabulary.\n\n\nMaterials and Methods\nNine International Classification of Diseases, Ninth Revision, Clinical Modification (ICD9-CM) concept sets were extracted from eMERGE network phenotypes, translated to Systematized Nomenclature of Medicine - Clinical Terms concept sets, and applied to patient data that were mapped from source ICD9-CM and ICD10-CM codes to Systematized Nomenclature of Medicine - Clinical Terms codes using Observational Health Data Sciences and Informatics (OHDSI) Observational Medical Outcomes Partnership (OMOP) vocabulary mappings. The original ICD9-CM concept set and a concept set extended to ICD10-CM were used to create patient cohorts that served as gold standards.\n\n\nResults\nFour phenotype concept sets were able to be translated to Systematized Nomenclature of Medicine - Clinical Terms without ambiguities and were able to perform perfectly with respect to the gold standards. The other 5 lost performance when 2 or more ICD9-CM or ICD10-CM codes mapped to the same Systematized Nomenclature of Medicine - Clinical Terms code. The patient cohorts had a total error (false positive and false negative) of up to 0.15% compared to querying ICD9-CM source data and up to 0.26% compared to querying ICD9-CM and ICD10-CM data. Knowledge engineering was required to produce that performance; simple automated methods to generate concept sets had errors up to 10% (one outlier at 250%).\n\n\nDiscussion\nThe translation of data from source vocabularies to Systematized Nomenclature of Medicine - Clinical Terms (SNOMED CT) resulted in very small error rates that were an order of magnitude smaller than other error sources.\n\n\nConclusion\nIt appears possible to map diagnoses from disparate vocabularies to a single clinical vocabulary and carry out research using a single set of definitions, thus improving efficiency and transportability of research."
}
] |
JMIR Medical Informatics | 31769759 | PMC6913619 | 10.2196/14502 | Using a Large Margin Context-Aware Convolutional Neural Network to Automatically Extract Disease-Disease Association from Literature: Comparative Analytic Study | Background: Research on disease-disease association (DDA), like comorbidity and complication, provides important insights into disease treatment and drug discovery, and a large body of the literature has been published in the field. However, using current search tools, it is not easy for researchers to retrieve information on the latest DDA findings. First, comorbidity and complication keywords pull up large numbers of PubMed studies. Second, disease is not highlighted in search results. Finally, DDA is not identified, as currently no disease-disease association extraction (DDAE) dataset or tools are available. Objective: As there are no available DDAE datasets or tools, this study aimed to develop (1) a DDAE dataset and (2) a neural network model for extracting DDA from the literature. Methods: In this study, we formulated DDAE as a supervised machine learning classification problem. To develop the system, we first built a DDAE dataset. We then employed two machine learning models, support vector machine and convolutional neural network, to extract DDA. Furthermore, we evaluated the effect of using the output layer as features of the support vector machine-based model. Finally, we implemented a large margin context-aware convolutional neural network architecture to integrate context features and convolutional neural networks through the large margin function. Results: Our DDAE dataset consisted of 521 PubMed abstracts. Experimental results showed that the support vector machine-based approach achieved an F1 measure of 80.32%, which is higher than the convolutional neural network-based approach (73.32%). Using the output layer of the convolutional neural network as a feature for the support vector machine does not further improve the performance of the support vector machine. However, our large margin context-aware convolutional neural network achieved the highest F1 measure of 84.18% and demonstrated that combining the hinge loss function of the support vector machine with a convolutional neural network into a single neural network architecture outperforms other approaches. Conclusions: To facilitate the development of text-mining research for DDAE, we developed the first publicly available DDAE dataset consisting of disease mentions, Medical Subject Heading IDs, and relation annotations. We developed different conventional machine learning models and neural network architectures and evaluated their effects on our DDAE dataset. To further improve DDAE performance, we propose a large margin context-aware convolutional neural network model for DDAE that outperforms other approaches. | Related Work
In this section, we first review published disease annotation datasets. Then, we briefly review different methods of relation extraction in biomedical domains.
Disease Annotation Datasets
Before identifying DDAs, we have to identify diseases in the text first. Fortunately, there are many datasets for developing such disease name recognition and normalization systems. The National Center for Biotechnology Information (NCBI) disease dataset [26] is the most widely used. For instance, Leaman and Lu [9] proposed a semi-Markov model trained on an NCBI disease dataset that achieved an F1 measure of 80.7%.
However, DDAs are not annotated in the NCBI dataset abstracts, limiting its usefulness for the DDAE task. As DDAs can give insights into disease etiology and treatment, many studies focus on generating DDA networks [1-5]. For example, Sun et al [4] used disease-gene associations in the Online Mendelian Inheritance in Man [27] to predict DDAs with similar phenotypes. Bang et al [3] used disease-gene relations to define a disease-disease network, and the causalities of disease pairs were confirmed using clinical results and metabolic pathways. However, the constructed networks lack text evidence and therefore cannot be used to develop a DDAE dataset. Xu et al [23] proposed a semisupervised iterative pattern-learning approach to learn DDA patterns from PubMed abstracts. They constructed a disease-disease risk relationship knowledge base (dRiskKB) consisting of 34,000 unique disease pairs. However, there are some limitations of dRiskKB that make it hard to use in developing DDAE systems. First, dRiskKB only provides positive DDA sentences. Owing to the lack of negative instances, it cannot be used to train ML-based classifiers. In addition, as the development of dRiskKB is based on a pattern-learning approach, it only includes DDA sentences with very simple structures and thus is not ideal for training a DDA system capable of analyzing complicated sentences. To solve the above problems, we developed a DDAE dataset. Our dataset was different from dRiskKB in 3 aspects. First, our DDAE dataset contained positive, negative, and null DDAs. Second, it did not use patterns to annotate DDAs and therefore included DDA sentences with more complex expressions. Finally, it annotated DDAs in the entire abstract, allowing an ML-based classifier to use document-level features.
Relation Extraction
Rule-based approaches are commonly used in new domains or tasks that do not have large-scale annotated datasets. Lee et al's [28] approach is an example. They extracted protein-protein interactions (PPIs) from plain text using handcrafted dependency rules. Their approach did not require a training set, but it achieved a high precision of 97.4% on the Artificial Intelligence in Medicine (AIMed) dataset [29]. However, it was difficult for them to create rules that could extract all PPIs, and their system therefore achieved a low recall of 23.6%. Moreover, Nguyen et al [30] used predicate-argument structure (PAS) [31] rules to extract more general relations, including PPI and drug-drug interaction. Their rules detected PPIs by examining where relation verbs and proteins are located in the spans of predicates and arguments. Their approach required less effort to design rules and was able to adapt to different relation types. Compared with Lee et al's system, it achieved a higher recall of 52.6% on the AIMed dataset but a lower precision of 30.4%. ML-based approaches can usually achieve relatively higher performance than rule-based ones. For instance, Zhang et al [32] used hybrid feature-based and tree-based kernels implemented with SVM-LIGHT-TK [33] for PPI extraction. The feature-based kernel uses SENNA (Semantic/syntactic Extraction using a Neural Network Architecture)'s pretrained word-embedding model [34]. In the tree-based kernel configuration, the sentence dependency structure is used as input. The structure is decomposed into substructures and then transformed into one-hot encoding features for SVMs.
Zhang et al's approach achieved an F score of 69.7% on the AIMed dataset, which is higher than Lee et al's 26.3% and Nguyen et al's 38.5%. In addition to sentence-level features, document-level features are also useful in relation extraction. Peng et al [17] proposed an SVM-based approach for document-level chemical-disease relation (CDR) extraction. They used statistical features, such as whether a chemical or disease name appears in the title, to classify document-level chemical-disease pairs. By adding the features, they improved their F score from a baseline of 46.82% to 57.51% on the BioCreative V CDR dataset [35]. Our LC-CNN is partly inspired by Peng et al's [17] statistical features; our context vector adopts document-level features for sentence-level DDA classification. Although the abovementioned feature-based approaches have made gains in many relation extraction tasks [36-38], it is difficult to find novel features to further improve performance. Several researchers are exploring deep learning approaches as a way forward. For instance, Peng and Lu [39] proposed a multichannel dependency-based CNN model (McDepCNN). McDepCNN uses 2 channels to represent an input sentence. One is the word-embedding layer, whereas the other is the head-word-embedding layer. Each embedding layer concatenates pretrained word-embedding vectors, one-hot encodings of part of speech, chunks, named entity labels, and dependency words. In PPI prediction, Peng and Lu's CNN model achieved F scores of 63.5% on AIMed and 65.3% on BioInfer. For drug-drug interaction extraction, Zhao et al [20] proposed a syntax CNN (SCNN) that integrates syntactic features, including words, predicates, and shortest dependency paths, into a CNN. They trained their model with word2vec [40] and the Enju parser [31]. The Enju parser breaks the sentence into PASs, and non-PAS words or phrases are removed. The pruned sentences are then used to train the word-embedding model. Their approach achieved an F score of 68.6% on the 2013 DDIExtraction dataset. Our LC-CNN was also inspired by Zhao et al's [20] SCNN architecture with 3 main differences. First, we replaced the log loss function with the hinge loss function. Second, SCNN uses a fully connected layer for traditional features before merging them with the CNN's output. However, LC-CNN directly merges the CNN's output with traditional features. Finally, SCNN's traditional features only use sentence-level information, whereas LC-CNN also uses both sentence-level and document-level features. | [
"24232732",
"24895436",
"27587660",
"25228247",
"27209279",
"27366724",
"26468341",
"24729964",
"27283952",
"29272325",
"26380306",
"30560325",
"27173521",
"31414701",
"25864936",
"28316651",
"27466626",
"24725842",
"24393765",
"15608251",
"15811782",
"25887686",
"22595237",
"25861377",
"27454611",
"10928714",
"23969135",
"23714032",
"24339694"
] | [
{
"pmid": "24232732",
"title": "Discovering disease-disease associations by fusing systems-level molecular data.",
"abstract": "The advent of genome-scale genetic and genomic studies allows new insight into disease classification. Recently, a shift was made from linking diseases simply based on their shared genes towards systems-level integration of molecular data. Here, we aim to find relationships between diseases based on evidence from fusing all available molecular interaction and ontology data. We propose a multi-level hierarchy of disease classes that significantly overlaps with existing disease classification. In it, we find 14 disease-disease associations currently not present in Disease Ontology and provide evidence for their relationships through comorbidity data and literature curation. Interestingly, even though the number of known human genetic interactions is currently very small, we find they are the most important predictor of a link between diseases. Finally, we show that omission of any one of the included data sources reduces prediction quality, further highlighting the importance in the paradigm shift towards systems-level data fusion."
},
{
"pmid": "24895436",
"title": "DiseaseConnect: a comprehensive web server for mechanism-based disease-disease connections.",
"abstract": "The DiseaseConnect (http://disease-connect.org) is a web server for analysis and visualization of a comprehensive knowledge on mechanism-based disease connectivity. The traditional disease classification system groups diseases with similar clinical symptoms and phenotypic traits. Thus, diseases with entirely different pathologies could be grouped together, leading to a similar treatment design. Such problems could be avoided if diseases were classified based on their molecular mechanisms. Connecting diseases with similar pathological mechanisms could inspire novel strategies on the effective repositioning of existing drugs and therapies. Although there have been several studies attempting to generate disease connectivity networks, they have not yet utilized the enormous and rapidly growing public repositories of disease-related omics data and literature, two primary resources capable of providing insights into disease connections at an unprecedented level of detail. Our DiseaseConnect, the first public web server, integrates comprehensive omics and literature data, including a large amount of gene expression data, Genome-Wide Association Studies catalog, and text-mined knowledge, to discover disease-disease connectivity via common molecular mechanisms. Moreover, the clinical comorbidity data and a comprehensive compilation of known drug-disease relationships are additionally utilized for advancing the understanding of the disease landscape and for facilitating the mechanism-based development of new drug treatments."
},
{
"pmid": "27587660",
"title": "Causality modeling for directed disease network.",
"abstract": "MOTIVATION\nCausality between two diseases is valuable information as subsidiary information for medicine which is intended for prevention, diagnostics and treatment. Conventional cohort-centric researches are able to obtain very objective results, however, they demands costly experimental expense and long period of time. Recently, data source to clarify causality has been diversified: available information includes gene, protein, metabolic pathway and clinical information. By taking full advantage of those pieces of diverse information, we may extract causalities between diseases, alternatively to cohort-centric researches.\n\n\nMETHOD\nIn this article, we propose a new approach to define causality between diseases. In order to find causality, three different networks were constructed step by step. Each step has different data sources and different analytical methods, and the prior step sifts causality information to the next step. In the first step, a network defines association between diseases by utilizing disease-gene relations. And then, potential causalities of disease pairs are defined as a network by using prevalence and comorbidity information from clinical results. Finally, disease causalities are confirmed by a network defined from metabolic pathways.\n\n\nRESULTS\nThe proposed method is applied to data which is collected from database such as MeSH, OMIM, HuDiNe, KEGG and PubMed. The experimental results indicated that disease causality that we found is 19 times higher than that of random guessing. The resulting pairs of causal-effected diseases are validated on medical literatures.\n\n\nAVAILABILITY AND IMPLEMENTATION\nhttp://www.alphaminers.net\n\n\nCONTACT\[email protected]\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "25228247",
"title": "Predicting disease associations via biological network analysis.",
"abstract": "BACKGROUND\nUnderstanding the relationship between diseases based on the underlying biological mechanisms is one of the greatest challenges in modern biology and medicine. Exploring disease-disease associations by using system-level biological data is expected to improve our current knowledge of disease relationships, which may lead to further improvements in disease diagnosis, prognosis and treatment.\n\n\nRESULTS\nWe took advantage of diverse biological data including disease-gene associations and a large-scale molecular network to gain novel insights into disease relationships. We analysed and compared four publicly available disease-gene association datasets, then applied three disease similarity measures, namely annotation-based measure, function-based measure and topology-based measure, to estimate the similarity scores between diseases. We systematically evaluated disease associations obtained by these measures against a statistical measure of comorbidity which was derived from a large number of medical patient records. Our results show that the correlation between our similarity measures and comorbidity scores is substantially higher than expected at random, confirming that our similarity measures are able to recover comorbidity associations. We also demonstrated that our predicted disease associations correlated with disease associations generated from genome-wide association studies significantly higher than expected at random. Furthermore, we evaluated our predicted disease associations via mining the literature on PubMed, and presented case studies to demonstrate how these novel disease associations can be used to enhance our current knowledge of disease relationships.\n\n\nCONCLUSIONS\nWe present three similarity measures for predicting disease associations. The strong correlation between our predictions and known disease associations demonstrates the ability of our measures to provide novel insights into disease relationships."
},
{
"pmid": "27209279",
"title": "DNetDB: The human disease network database based on dysfunctional regulation mechanism.",
"abstract": "Disease similarity study provides new insights into disease taxonomy, pathogenesis, which plays a guiding role in diagnosis and treatment. The early studies were limited to estimate disease similarities based on clinical manifestations, disease-related genes, medical vocabulary concepts or registry data, which were inevitably biased to well-studied diseases and offered small chance of discovering novel findings in disease relationships. In other words, genome-scale expression data give us another angle to address this problem since simultaneous measurement of the expression of thousands of genes allows for the exploration of gene transcriptional regulation, which is believed to be crucial to biological functions. Although differential expression analysis based methods have the potential to explore new disease relationships, it is difficult to unravel the upstream dysregulation mechanisms of diseases. We therefore estimated disease similarities based on gene expression data by using differential coexpression analysis, a recently emerging method, which has been proved to be more potential to capture dysfunctional regulation mechanisms than differential expression analysis. A total of 1,326 disease relationships among 108 diseases were identified, and the relevant information constituted the human disease network database (DNetDB). Benefiting from the use of differential coexpression analysis, the potential common dysfunctional regulation mechanisms shared by disease pairs (i.e. disease relationships) were extracted and presented. Statistical indicators, common disease-related genes and drugs shared by disease pairs were also included in DNetDB. In total, 1,326 disease relationships among 108 diseases, 5,598 pathways, 7,357 disease-related genes and 342 disease drugs are recorded in DNetDB, among which 3,762 genes and 148 drugs are shared by at least two diseases. DNetDB is the first database focusing on disease similarity from the viewpoint of gene regulation mechanism. It provides an easy-to-use web interface to search and browse the disease relationships and thus helps to systematically investigate etiology and pathogenesis, perform drug repositioning, and design novel therapeutic interventions.Database URL: http://app.scbit.org/DNetDB/ #."
},
{
"pmid": "27366724",
"title": "Microvasular and macrovascular complications in diabetes mellitus: Distinct or continuum?",
"abstract": "Diabetes and related complications are associated with long-term damage and failure of various organ systems. The line of demarcation between the pathogenic mechanisms of microvascular and macrovascular complications of diabetes and differing responses to therapeutic interventions is blurred. Diabetes induces changes in the microvasculature, causing extracellular matrix protein synthesis, and capillary basement membrane thickening which are the pathognomic features of diabetic microangiopathy. These changes in conjunction with advanced glycation end products, oxidative stress, low grade inflammation, and neovascularization of vasa vasorum can lead to macrovascular complications. Hyperglycemia is the principal cause of microvasculopathy but also appears to play an important role in causation of macrovasculopathy. There is thought to be an intersection between micro and macro vascular complications, but the two disorders seem to be strongly interconnected, with micro vascular diseases promoting atherosclerosis through processes such as hypoxia and changes in vasa vasorum. It is thus imperative to understand whether microvascular complications distinctly precede macrovascular complications or do both of them progress simultaneously as a continuum. This will allow re-focusing on the clinical issues with a unifying perspective which can improve type 2 diabetes mellitus outcomes."
},
{
"pmid": "26468341",
"title": "Diabetes and cardiovascular disease: Epidemiology, biological mechanisms, treatment recommendations and future research.",
"abstract": "The incidence of diabetes mellitus (DM) continues to rise and has quickly become one of the most prevalent and costly chronic diseases worldwide. A close link exists between DM and cardiovascular disease (CVD), which is the most prevalent cause of morbidity and mortality in diabetic patients. Cardiovascular (CV) risk factors such as obesity, hypertension and dyslipidemia are common in patients with DM, placing them at increased risk for cardiac events. In addition, many studies have found biological mechanisms associated with DM that independently increase the risk of CVD in diabetic patients. Therefore, targeting CV risk factors in patients with DM is critical to minimize the long-term CV complications of the disease. This paper summarizes the relationship between diabetes and CVD, examines possible mechanisms of disease progression, discusses current treatment recommendations, and outlines future research directions."
},
{
"pmid": "24729964",
"title": "Evaluating word representation features in biomedical named entity recognition tasks.",
"abstract": "Biomedical Named Entity Recognition (BNER), which extracts important entities such as genes and proteins, is a crucial step of natural language processing in the biomedical domain. Various machine learning-based approaches have been applied to BNER tasks and showed good performance. In this paper, we systematically investigated three different types of word representation (WR) features for BNER, including clustering-based representation, distributional representation, and word embeddings. We selected one algorithm from each of the three types of WR features and applied them to the JNLPBA and BioCreAtIvE II BNER tasks. Our results showed that all the three WR algorithms were beneficial to machine learning-based BNER systems. Moreover, combining these different types of WR features further improved BNER performance, indicating that they are complementary to each other. By combining all the three types of WR features, the improvements in F-measure on the BioCreAtIvE II GM and JNLPBA corpora were 3.75% and 1.39%, respectively, when compared with the systems using baseline features. To the best of our knowledge, this is the first study to systematically evaluate the effect of three different types of WR features for BNER tasks."
},
{
"pmid": "27283952",
"title": "TaggerOne: joint named entity recognition and normalization with semi-Markov Models.",
"abstract": "MOTIVATION\nText mining is increasingly used to manage the accelerating pace of the biomedical literature. Many text mining applications depend on accurate named entity recognition (NER) and normalization (grounding). While high performing machine learning methods trainable for many entity types exist for NER, normalization methods are usually specialized to a single entity type. NER and normalization systems are also typically used in a serial pipeline, causing cascading errors and limiting the ability of the NER system to directly exploit the lexical information provided by the normalization.\n\n\nMETHODS\nWe propose the first machine learning model for joint NER and normalization during both training and prediction. The model is trainable for arbitrary entity types and consists of a semi-Markov structured linear classifier, with a rich feature approach for NER and supervised semantic indexing for normalization. We also introduce TaggerOne, a Java implementation of our model as a general toolkit for joint NER and normalization. TaggerOne is not specific to any entity type, requiring only annotated training data and a corresponding lexicon, and has been optimized for high throughput.\n\n\nRESULTS\nWe validated TaggerOne with multiple gold-standard corpora containing both mention- and concept-level annotations. Benchmarking results show that TaggerOne achieves high performance on diseases (NCBI Disease corpus, NER f-score: 0.829, normalization f-score: 0.807) and chemicals (BioCreative 5 CDR corpus, NER f-score: 0.914, normalization f-score 0.895). These results compare favorably to the previous state of the art, notwithstanding the greater flexibility of the model. We conclude that jointly modeling NER and normalization greatly improves performance.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe TaggerOne source code and an online demonstration are available at: http://www.ncbi.nlm.nih.gov/bionlp/taggerone\n\n\nCONTACT\[email protected]\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "29272325",
"title": "GRAM-CNN: a deep learning approach with local context for named entity recognition in biomedical text.",
"abstract": "Motivation\nBest performing named entity recognition (NER) methods for biomedical literature are based on hand-crafted features or task-specific rules, which are costly to produce and difficult to generalize to other corpora. End-to-end neural networks achieve state-of-the-art performance without hand-crafted features and task-specific knowledge in non-biomedical NER tasks. However, in the biomedical domain, using the same architecture does not yield competitive performance compared with conventional machine learning models.\n\n\nResults\nWe propose a novel end-to-end deep learning approach for biomedical NER tasks that leverages the local contexts based on n-gram character and word embeddings via Convolutional Neural Network (CNN). We call this approach GRAM-CNN. To automatically label a word, this method uses the local information around a word. Therefore, the GRAM-CNN method does not require any specific knowledge or feature engineering and can be theoretically applied to a wide range of existing NER problems. The GRAM-CNN approach was evaluated on three well-known biomedical datasets containing different BioNER entities. It obtained an F1-score of 87.26% on the Biocreative II dataset, 87.26% on the NCBI dataset and 72.57% on the JNLPBA dataset. Those results put GRAM-CNN in the lead of the biological NER methods. To the best of our knowledge, we are the first to apply CNN based structures to BioNER problems.\n\n\nAvailability and implementation\nThe GRAM-CNN source code, datasets and pre-trained model are available online at: https://github.com/valdersoul/GRAM-CNN.\n\n\nContact\[email protected] or [email protected].\n\n\nSupplementary information\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "26380306",
"title": "GNormPlus: An Integrative Approach for Tagging Genes, Gene Families, and Protein Domains.",
"abstract": "The automatic recognition of gene names and their associated database identifiers from biomedical text has been widely studied in recent years, as these tasks play an important role in many downstream text-mining applications. Despite significant previous research, only a small number of tools are publicly available and these tools are typically restricted to detecting only mention level gene names or only document level gene identifiers. In this work, we report GNormPlus: an end-to-end and open source system that handles both gene mention and identifier detection. We created a new corpus of 694 PubMed articles to support our development of GNormPlus, containing manual annotations for not only gene names and their identifiers, but also closely related concepts useful for gene name disambiguation, such as gene families and protein domains. GNormPlus integrates several advanced text-mining techniques, including SimConcept for resolving composite gene names. As a result, GNormPlus compares favorably to other state-of-the-art methods when evaluated on two widely used public benchmarking datasets, achieving 86.7% F1-score on the BioCreative II Gene Normalization task dataset and 50.1% F1-score on the BioCreative III Gene Normalization task dataset. The GNormPlus source code and its annotated corpus are freely available, and the results of applying GNormPlus to the entire PubMed are freely accessible through our web-based tool PubTator."
},
{
"pmid": "30560325",
"title": "Statistical principle-based approach for gene and protein related object recognition.",
"abstract": "The large number of chemical and pharmaceutical patents has attracted researchers doing biomedical text mining to extract valuable information such as chemicals, genes and gene products. To facilitate gene and gene product annotations in patents, BioCreative V.5 organized a gene- and protein-related object (GPRO) recognition task, in which participants were assigned to identify GPRO mentions and determine whether they could be linked to their unique biological database records. In this paper, we describe the system constructed for this task. Our system is based on two different NER approaches: the statistical-principle-based approach (SPBA) and conditional random fields (CRF). Therefore, we call our system SPBA-CRF. SPBA is an interpretable machine-learning framework for gene mention recognition. The predictions of SPBA are used as features for our CRF-based GPRO recognizer. The recognizer was developed for identifying chemical mentions in patents, and we adapted it for GPRO recognition. In the BioCreative V.5 GPRO recognition task, SPBA-CRF obtained an F-score of 73.73% on the evaluation metric of GPRO type 1 and an F-score of 78.66% on the evaluation metric of combining GPRO types 1 and 2. Our results show that SPBA trained on an external NER dataset can perform reasonably well on the partial match evaluation metric. Furthermore, SPBA can significantly improve performance of the CRF-based recognizer trained on the GPRO dataset."
},
{
"pmid": "27173521",
"title": "Mining chemical patents with an ensemble of open systems.",
"abstract": "The significant amount of medicinal chemistry information contained in patents makes them an attractive target for text mining. In this manuscript, we describe systems for named entity recognition (NER) of chemicals and genes/proteins in patents, using the CEMP (for chemicals) and GPRO (for genes/proteins) corpora provided by the CHEMDNER task at BioCreative V. Our chemical NER system is an ensemble of five open systems, including both versions of tmChem, our previous work on chemical NER. Their output is combined using a machine learning classification approach. Our chemical NER system obtained 0.8752 precision and 0.9129 recall, for 0.8937 f-score on the CEMP task. Our gene/protein NER system is an extension of our previous work for gene and protein NER, GNormPlus. This system obtained a performance of 0.8143 precision and 0.8141 recall, for 0.8137 f-score on the GPRO task. Both systems achieved the highest performance in their respective tasks at BioCreative V. We conclude that an ensemble of independently-created open systems is sufficiently diverse to significantly improve performance over any individual system, even when they use a similar approach.Database URL: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/tmTools/."
},
{
"pmid": "31414701",
"title": "NERChem: adapting NERBio to chemical patents via full-token features and named entity feature with chemical sub-class composition.",
"abstract": "Chemical patents contain detailed information on novel chemical compounds that is valuable to the chemical and pharmaceutical industries. In this paper, we introduce a system, NERChem that can recognize chemical named entity mentions in chemical patents. NERChem is based on the conditional random fields model (CRF). Our approach incorporates ( 1 ) class composition, which is used for combining chemical classes whose naming conventions are similar; ( 2 ) BioNE features, which are used for distinguishing chemical mentions from other biomedical NE mentions in the patents; and ( 3 ) full-token word features, which are used to resolve the tokenization granularity problem. We evaluated our approach on the BioCreative V CHEMDNER-patent corpus, and achieved an F-score of 87.17% in the Chemical Entity Mention in Patents (CEMP) task and a sensitivity of 98.58% in the Chemical Passage Detection (CPD) task, ranking alongside the top systems. Database URL: Our NERChem web-based system is publicly available at iisrserv.csie.n cu.edu.tw/nerchem."
},
{
"pmid": "25864936",
"title": "An approach to improve kernel-based Protein-Protein Interaction extraction by learning from large-scale network data.",
"abstract": "Protein-Protein Interaction extraction (PPIe) from biomedical literatures is an important task in biomedical text mining and has achieved desirable results on the annotated datasets. However, the traditional machine learning methods on PPIe suffer badly from vocabulary gap and data sparseness, which weakens classification performance. In this work, an approach capturing external information from the web-based data is introduced to address these problems and boost the existing methods. The approach involves three kinds of word representation techniques: distributed representation, vector clustering and Brown clusters. Experimental results show that our method outperforms the state-of-the-art methods on five publicly available corpora. Our code and data are available at: http://chaoslog.com/improving-kernel-based-protein-protein-interaction-extraction-by-unsupervised-word-representation-codes-and-data.html."
},
{
"pmid": "28316651",
"title": "Improving chemical disease relation extraction with rich features and weakly labeled data.",
"abstract": "BACKGROUND\nDue to the importance of identifying relations between chemicals and diseases for new drug discovery and improving chemical safety, there has been a growing interest in developing automatic relation extraction systems for capturing these relations from the rich and rapid-growing biomedical literature. In this work we aim to build on current advances in named entity recognition and a recent BioCreative effort to further improve the state of the art in biomedical relation extraction, in particular for the chemical-induced disease (CID) relations.\n\n\nRESULTS\nWe propose a rich-feature approach with Support Vector Machine to aid in the extraction of CIDs from PubMed articles. Our feature vector includes novel statistical features, linguistic knowledge, and domain resources. We also incorporate the output of a rule-based system as features, thus combining the advantages of rule- and machine learning-based systems. Furthermore, we augment our approach with automatically generated labeled text from an existing knowledge base to improve performance without additional cost for corpus construction. To evaluate our system, we perform experiments on the human-annotated BioCreative V benchmarking dataset and compare with previous results. When trained using only BioCreative V training and development sets, our system achieves an F-score of 57.51 %, which already compares favorably to previous methods. Our system performance was further improved to 61.01 % in F-score when augmented with additional automatically generated weakly labeled data.\n\n\nCONCLUSIONS\nOur text-mining approach demonstrates state-of-the-art performance in disease-chemical relation extraction. More importantly, this work exemplifies the use of (freely available) curated document-level annotations in existing biomedical databases, which are largely overlooked in text-mining system development."
},
{
"pmid": "27466626",
"title": "Drug drug interaction extraction from biomedical literature using syntax convolutional neural network.",
"abstract": "MOTIVATION\nDetecting drug-drug interaction (DDI) has become a vital part of public health safety. Therefore, using text mining techniques to extract DDIs from biomedical literature has received great attentions. However, this research is still at an early stage and its performance has much room to improve.\n\n\nRESULTS\nIn this article, we present a syntax convolutional neural network (SCNN) based DDI extraction method. In this method, a novel word embedding, syntax word embedding, is proposed to employ the syntactic information of a sentence. Then the position and part of speech features are introduced to extend the embedding of each word. Later, auto-encoder is introduced to encode the traditional bag-of-words feature (sparse 0-1 vector) as the dense real value vector. Finally, a combination of embedding-based convolutional features and traditional features are fed to the softmax classifier to extract DDIs from biomedical literature. Experimental results on the DDIExtraction 2013 corpus show that SCNN obtains a better performance (an F-score of 0.686) than other state-of-the-art methods.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe source code is available for academic use at http://202.118.75.18:8080/DDI/SCNN-DDI.zip CONTACT: [email protected] information: Supplementary data are available at Bioinformatics online."
},
{
"pmid": "24725842",
"title": "dRiskKB: a large-scale disease-disease risk relationship knowledge base constructed from biomedical text.",
"abstract": "BACKGROUND\nDiscerning the genetic contributions to complex human diseases is a challenging mandate that demands new types of data and calls for new avenues for advancing the state-of-the-art in computational approaches to uncovering disease etiology. Systems approaches to studying observable phenotypic relationships among diseases are emerging as an active area of research for both novel disease gene discovery and drug repositioning. Currently, systematic study of disease relationships on a phenome-wide scale is limited due to the lack of large-scale machine understandable disease phenotype relationship knowledge bases. Our study innovates a semi-supervised iterative pattern learning approach that is used to build an precise, large-scale disease-disease risk relationship (D1 → D2) knowledge base (dRiskKB) from a vast corpus of free-text published biomedical literature.\n\n\nRESULTS\n21,354,075 MEDLINE records comprised the text corpus under study. First, we used one typical disease risk-specific syntactic pattern (i.e. \"D1 due to D2\") as a seed to automatically discover other patterns specifying similar semantic relationships among diseases. We then extracted D1 → D2 risk pairs from MEDLINE using the learned patterns. We manually evaluated the precisions of the learned patterns and extracted pairs. Finally, we analyzed the correlations between disease-disease risk pairs and their associated genes and drugs. The newly created dRiskKB consists of a total of 34,448 unique D1 → D2 pairs, representing the risk-specific semantic relationships among 12,981 diseases with each disease linked to its associated genes and drugs. The identified patterns are highly precise (average precision of 0.99) in specifying the risk-specific relationships among diseases. The precisions of extracted pairs are 0.919 for those that are exactly matched and 0.988 for those that are partially matched. By comparing the iterative pattern approach starting from different seeds, we demonstrated that our algorithm is robust in terms of seed choice. We show that diseases and their risk diseases as well as diseases with similar risk profiles tend to share both genes and drugs.\n\n\nCONCLUSIONS\nThis unique dRiskKB, when combined with existing phenotypic, genetic, and genomic datasets, can have profound implications in our deeper understanding of disease etiology and in drug repositioning."
},
{
"pmid": "24393765",
"title": "NCBI disease corpus: a resource for disease name recognition and concept normalization.",
"abstract": "Information encoded in natural language in biomedical literature publications is only useful if efficient and reliable ways of accessing and analyzing that information are available. Natural language processing and text mining tools are therefore essential for extracting valuable information, however, the development of powerful, highly effective tools to automatically detect central biomedical concepts such as diseases is conditional on the availability of annotated corpora. This paper presents the disease name and concept annotations of the NCBI disease corpus, a collection of 793 PubMed abstracts fully annotated at the mention and concept level to serve as a research resource for the biomedical natural language processing community. Each PubMed abstract was manually annotated by two annotators with disease mentions and their corresponding concepts in Medical Subject Headings (MeSH®) or Online Mendelian Inheritance in Man (OMIM®). Manual curation was performed using PubTator, which allowed the use of pre-annotations as a pre-step to manual annotations. Fourteen annotators were randomly paired and differing annotations were discussed for reaching a consensus in two annotation phases. In this setting, a high inter-annotator agreement was observed. Finally, all results were checked against annotations of the rest of the corpus to assure corpus-wide consistency. The public release of the NCBI disease corpus contains 6892 disease mentions, which are mapped to 790 unique disease concepts. Of these, 88% link to a MeSH identifier, while the rest contain an OMIM identifier. We were able to link 91% of the mentions to a single disease concept, while the rest are described as a combination of concepts. In order to help researchers use the corpus to design and test disease identification methods, we have prepared the corpus as training, testing and development sets. To demonstrate its utility, we conducted a benchmarking experiment where we compared three different knowledge-based disease normalization methods with a best performance in F-measure of 63.7%. These results show that the NCBI disease corpus has the potential to significantly improve the state-of-the-art in disease name recognition and normalization research, by providing a high-quality gold standard thus enabling the development of machine-learning based approaches for such tasks. The NCBI disease corpus, guidelines and other associated resources are available at: http://www.ncbi.nlm.nih.gov/CBBresearch/Dogan/DISEASE/."
},
{
"pmid": "15608251",
"title": "Online Mendelian Inheritance in Man (OMIM), a knowledgebase of human genes and genetic disorders.",
"abstract": "Online Mendelian Inheritance in Man (OMIM) is a comprehensive, authoritative and timely knowledgebase of human genes and genetic disorders compiled to support human genetics research and education and the practice of clinical genetics. Started by Dr Victor A. McKusick as the definitive reference Mendelian Inheritance in Man, OMIM (http://www.ncbi.nlm.nih.gov/omim/) is now distributed electronically by the National Center for Biotechnology Information, where it is integrated with the Entrez suite of databases. Derived from the biomedical literature, OMIM is written and edited at Johns Hopkins University with input from scientists and physicians around the world. Each OMIM entry has a full-text summary of a genetically determined phenotype and/or gene and has numerous links to other genetic databases such as DNA and protein sequence, PubMed references, general and locus-specific mutation databases, HUGO nomenclature, MapViewer, GeneTests, patient support groups and many others. OMIM is an easy and straightforward portal to the burgeoning information in human genetics."
},
{
"pmid": "15811782",
"title": "Comparative experiments on learning information extractors for proteins and their interactions.",
"abstract": "OBJECTIVE\nAutomatically extracting information from biomedical text holds the promise of easily consolidating large amounts of biological knowledge in computer-accessible form. This strategy is particularly attractive for extracting data relevant to genes of the human genome from the 11 million abstracts in Medline. However, extraction efforts have been frustrated by the lack of conventions for describing human genes and proteins. We have developed and evaluated a variety of learned information extraction systems for identifying human protein names in Medline abstracts and subsequently extracting information on interactions between the proteins.\n\n\nMETHODS AND MATERIAL\nWe used a variety of machine learning methods to automatically develop information extraction systems for extracting information on gene/protein name, function and interactions from Medline abstracts. We present cross-validated results on identifying human proteins and their interactions by training and testing on a set of approximately 1000 manually-annotated Medline abstracts that discuss human genes/proteins.\n\n\nRESULTS\nWe demonstrate that machine learning approaches using support vector machines and maximum entropy are able to identify human proteins with higher accuracy than several previous approaches. We also demonstrate that various rule induction methods are able to identify protein interactions with higher precision than manually-developed rules.\n\n\nCONCLUSION\nOur results show that it is promising to use machine learning to automatically build systems for extracting information from biomedical text. The results also give a broad picture of the relative strengths of a wide variety of methods when tested on a reasonably large human-annotated corpus."
},
{
"pmid": "25887686",
"title": "Wide-coverage relation extraction from MEDLINE using deep syntax.",
"abstract": "BACKGROUND\nRelation extraction is a fundamental technology in biomedical text mining. Most of the previous studies on relation extraction from biomedical literature have focused on specific or predefined types of relations, which inherently limits the types of the extracted relations. With the aim of fully leveraging the knowledge described in the literature, we address much broader types of semantic relations using a single extraction framework.\n\n\nRESULTS\nOur system, which we name PASMED, extracts diverse types of binary relations from biomedical literature using deep syntactic patterns. Our experimental results demonstrate that it achieves a level of recall considerably higher than the state of the art, while maintaining reasonable precision. We have then applied PASMED to the whole MEDLINE corpus and extracted more than 137 million semantic relations. The extracted relations provide a quantitative understanding of what kinds of semantic relations are actually described in MEDLINE and can be ultimately extracted by (possibly type-specific) relation extraction systems.\n\n\nCONCLUSION\nPASMED extracts a large number of relations that have previously been missed by existing text mining systems. The entire collection of the relations extracted from MEDLINE is publicly available in machine-readable form, so that it can serve as a potential knowledge base for high-level text-mining applications."
},
{
"pmid": "22595237",
"title": "Hash subgraph pairwise kernel for protein-protein interaction extraction.",
"abstract": "Extracting protein-protein interaction (PPI) from biomedical literature is an important task in biomedical text mining (BioTM). In this paper, we propose a hash subgraph pairwise (HSP) kernel-based approach for this task. The key to the novel kernel is to use the hierarchical hash labels to express the structural information of subgraphs in a linear time. We apply the graph kernel to compute dependency graphs representing the sentence structure for protein-protein interaction extraction task, which can efficiently make use of full graph structural information, and particularly capture the contiguous topological and label information ignored before. We evaluate the proposed approach on five publicly available PPI corpora. The experimental results show that our approach significantly outperforms all-path kernel approach on all five corpora and achieves state-of-the-art performance."
},
{
"pmid": "25861377",
"title": "Feature engineering for drug name recognition in biomedical texts: feature conjunction and feature selection.",
"abstract": "Drug name recognition (DNR) is a critical step for drug information extraction. Machine learning-based methods have been widely used for DNR with various types of features such as part-of-speech, word shape, and dictionary feature. Features used in current machine learning-based methods are usually singleton features which may be due to explosive features and a large number of noisy features when singleton features are combined into conjunction features. However, singleton features that can only capture one linguistic characteristic of a word are not sufficient to describe the information for DNR when multiple characteristics should be considered. In this study, we explore feature conjunction and feature selection for DNR, which have never been reported. We intuitively select 8 types of singleton features and combine them into conjunction features in two ways. Then, Chi-square, mutual information, and information gain are used to mine effective features. Experimental results show that feature conjunction and feature selection can improve the performance of the DNR system with a moderate number of features and our DNR system significantly outperforms the best system in the DDIExtraction 2013 challenge."
},
{
"pmid": "27454611",
"title": "Protein-protein interaction extraction with feature selection by evaluating contribution levels of groups consisting of related features.",
"abstract": "BACKGROUND\nProtein-protein interaction (PPI) extraction from published scientific articles is one key issue in biological research due to its importance in grasping biological processes. Despite considerable advances of recent research in automatic PPI extraction from articles, demand remains to enhance the performance of the existing methods.\n\n\nRESULTS\nOur feature-based method incorporates the strength of many kinds of diverse features, such as lexical and word context features derived from sentences, syntactic features derived from parse trees, and features using existing patterns to extract PPIs automatically from articles. Among these abundant features, we assemble the related features into four groups and define the contribution level (CL) for each group, which consists of related features. Our method consists of two steps. First, we divide the training set into subsets based on the structure of the sentence and the existence of significant keywords (SKs) and apply the sentence patterns given in advance to each subset. Second, we automatically perform feature selection based on the CL values of the four groups that consist of related features and the k-nearest neighbor algorithm (k-NN) through three approaches: (1) focusing on the group with the best contribution level (BEST1G); (2) unoptimized combination of three groups with the best contribution levels (U3G); (3) optimized combination of two groups with the best contribution levels (O2G).\n\n\nCONCLUSIONS\nOur method outperforms other state-of-the-art PPI extraction systems in terms of F-score on the HPRD50 corpus and achieves promising results that are comparable with these PPI extraction systems on other corpora. Further, our method always obtains the best F-score on all the corpora than when using k-NN only without exploiting the CLs of the groups of related features."
},
{
"pmid": "23969135",
"title": "DNorm: disease name normalization with pairwise learning to rank.",
"abstract": "MOTIVATION\nDespite the central role of diseases in biomedical research, there have been much fewer attempts to automatically determine which diseases are mentioned in a text-the task of disease name normalization (DNorm)-compared with other normalization tasks in biomedical text mining research.\n\n\nMETHODS\nIn this article we introduce the first machine learning approach for DNorm, using the NCBI disease corpus and the MEDIC vocabulary, which combines MeSH® and OMIM. Our method is a high-performing and mathematically principled framework for learning similarities between mentions and concept names directly from training data. The technique is based on pairwise learning to rank, which has not previously been applied to the normalization task but has proven successful in large optimization problems for information retrieval.\n\n\nRESULTS\nWe compare our method with several techniques based on lexical normalization and matching, MetaMap and Lucene. Our algorithm achieves 0.782 micro-averaged F-measure and 0.809 macro-averaged F-measure, an increase over the highest performing baseline method of 0.121 and 0.098, respectively.\n\n\nAVAILABILITY\nThe source code for DNorm is available at http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/DNorm, along with a web-based demonstration and links to the NCBI disease corpus. Results on PubMed abstracts are available in PubTator: http://www.ncbi.nlm.nih.gov/CBBresearch/Lu/Demo/PubTator ."
},
{
"pmid": "23714032",
"title": "[Causes of moderate to severe visual impairment and blindness in population aged 50 years or more in rural Shandong province].",
"abstract": "OBJECTIVE\nTo investigate the etiological spectrum of moderate to severe visual impairment and blindness in population aged 50 years or more in rural Shandong province, China.\n\n\nMETHODS\nA population based, random cluster sampling was used to screening the adults aged 50 years or more living in rural Shandong Province from April to July 2008. Three counties and one suburb representing the different levels of socioeconomic development within Shandong area were selected as the investigated areas. Geographically defined cluster sampling was used in randomly selecting a cross-section of residents aged ≥ 50 years from each county. Best corrected visual acuity and intra-ocular pressure were evaluated in those with presenting visual acuity ≤ 0.5 and suspected glaucoma respectively. The major causes of visual impairment and blindness were diagnosed in those with presenting visual acuity ≤ 0.3. According to the results of presenting visual acuity and best corrected visual acuity, the etiology constituent ratios of the moderate to severe visual impairment and blindness were analyzed respectively.\n\n\nRESULTS\nAccording to the number of people, the first three principal causes for blindness based on the presenting visual acuity were cataract (59.8%, 168/281), fundus disease (12.1%, 34/281) and corneal opacity (4.3%, 12/281) or ametropia (4.3%, 12/281). The first three principal causes for moderate to severe visual impairment and blindness were cataract (55.2%, 844/1530), uncorrected refractive error (18.2%, 278/1530) and fundus disease (11.9%, 182/1530). Based on the best corrected visual acuity, the first three principal causes for blindness were cataract (64.6%, 153/237), fundus disease (10.5%, 25/237) and corneal opacity (4.7%, 11/237), respectively. The first three principal causes for moderate to severe visual impairment and blindness were cataract (66.4%, 590/889), fundus disease (16.0%, 142/889) and optic nerve atrophy (3.0%, 27/889). According to number of the eyes, proportion of cataract in cases with moderate to severe visual impairment and blindness had positive relation with age, the proportion of ametropia and fundus disease had negative relation with age. The etiology constituent ratio had no difference between male and female. The proportion of cataract in cases with moderate to severe visual impairment and blindness in Huaiyin District of Jinan was slightly lower than those in other areas, however, the proportion ratio of ametropia and fundus disease was slightly higher than those in other areas.\n\n\nCONCLUSION\nCataract, uncorrected refractive error, and fundus diseases are ranked in the top three causes of moderate to severe visual impairment and blindness in adults aged 50 years or more in rural Shandong Province."
},
{
"pmid": "24339694",
"title": "Visual hallucinations (Charles Bonnet syndrome) associated with neurosarcoidosis.",
"abstract": "The Charles Bonnet syndrome (CBS) refers to lucid and complex visual hallucinations in cognitively normal patients with acquired vision loss. It can be associated with any type of vision loss including that related to macular degeneration, corneal disease, diabetic retinopathy, and occipital infarct. Neurosarcoidosis, a multi-systemic inflammatory granulomatous disease affecting both the central and peripheral nervous systems, is rarely associated with CBS. We report a patient with biopsy-confirmed neurosarcoidosis who experienced visual hallucinations following the development of a right seventh-nerve palsy, right facial paresthesia, and bilateral progressive visual loss. This case highlights the importance of recognizing that the CBS can occur in visual loss of any etiology."
}
] |
JMIR Medical Informatics | 31558433 | PMC6913743 | 10.2196/14993 | Exploiting Machine Learning Algorithms and Methods for the Prediction of Agitated Delirium After Cardiac Surgery: Models Development and Validation Study | Background: Delirium is a temporary mental disorder that occasionally affects patients undergoing surgery, especially cardiac surgery. It is strongly associated with major adverse events, which in turn lead to increased cost and poor outcomes (eg, need for nursing home care due to cognitive impairment, stroke, and death). The ability to foresee patients at risk of delirium will guide the timely initiation of multimodal preventive interventions, which will aid in reducing the burden and negative consequences associated with delirium. Several studies have focused on the prediction of delirium. However, the number of studies in cardiac surgical patients that have used machine learning methods is very limited. Objective: This study aimed to explore the application of several machine learning predictive models that can pre-emptively predict delirium in patients undergoing cardiac surgery and to compare their performance. Methods: We investigated a number of machine learning methods to develop models that can predict delirium after cardiac surgery. A clinical dataset comprising over 5000 actual patients who underwent cardiac surgery in a single center was used to develop the models using logistic regression, artificial neural networks (ANN), support vector machines (SVM), Bayesian belief networks (BBN), naïve Bayesian, random forest, and decision trees. Results: Only 507 out of 5584 patients (11.4%) developed delirium. We addressed the underlying class imbalance in the training dataset using random undersampling. The final prediction performance was validated on a separate test dataset. Owing to the target class imbalance, several measures were used to evaluate each algorithm's performance for the delirium class on the test dataset. Out of the selected algorithms, the SVM algorithm had the best F1 score for positive cases, kappa, and positive predictive value (40.2%, 29.3%, and 29.7%, respectively), with P=.01, .03, and .02, respectively. The ANN had the best receiver operating characteristic area under the curve (78.2%; P=.03). The BBN had the best precision-recall area under the curve for detecting positive cases (30.4%; P=.03). Conclusions: Although delirium is inherently complex, preventive measures to mitigate its negative effect can be applied proactively if patients at risk are prospectively identified. Our results highlight 2 important points: (1) addressing class imbalance in the training dataset will augment a machine learning model's performance in identifying patients likely to develop postoperative delirium, and (2) as the prediction of postoperative delirium is difficult because it is multifactorial and has a complex pathophysiology, applying machine learning methods (complex or simple) may improve the prediction by revealing hidden patterns, which will lead to cost reduction through the prevention of complications and will optimize patients' outcomes. | Related Work: Although the prevalence of postoperative delirium is low (10%-25%), it is associated with cognitive deterioration and a range of complications in surgical patients. The complexity of delirium stems from its relation to multiple risk factors and the uncertainty surrounding its pathophysiology [10,11,36]; this leads to challenges in pre-emptively identifying patients who are likely to develop postoperative delirium.
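The abstract above outlines the modeling pipeline evaluated in this study: random undersampling of the majority class in the training set, training of several classifiers, and evaluation on an untouched, still-imbalanced test set with imbalance-aware metrics (F1 for the positive class, Cohen's kappa, positive predictive value, ROC-AUC, and PR-AUC). The following minimal sketch illustrates that pipeline for a single SVM; it is an assumption-laden illustration (scikit-learn, a synthetic cohort mimicking the ~11.4% delirium prevalence), not the authors' code.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import (average_precision_score, cohen_kappa_score,
                             f1_score, precision_score, roc_auc_score)
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the cohort: ~5584 patients, ~11.4% positive (delirium) labels.
X, y = make_classification(n_samples=5584, n_features=30, weights=[0.886, 0.114],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

# Random undersampling of the majority (no-delirium) class in the TRAINING set only.
rng = np.random.default_rng(0)
pos = np.flatnonzero(y_tr == 1)
neg = rng.choice(np.flatnonzero(y_tr == 0), size=len(pos), replace=False)
keep = np.concatenate([pos, neg])
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr[keep], y_tr[keep])

# Evaluate on the untouched, still-imbalanced test set with imbalance-aware metrics.
pred = clf.predict(X_te)
proba = clf.predict_proba(X_te)[:, 1]
print("F1 (positive class):", f1_score(y_te, pred))
print("Cohen's kappa:", cohen_kappa_score(y_te, pred))
print("PPV (precision):", precision_score(y_te, pred))
print("ROC-AUC:", roc_auc_score(y_te, proba))
print("PR-AUC:", average_precision_score(y_te, proba))

In the study itself, the same comparison was repeated for the other classifiers (logistic regression, ANN, BBN, naïve Bayesian, random forest, and decision trees).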
Several authors have indicated that delirium is associated with adverse outcomes and advocate early recognition so that preventive measures can be applied in a timely and effective manner [3,7,9,10,13,14,37]. Proposed preventive interventions that have been shown to reduce the incidence of delirium in high-risk patients include early mobilization and use of patients' personal aids (reading glasses, hearing aids, etc) [38]. However, the pre-emptive identification of postoperative delirium is clinically challenging. A structured PubMed search using the PubMed Advanced Search Builder with the query ("delirium") AND "predictive model" returns only 38 items. Broadening the search to all published research on delirium and cardiac surgery, ("delirium") AND "cardiac surgery", returns 485 items. Combining all 3 terms, (("delirium") AND "cardiac surgery") AND "predictive model", narrows the results to 4 items. In recognition of the importance of delirium within the cardiac surgical population, some investigators have attempted to develop a predictive model. In this work, we focused on articles published in English that developed a predictive model for delirium after cardiac surgery in adult patients. The initial search resulted in 38 articles. After reviewing the articles' abstracts, we excluded articles that were not written in English, were not about cardiac surgery patients, or did not develop a statistical model. We ended up with 16 articles available for review. Multimedia Appendix 1 presents a summary of the most relevant studies that attempted to develop a model for the prediction of delirium after cardiac surgery in adult patients. For patients who underwent cardiac surgery, Afonso et al [12] conducted a prospective observational study of 112 consecutive adult cardiac surgical patients. Patients were evaluated twice daily for delirium using the Richmond Agitation-Sedation Scale (RASS) and the Confusion Assessment Method for the Intensive Care Unit (CAM-ICU), and the overall incidence of delirium was 34%. Increased age and longer surgical procedure duration were found to be independently associated with postoperative delirium. Similarly, Bakker et al [13] prospectively enrolled 201 cardiac surgery patients aged 70 years and above and found that a low Mini-Mental State Examination score and a higher preoperative creatinine were independent predictors of postoperative delirium [13]. Unfortunately, both of these models were based on a small sample size (<250 patients) and did not have a validation cohort. Research on the use of machine learning-based prediction models to detect delirium is rather limited, especially for cardiac surgery. Kramer et al [39] developed predictive models using a large dataset comprising medical and geriatric patients who had a diagnosis of delirium in their discharge codes and a control group of randomly selected patients from the same period who did not develop delirium. The prediction models performed well, with the highest performance achieved by the random forest (RF) model (receiver operating characteristic area under the curve [ROC-AUC] ≈91%). Although they acknowledge that their data were imbalanced, they used the ROC-AUC as their evaluation metric, which does not account for class imbalance.
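To make the caveat about ROC-AUC under class imbalance concrete, the short hedged example below (synthetic data and scikit-learn; it is not drawn from Kramer et al [39] or any other cited study) shows how a classifier can post a high ROC-AUC while its precision-recall AUC for a rare positive class remains modest.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic cohort with a rare (~3%) positive class.
X, y = make_classification(n_samples=20000, n_features=20, weights=[0.97, 0.03],
                           random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, stratify=y, random_state=1)
proba = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
print("Prevalence:", y_te.mean())                           # ~0.03
print("ROC-AUC:", roc_auc_score(y_te, proba))                # typically looks strong
print("PR-AUC:", average_precision_score(y_te, proba))       # usually much lower for rare positives

Reporting the precision-recall AUC alongside the ROC-AUC, as done in the Results above, avoids overstating performance on the minority (delirium) class.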
Davoudi et al [40] applied 7 different machine learning methods to data extracted from the electronic health (eHealth) records of patients undergoing major surgery in a large tertiary medical center to predict delirium; they found an incidence of 3.1%. They achieved ROC-AUC values ranging from 71% to 86%. To address the class imbalance caused by the low incidence of delirium and to improve model performance, they applied data-level manipulation using over- and undersampling, which did not result in a significant improvement (ROC-AUC ranging from 79% to 86%). Lee et al [41] published a systematic review and identified 3 high-quality ICU delirium risk prediction models: the Katznelson model, the original PRE-DELIRIC (PREdiction of DELIRium in ICu patients) model, and the internationally recalibrated PRE-DELIRIC model; all 3 used LR as the primary modeling technique. In the same paper, Lee et al [41] externally validated these models on a prospective cohort of 600 adult patients who underwent cardiac surgery at a single institution. After updating and recalibrating the models and applying decision curve analysis (DCA), they concluded that the recalibrated PRE-DELIRIC risk model is slightly more helpful. They argue that the available models for predicting delirium after cardiac surgery have only modest accuracy and are suboptimal for routine clinical use. Corradi et al [42] developed a predictive model using a large dataset (~78,000 patients) collected over 3 years in a single center and a large feature set (~128 variables). Their model achieved very good accuracy, with a ROC-AUC of ~90% on their test dataset; delirium was detected using the CAM in the intensive care unit (CAM-ICU) and on regular patient wards. In their validation analysis of the 600-patient cardiac surgery cohort, Lee et al [41] evaluated the recalibrated models using several metrics (ROC-AUC, Hosmer–Lemeshow test, Nagelkerke's R2, Brier score, and DCA). The recalibrated PRE-DELIRIC prediction model performed better than the Katznelson model; however, based on the DCA and the expected net benefit of both models, there appears to be limited clinical utility for either.
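For context on the model-updating checks mentioned above, the sketch below computes two of them, the Brier score and a decision-curve net benefit at a single risk threshold, on toy data. The net_benefit helper and the toy labels and risks are assumptions for illustration only; they do not reproduce the calculations of Lee et al [41].

import numpy as np
from sklearn.metrics import brier_score_loss

def net_benefit(y_true, p_hat, threshold):
    # Decision-curve analysis: net benefit = TP/n - (FP/n) * t / (1 - t) at threshold t.
    y_true = np.asarray(y_true)
    treat = np.asarray(p_hat) >= threshold
    n = len(y_true)
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - (fp / n) * threshold / (1.0 - threshold)

# Toy labels and predicted delirium risks (hypothetical values, for illustration only).
y_true = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 0])
p_hat = np.array([0.05, 0.10, 0.20, 0.08, 0.30, 0.15, 0.40, 0.70, 0.55, 0.25])
print("Brier score:", brier_score_loss(y_true, p_hat))       # mean squared error of predicted risks
print("Net benefit of model at t=0.2:", net_benefit(y_true, p_hat, 0.2))
print("Net benefit of treat-all at t=0.2:", net_benefit(y_true, np.ones_like(p_hat), 0.2))

Comparing a model's net benefit against the treat-all and treat-none strategies across a range of thresholds is the core of the DCA that led Lee et al [41] to conclude that the clinical utility of the existing models is limited.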
"19022003",
"14630448",
"20875113",
"22200370",
"20373345",
"23887126",
"23270646",
"22170877",
"20647262",
"22345177",
"21741272",
"21996075",
"15869215",
"11958484",
"20231191",
"28246056",
"20732482",
"1952470",
"15713231",
"19901087",
"22255820",
"16798296",
"29241659",
"29208328",
"23182527",
"24476433",
"23796313",
"28508776",
"28186224",
"30430256",
"29035732",
"18999030",
"28662816",
"29241658",
"23414459",
"25738806",
"8357112",
"8831879",
"22323509",
"19104172",
"23314969",
"24064236",
"30894129",
"24373760",
"29054250",
"25953014",
"21242556",
"24364769",
"23355807",
"21994844",
"25888230"
] | [
{
"pmid": "19022003",
"title": "Delirium after cardiac surgery and predictive validity of a risk checklist.",
"abstract": "BACKGROUND\nDelirium or acute confusion is a temporary mental disorder that occurs frequently among hospitalized elderly patients. Patients who undergo cardiac surgery have an increased risk of delirium, which is associated with many negative consequences. Therefore, prevention or early recognition of delirium is essential.\n\n\nMETHODS\nIn this observational study, a risk checklist for delirium was used during the preoperative outpatient screening in 112 patients who underwent elective cardiac surgery. The Delirium Observation Screening (DOS) scale was used before and after surgery to assess whether delirium had developed in patients. The psychiatrist was consulted to confirm or refute the diagnosis delirium.\n\n\nRESULTS\nThe incidence of delirium after cardiac surgery was 21%, and the mean duration of delirium was 2.5 days. The time to discharge was 11 days longer for patients with delirium. The delirium risk checklist could accurately predict postoperative delirium in patients who underwent elective cardiac surgery based on a disturbance in the electrolytes sodium and potassium and on EuroSCORE (European System for Cardiac Operative Risk Evaluation). When using a probability of delirium of 50%, the sensitivity of the risk checklist was 25.0% and specificity was 95.5%. The predictive value of a positive test was 60.0%, and the predictive value of a negative test was 82.4%. The area under the receiver-operating characteristic curve was 0.75.\n\n\nCONCLUSIONS\nWith the risk checklist for delirium, patients at an increased risk of delirium after elective cardiac surgery can be identified."
},
{
"pmid": "14630448",
"title": "Anaesthesia: the patient's point of view.",
"abstract": "Patients scheduled for surgical procedures continue to express concerns about their safety, outcome, and comfort. All medical interventions carry risks, but the patient often considers anaesthesia as the intervention with the greatest risk. Many still worry that they will not wake up after their surgery, or that they will be awake during the operation. Such events have received attention from the media, but are very rare. Challenges to improve the comfort of patients continue, especially with regard to the almost universal problems of nausea, vomiting, and pain after surgery. A newer concern is that patients will develop some degree of mental impairment that may delay return to a full work and social lifestyle for days and weeks. Developments in technology, education, and training have had a major effect on anaesthetic practice, so that anaesthesia is increasingly regarded as safe for the patient. This article explores patients' concerns, and considers whether science and technology help to provide solutions to these complex difficulties."
},
{
"pmid": "20875113",
"title": "Delirium as a predictor of sepsis in post-coronary artery bypass grafting patients: a retrospective cohort study.",
"abstract": "INTRODUCTION\nDelirium is the most common neurological complication following cardiac surgery. Much research has focused on potential causes of delirium; however, the sequelae of delirium have not been well investigated. The objective of this study was to investigate the relationship between delirium and sepsis post coronary artery bypass grafting (CABG) and to determine if delirium is a predictor of sepsis.\n\n\nMETHODS\nPeri-operative data were collected prospectively on all patients. Subjects were identified as having agitated delirium if they experienced a short-term mental disturbance marked by confusion, illusions and cerebral excitement. Patient characteristics were compared between those who became delirious and those who did not. The primary outcome of interest was post-operative sepsis. The association of delirium with sepsis was assessed by logistic regression, adjusting for differences in age, acuity, and co-morbidities.\n\n\nRESULTS\nAmong 14,301 patients, 981 became delirious and 227 developed sepsis post-operatively. Rates of delirium increased over the years of the study from 4.8 to 8.0% (P = 0.0003). A total of 70 patients of the 227 with sepsis, were delirious. In 30.8% of patients delirium preceded the development of overt sepsis by at least 48 hours. Multivariate analysis identified several factors associated with sepsis, (receiver operating characteristic (ROC) 79.3%): delirium (odds ratio (OR) 2.3, 95% confidence interval (CI) 1.6 to 3.4), emergent surgery (OR 3.3, CI 2.2 to 5.1), age (OR 1.2, CI 1.0 to 1.3), pre-operative length of stay (LOS) more than seven days (OR 1.6, CI 1.1 to 2.3), pre-operative renal insufficiency (OR 1.9, CI 1.2 to 2.9) and complex coronary disease (OR 3.1, CI 1.8 to 5.3).\n\n\nCONCLUSIONS\nThese data demonstrate an association between delirium and post-operative sepsis in the CABG population. Delirium emerged as an independent predictor of sepsis, along with traditional risk factors including age, pre-operative renal failure and peripheral vascular disease. Given the advancing age and increasing rates of delirium in the CABG population, the prevention and management of delirium need to be addressed."
},
{
"pmid": "22200370",
"title": "Delirium: a cause for concern beyond the immediate postoperative period.",
"abstract": "BACKGROUND\nDelirium is a common neurologic complication after cardiac surgery, and may be associated with increased morbidity and mortality. Research has focused on potential causes of delirium, with little attention to its sequelae.\n\n\nMETHODS\nPerioperative data were collected prospectively on all isolated cases of coronary artery bypass grafting (CABG) performed from 1995 to 2006 at a single center. The definition of delirium used in the study was that of the Society of Thoracic Surgeons. Characteristics of patients who became delirious postoperatively were compared with those of patients who did not. The outcomes of interest were long-term all-cause mortality, hospital admission for stroke, and in-hospital mortality, examined in all three cases through multivariate analysis.\n\n\nRESULTS\nOf 8,474 patients who underwent CABG within the defined period, 496 (5.8%) developed postoperative delirium and 229 (2.7%) died while in the hospital. At baseline, patients who developed delirium were more likely to be older and to have a greater burden of comorbid illness. Delirium was an independent predictor of perioperative stroke (odds ratio [OR]; 1.96; 95% confidence interval [CI], 1.22 to 3.16), but was not associated with in-hospital mortality (OR, 0.81; 95%CI, 0.49 to 1.34). Delirious patients had a median postoperative hospital stay of 12 days (interquartile range [IQR], 8 to 21 days) versus 6 days (IQR, 5 to 8 days) for those who were nondelirious. Delirium was identified as an independent predictor of all-cause mortality (hazard ratio [HR], 1.52; 95%CI, 1.29 to 1.78) and hospitalization for stroke (HR, 1.54; 95%CI, 1.10 to 2.17).\n\n\nCONCLUSIONS\nThere was an association between delirium and adverse outcomes after CABG that persisted beyond the immediate perioperative period. Patients with delirium after CABG appear to have an increased long-term risk of death and stroke. The advancing age and rising rates of delirium in the CABG population make it necessary to address the prevention and management of delirium in this population."
},
{
"pmid": "20373345",
"title": "Delirium after coronary artery bypass graft surgery and late mortality.",
"abstract": "OBJECTIVE\nDelirium is common after cardiac surgery, although under-recognized, and its long-term consequences are likely underestimated. The primary goal of this study was to determine whether patients with delirium after coronary artery bypass graft (CABG) surgery have higher long-term out-of-hospital mortality when compared with CABG patients without delirium.\n\n\nMETHODS\nWe studied 5,034 consecutive patients undergoing CABG surgery at a single institution from 1997 to 2007. Presence or absence of neurologic complications, including delirium, was assessed prospectively. Survival analysis was performed to determine the role of delirium in the hazard of death, including a propensity score to adjust for potential confounders. These analyses were repeated to determine the association between postoperative stroke and long-term mortality.\n\n\nRESULTS\nIndividuals with delirium had an increased hazard of death (adjusted hazard ratio [HR], 1.65; 95% confidence interval [CI], 1.38-1.97) up to 10 years postoperatively, after adjustment for perioperative and vascular risk factors. Patients with postoperative stroke had a HR of 2.34 (95% CI, 1.87-2.92). The effect of delirium on subsequent mortality was the strongest among those without a prior stroke (HR 1.83 vs HR 1.11 [with a prior stroke] [p-interaction = 0.02]) or who were younger (HR 2.42 [<65 years old] vs HR 1.49 [>/=65 years old] [p-interaction = 0.04]).\n\n\nINTERPRETATION\nDelirium after cardiac surgery is a strong independent predictor of mortality up to 10 years postoperatively, especially in younger individuals and in those without prior stroke. Future studies are needed to determine the impact of delirium prevention and/or treatment in long-term patient mortality."
},
{
"pmid": "23887126",
"title": "Delirium after cardiac surgery: incidence and risk factors.",
"abstract": "OBJECTIVES\nDelirium after cardiac surgery is a problem with consequences for patients and healthcare. Preventive strategies from known risk factors may reduce the incidence and severity of delirium. The present aim was to explore risk factors behind delirium in older patients undergoing cardiac surgery with cardiopulmonary bypass.\n\n\nMETHODS\nPatients (≥70 years) scheduled for routine cardiac surgery were included (n = 142). The patients were assessed and monitored pre-/postoperatively, and delirium was diagnosed from repeated assessments with the Mini-Mental State Examination and the Organic Brain Syndrome Scale, using the DSM-IV-TR criteria. Variables were analysed by uni-/multivariable logistic regression, including both preoperative variables (predisposing) and those extracted during surgery and in the early postoperative period (precipitating).\n\n\nRESULTS\nDelirium was diagnosed in 78 patients (54.9%). Delirium was independently associated with both predisposing and precipitating factors (P-value, odds ratio, upper/lower confidence interval): age (0.036, 1.1, 1.0/1.2), diabetes (0.032, 3.5, 1.1/11.0), gastritis/ulcer problems (0.050, 4.0, 1.0/16.1), volume load during operation (0.001, 2.8, 1.5/5.1), ventilator time in ICU (0.042, 1.2, 1.0/1.4), highest temperature recorded in ICU (0.044, 2.2, 1.0/4.8) and sodium concentration in ICU (0.038, 1.2, 1.0/1.4).\n\n\nCONCLUSIONS\nDelirium was common among older patients undergoing cardiac surgery. Both predisposing and precipitating factors contributed to delirium. When combined, the predictive strength of the model improved. Preventive strategies may be considered, in particular among the precipitating factors. Of interest, delirium was strongly associated with an increased volume load during surgery."
},
{
"pmid": "23270646",
"title": "Delirium in the ICU: an overview.",
"abstract": "Delirium is characterized by a disturbance of consciousness with accompanying change in cognition. Delirium typically manifests as a constellation of symptoms with an acute onset and a fluctuating course. Delirium is extremely common in the intensive care unit (ICU) especially amongst mechanically ventilated patients. Three subtypes have been recognized: hyperactive, hypoactive, and mixed. Delirium is frequently undiagnosed unless specific diagnostic instruments are used. The CAM-ICU is the most widely studied and validated diagnostic instrument. However, the accuracy of this tool may be less than ideal without adequate training of the providers applying it. The presence of delirium has important prognostic implications; in mechanically ventilated patients it is associated with a 2.5-fold increase in short-term mortality and a 3.2-fold increase in 6-month mortality. Nonpharmacological approaches, such as physical and occupational therapy, decrease the duration of delirium and should be encouraged. Pharmacological treatment for delirium traditionally includes haloperidol; however, more data for haloperidol are needed given the paucity of placebo-controlled trials testing its efficacy to treat delirium in the ICU. Second-generation antipsychotics have emerged as an alternative for the treatment of delirium, and they may have a better safety profile. Dexmedetomidine may prove to be a valuable adjunctive agent for patients with delirium in the ICU."
},
{
"pmid": "22170877",
"title": "Early post-cardiac surgery delirium risk factors.",
"abstract": "The purpose of this study was to identify the post-cardiac surgery delirium risk factors and to evaluate clinical outcomes. Data on 90 patients with postoperative delirium after cardiac surgery on cardiopulmonary bypass (CPB) were analyzed retrospectively. The patients were divided into two groups by evaluating the severity of the delirium: light and moderate delirium group (n=74) and severe delirium group (n=16). We found that the rate of early post-cardiac surgery delirium was low (4.17%). We have determined that post-cardiac surgery delirium prolonged the length of stay in the Intensive Care Unit (ICU) by (8.4 (8.6)) and the hospital stay by (23.6 (13.0)) days. The patients had higher preoperative risk scores, their age was 71.5 (8.9) years, the body mass index was 28.8 (4.4) kg/m(2), the majority were male (72.2%), and the left ventricular ejection fraction was 46.1(11.9) %. Statistical analysis by multivariable logistic regression has indicated that increasing the dose of fentanyl administered during surgery over 1.4 mg also increased the possibility of developing a severe delirium (OR=29.4, CI 4.1-210.3) and longer aortic clamping time could be independently associated with severe postoperative delirium (OR=8.0, CI 1.7-37.2). After surgery, new atrial fibrillation (AF) episodes amounted to 53.3% and, after distinguishing the delirium severity groups, AF developed in the patients belonging to the severe delirium groups statistically significantly more frequently, 81.8 vs 47.3, where p=0.01. Our data suggest that early post-cardiac surgery delirium is not a common complication, but it prolonged the length of stay at the ICU and in the hospital. The delirium risk factors, such as longer aortic clamping time, the dose of fentanyl and new atrial fibrillation episodes occurring after cardiac surgery, are associated statistically significantly with the development of severe post-cardiac surgery delirium."
},
{
"pmid": "20647262",
"title": "Predictive model for postoperative delirium in cardiac surgical patients.",
"abstract": "Delirium is a common complication following cardiac surgery, and the predictors of delirium remain unclear. The authors performed a prospective observational analysis to develop a predictive model for postoperative delirium using demographic and procedural parameters. A total of 112 adult postoperative cardiac surgical patients were evaluated twice daily for delirium using the Richmond Agitation-Sedation Scale (RASS) and Confusion Assessment Model for the ICU (CAM-ICU). The incidence of delirium was 34% (n = 38). Increased age (odds ratio [OR] = 2.5; 95% confidence interval [CI] = 1.6-3.9; P < .0001, per 10 years) and increased duration of surgery (OR = 1.3; 95% CI = 1.1-1.5; P = .0002, per 30 minutes) were independently associated with postoperative delirium. Gender, BMI, diabetes mellitus, preoperative ejection fraction, surgery type, length of cardiopulmonary bypass, intraoperative blood component administration, Acute Physiology and Chronic Health Evaluation II score, Sequential Organ Failure Assessment score, and Charlson Comorbidity Index, were not independently associated with postoperative delirium."
},
{
"pmid": "22345177",
"title": "Preoperative and operative predictors of delirium after cardiac surgery in elderly patients.",
"abstract": "OBJECTIVES\nDelirium is a common complication in elderly patients after cardiac surgery and is associated with adverse outcomes including prolonged hospital stay and increased mortality. Therefore, prevention or early detection of delirium is indicated. Our objective was to identify preoperative and operative characteristics that could predict delirium after cardiac surgery in elderly patients.\n\n\nMETHODS\nWe conducted a prospective cohort study in which we analysed 201 patients of 70 years and older who underwent cardiac surgery, for developing a delirium. Patients were assessed daily using the Confusion Assessment Method-Intensive Care Unit.\n\n\nRESULTS\nSixty-three patients (31%) developed a delirium after cardiac surgery. The Mini-Mental State Examination (MMSE) score prior to surgery was lower in the delirious patients when compared with the non-delirious patients (27 vs. 28, P = 0.026), creatinine level was higher (98 vs. 88 μmol/l, P = 0.003) and extracorporeal circulation (ECC) time was longer (145 vs. 113 min, P < 0.001). Mortality during the first 30 days after surgery in patients with delirium was significantly higher than that in the non-delirious patients (14 vs. 0%, P < 0.001).\n\n\nCONCLUSIONS\nLow MMSE score and high creatinine level prior to surgery as well as increased ECC time are important independent predictors of delirium. In addition, delirium is an important predictor of 30-day mortality. Patients with a substantial risk for delirium should be candidates for interventions to reduce postoperative delirium and to potentially improve overall surgical outcomes."
},
{
"pmid": "21741272",
"title": "Hypoactive delirium after cardiac surgery as an independent risk factor for prolonged mechanical ventilation.",
"abstract": "OBJECTIVE\nThe authors' intention was to evaluate the incidence of the three subtypes of delirium, the risk factors of the subtypes in cardiac surgery, and the impact of the subtypes on clinical outcomes.\n\n\nDESIGN\nA prospective study.\n\n\nSETTING\nA university hospital.\n\n\nPARTICIPANTS\nA total population of 506 patients undergoing cardiac surgery was screened for delirium.\n\n\nINTERVENTIONS\nNone.\n\n\nMEASUREMENT AND MAIN RESULTS\nPatients undergoing cardiac surgery were screened by using the Intensive Care Delirium Screening Checklist (ICDSC) and the Richmond Agitation and Sedation Scale (RASS). Patients with hypoactive delirium were compared with nondelirious patients. Outcomes measured were the duration of mechanical ventilation and the length of stay in the intensive care unit. The overall delirium incidence was 11.6%, whereas the incidence of the hypoactive subtype was 9%. Age (odds ratio [OR] 1.04; 95% confidence interval [CI], 1.01-1.09, p = 0.02), a history of depression (OR = 3.57; 95% CI, 1.04-10.74; p = 0.03), preoperative therapy with diuretics (OR = 2.85; 95% CI, 1.36-6.35; p < 0.01), aortic clamping times (OR = 1.01; 95% CI, 1.00-1.02; p < 0.01) and blood transfusions (OR = 1.18; 95% CI, 1.05-1.34; p < 0.01) were predictors for the development of hypoactive delirium. Preoperative therapy with β-blockers (OR = 0.32; 95% CI, 0.16-0.65; p < 0.01) and higher hemoglobin before surgery (OR = 0.73; 95% CI, 0.60-0.91; p < 0.01) were associated with a lower prevalence of hypoactive delirium. Hypoactive delirium is an independent predictor for prolonged mechanical ventilation time (OR = 1.56; 95% CI, 1.25-1.92; p < 0.01) and the length of stay in the ICU (OR = 1.42; 95% CI, 1.22-1.65, p < 0.01).\n\n\nCONCLUSION\nHypoactive delirium itself is a strong predictor for a longer ICU stay and a prolonged period of mechanical ventilation. Some of the risk factors related to the intraoperative and postoperative setting are suitable for preventive action."
},
{
"pmid": "21996075",
"title": "Logistic regression: a brief primer.",
"abstract": "Regression techniques are versatile in their application to medical research because they can measure associations, predict outcomes, and control for confounding variable effects. As one such technique, logistic regression is an efficient and powerful way to analyze the effect of a group of independent variables on a binary outcome by quantifying each independent variable's unique contribution. Using components of linear regression reflected in the logit scale, logistic regression iteratively identifies the strongest linear combination of variables with the greatest probability of detecting the observed outcome. Important considerations when conducting logistic regression include selecting independent variables, ensuring that relevant assumptions are met, and choosing an appropriate model building strategy. For independent variable selection, one should be guided by such factors as accepted theory, previous empirical investigations, clinical considerations, and univariate statistical analyses, with acknowledgement of potential confounding variables that should be accounted for. Basic assumptions that must be met for logistic regression include independence of errors, linearity in the logit for continuous variables, absence of multicollinearity, and lack of strongly influential outliers. Additionally, there should be an adequate number of events per independent variable to avoid an overfit model, with commonly recommended minimum \"rules of thumb\" ranging from 10 to 20 events per covariate. Regarding model building strategies, the three general types are direct/standard, sequential/hierarchical, and stepwise/statistical, with each having a different emphasis and purpose. Before reaching definitive conclusions from the results of any of these methods, one should formally quantify the model's internal validity (i.e., replicability within the same data set) and external validity (i.e., generalizability beyond the current sample). The resulting logistic regression model's overall fit to the sample data is assessed using various goodness-of-fit measures, with better fit characterized by a smaller difference between observed and model-predicted values. Use of diagnostic statistics is also recommended to further assess the adequacy of the model. Finally, results for independent variables are typically reported as odds ratios (ORs) with 95% confidence intervals (CIs)."
},
{
"pmid": "15869215",
"title": "Data mining applications in healthcare.",
"abstract": "Data mining has been used intensively and extensively by many organizations. In healthcare, data mining is becoming increasingly popular, if not increasingly essential. Data mining applications can greatly benefit all parties involved in the healthcare industry. For example, data mining can help healthcare insurers detect fraud and abuse, healthcare organizations make customer relationship management decisions, physicians identify effective treatments and best practices, and patients receive better and more affordable healthcare services. The huge amounts of data generated by healthcare transactions are too complex and voluminous to be processed and analyzed by traditional methods. Data mining provides the methodology and technology to transform these mounds of data into useful information for decision making. This article explores data mining applications in healthcare. In particular, it discusses data mining and its applications within healthcare in major areas such as the evaluation of treatment effectiveness, management of healthcare, customer relationship management, and the detection of fraud and abuse. It also gives an illustrative example of a healthcare data mining application involving the identification of risk factors associated with the onset of diabetes. Finally, the article highlights the limitations of data mining and discusses some future directions."
},
{
"pmid": "11958484",
"title": "A review of evidence of health benefit from artificial neural networks in medical intervention.",
"abstract": "The purpose of this review is to assess the evidence of healthcare benefits involving the application of artificial neural networks to the clinical functions of diagnosis, prognosis and survival analysis, in the medical domains of oncology, critical care and cardiovascular medicine. The primary source of publications is PUBMED listings under Randomised Controlled Trials and Clinical Trials. The rĵle of neural networks is introduced within the context of advances in medical decision support arising from parallel developments in statistics and artificial intelligence. This is followed by a survey of published Randomised Controlled Trials and Clinical Trials, leading to recommendations for good practice in the design and evaluation of neural networks for use in medical intervention."
},
{
"pmid": "20231191",
"title": "Electronic health record-based decision support to improve asthma care: a cluster-randomized trial.",
"abstract": "OBJECTIVE\nAsthma continues to be 1 of the most common chronic diseases of childhood and affects approximately 6 million US children. Although National Asthma Education Prevention Program guidelines exist and are widely accepted, previous studies have demonstrated poor clinician adherence across a variety of populations. We sought to determine if clinical decision support (CDS) embedded in an electronic health record (EHR) would improve clinician adherence to national asthma guidelines in the primary care setting.\n\n\nMETHODS\nWe conducted a prospective cluster-randomized trial in 12 primary care sites over a 1-year period. Practices were stratified for analysis according to whether the site was urban or suburban. Children aged 0 to 18 years with persistent asthma were identified by International Classification of Diseases, Ninth Revision codes for asthma. The 6 intervention-practice sites had CDS alerts imbedded in the EHR. Outcomes of interest were the proportion of children with at least 1 prescription for controller medication, an up-to-date asthma care plan, and the performance of office-based spirometry.\n\n\nRESULTS\nIncreases in the number of prescriptions for controller medications, over time, was 6% greater (P = .006) and 3% greater for spirometry (P = .04) in the intervention urban practices. Filing an up-to-date asthma care plan improved 14% (P = .03) and spirometry improved 6% (P = .003) in the suburban practices with the intervention.\n\n\nCONCLUSION\nIn our study, using a cluster-randomized trial design, CDS in the EHR, at the point of care, improved clinician compliance with National Asthma Education Prevention Program guidelines."
},
{
"pmid": "28246056",
"title": "Predicting risk for portal vein thrombosis in acute pancreatitis patients: A comparison of radical basis function artificial neural network and logistic regression models.",
"abstract": "OBJECTIVE\nTo construct a radical basis function (RBF) artificial neural networks (ANNs) model to predict the incidence of acute pancreatitis (AP)-induced portal vein thrombosis.\n\n\nMETHODS\nThe analysis included 353 patients with AP who had admitted between January 2011 and December 2015. RBF ANNs model and logistic regression model were constructed based on eleven factors relevant to AP respectively. Statistical indexes were used to evaluate the value of the prediction in two models.\n\n\nRESULTS\nThe predict sensitivity, specificity, positive predictive value, negative predictive value and accuracy by RBF ANNs model for PVT were 73.3%, 91.4%, 68.8%, 93.0% and 87.7%, respectively. There were significant differences between the RBF ANNs and logistic regression models in these parameters (P<0.05). In addition, a comparison of the area under receiver operating characteristic curves of the two models showed a statistically significant difference (P<0.05).\n\n\nCONCLUSION\nThe RBF ANNs model is more likely to predict the occurrence of PVT induced by AP than logistic regression model. D-dimer, AMY, Hct and PT were important prediction factors of approval for AP-induced PVT."
},
{
"pmid": "20732482",
"title": "Development of a hybrid decision support model for optimal ventricular assist device weaning.",
"abstract": "BACKGROUND\nDespite the small but promising body of evidence for cardiac recovery in patients that have received ventricular assist device (VAD) support, the criteria for identifying and selecting candidates who might be weaned from a VAD have not been established.\n\n\nMETHODS\nA clinical decision support system was developed based on a Bayesian Belief Network that combined expert knowledge with multivariate statistical analysis. Expert knowledge was derived from interviews of 11 members of the Artificial Heart Program at the University of Pittsburgh Medical Center. This was supplemented by retrospective clinical data from the 19 VAD patients considered for weaning between 1996 and 2004. Artificial Neural Networks and Natural Language Processing were used to mine these data and extract sensitive variables.\n\n\nRESULTS\nThree decision support models were compared. The model exclusively based on expert-derived knowledge was the least accurate and most conservative. It underestimated the incidence of heart recovery, incorrectly identifying 4 of the successfully weaned patients as transplant candidates. The model derived exclusively from clinical data performed better but misidentified 2 patients: 1 weaned successfully, and 1 that needed a cardiac transplant ultimately. An expert-data hybrid model performed best, with 94.74% accuracy and 75.37% to 99.07% confidence interval, misidentifying only 1 patient weaned from support.\n\n\nCONCLUSIONS\nA clinical decision support system may facilitate and improve the identification of VAD patients who are candidates for cardiac recovery and may benefit from VAD removal. It could be potentially used to translate success of active centers to those less established and thereby expand use of VAD therapy."
},
{
"pmid": "1952470",
"title": "Use of an artificial neural network for the diagnosis of myocardial infarction.",
"abstract": "OBJECTIVE\nTo validate prospectively the use of an artificial neural network to identify myocardial infarction in patients presenting to an emergency department with anterior chest pain.\n\n\nDESIGN\nProspective, blinded testing.\n\n\nSETTING\nTertiary university teaching center.\n\n\nPATIENTS\nA total of 331 consecutive adult patients presenting with anterior chest pain.\n\n\nMEASUREMENTS\nDiagnostic sensitivity and specificity with regard to the diagnosis of acute myocardial infarction.\n\n\nMAIN RESULTS\nAn artificial neural network was trained on clinical pattern sets retrospectively derived from the cases of 351 patients hospitalized with a high likelihood of having myocardial infarction. It was prospectively tested on 331 consecutive patients presenting to an emergency department with anterior chest pain. The ability of the network to distinguish patients with from those without acute myocardial infarction was compared with that of physicians caring for the same patients. The physicians had a diagnostic sensitivity of 77.7% (95% CI, 77.0% to 82.9%) and a diagnostic specificity of 84.7% (CI, 84.0% to 86.4%). The artificial neural network had a sensitivity of 97.2% (CI, 97.2% to 97.5%; P = 0.033) and a specificity of 96.2% (CI, 96.2% to 96.4%; P less than 0.001).\n\n\nCONCLUSION\nAn artificial neural network trained to identify myocardial infarction in adult patients presenting to an emergency department may be a valuable aid to the clinical diagnosis of myocardial infarction; however, this possibility must be confirmed through prospective testing on a larger patient sample."
},
{
"pmid": "15713231",
"title": "Comparison of artificial neural network and logistic regression models for prediction of mortality in head trauma based on initial clinical data.",
"abstract": "BACKGROUND\nIn recent years, outcome prediction models using artificial neural network and multivariable logistic regression analysis have been developed in many areas of health care research. Both these methods have advantages and disadvantages. In this study we have compared the performance of artificial neural network and multivariable logistic regression models, in prediction of outcomes in head trauma and studied the reproducibility of the findings.\n\n\nMETHODS\n1000 Logistic regression and ANN models based on initial clinical data related to the GCS, tracheal intubation status, age, systolic blood pressure, respiratory rate, pulse rate, injury severity score and the outcome of 1271 mainly head injured patients were compared in this study. For each of one thousand pairs of ANN and logistic models, the area under the receiver operating characteristic (ROC) curves, Hosmer-Lemeshow (HL) statistics and accuracy rate were calculated and compared using paired T-tests.\n\n\nRESULTS\nANN significantly outperformed logistic models in both fields of discrimination and calibration but under performed in accuracy. In 77.8% of cases the area under the ROC curves and in 56.4% of cases the HL statistics for the neural network model were superior to that for the logistic model. In 68% of cases the accuracy of the logistic model was superior to the neural network model.\n\n\nCONCLUSIONS\nANN significantly outperformed the logistic models in both fields of discrimination and calibration but lagged behind in accuracy. This study clearly showed that any single comparison between these two models might not reliably represent the true end results. External validation of the designed models, using larger databases with different rates of outcomes is necessary to get an accurate measure of performance outside the development population."
},
{
"pmid": "19901087",
"title": "Informatics in radiology: comparison of logistic regression and artificial neural network models in breast cancer risk estimation.",
"abstract": "Computer models in medical diagnosis are being developed to help physicians differentiate between healthy patients and patients with disease. These models can aid in successful decision making by allowing calculation of disease likelihood on the basis of known patient characteristics and clinical test results. Two of the most frequently used computer models in clinical risk estimation are logistic regression and an artificial neural network. A study was conducted to review and compare these two models, elucidate the advantages and disadvantages of each, and provide criteria for model selection. The two models were used for estimation of breast cancer risk on the basis of mammographic descriptors and demographic risk factors. Although they demonstrated similar performance, the two models have unique characteristics-strengths as well as limitations-that must be considered and may prove complementary in contributing to improved clinical decision making."
},
{
"pmid": "22255820",
"title": "Early detection and characterization of Alzheimer's disease in clinical scenarios using Bioprofile concepts and K-means.",
"abstract": "Alzheimer's Disease (AD) is the most common neurodegenerative disease in elderly people. There is a need for objective means to detect AD early to allow targeted interventions and to monitor response to treatment. To help clinicians in these tasks, we propose the creation of the Bioprofile of AD. A Bioprofile should reveal key patterns of a disease in the subject's biodata. We applied k-means clustering to data features taken from the ADNI database to divide the subjects into pathologic and non-pathologic groups in five clinical scenarios. The preliminary results confirm previous findings and show that there is an important AD pattern in the biodata of controls, AD, and Mild Cognitive Impairment (MCI) patients. Furthermore, the Bioprofile could help in the early detection of AD at the MCI stage since it divided the MCI subjects into groups with different rates of conversion to AD."
},
{
"pmid": "16798296",
"title": "Risk factor identification and mortality prediction in cardiac surgery using artificial neural networks.",
"abstract": "OBJECTIVE\nThe artificial neural network model is a nonlinear technology useful for complex pattern recognition problems. This study aimed to develop a method to select risk variables and predict mortality after cardiac surgery by using artificial neural networks.\n\n\nMETHODS\nProspectively collected data from 18,362 patients undergoing cardiac surgery at 128 European institutions in 1995 (the European System for Cardiac Operative Risk Evaluation database) were used. Models to predict the operative mortality were constructed using artificial neural networks. For calibration a sixfold cross-validation technique was used, and for testing a fourfold cross-testing was performed. Risk variables were ranked and minimized in number by calibrated artificial neural networks. Mortality prediction with 95% confidence limits for each patient was obtained by the bootstrap technique. The area under the receiver operating characteristics curve was used as a quantitative measure of the ability to distinguish between survivors and nonsurvivors. Subgroup analysis of surgical operation categories was performed. The results were compared with those from logistic European System for Cardiac Operative Risk Evaluation analysis.\n\n\nRESULTS\nThe operative mortality was 4.9%. Artificial neural networks selected 34 of the total 72 risk variables as relevant for mortality prediction. The receiver operating characteristics area for artificial neural networks (0.81) was larger than the logistic European System for Cardiac Operative Risk Evaluation model (0.79; P = .0001). For different surgical operation categories, there were no differences in the discriminatory power for the artificial neural networks (P = .15) but significant differences were found for the logistic European System for Cardiac Operative Risk Evaluation (P = .0072).\n\n\nCONCLUSIONS\nRisk factors in a ranked order contributing to the mortality prediction were identified. A minimal set of risk variables achieving a superior mortality prediction was defined. The artificial neural network model is applicable independent of the cardiac surgical procedure."
},
{
"pmid": "29241659",
"title": "A novel method for predicting kidney stone type using ensemble learning.",
"abstract": "The high morbidity rate associated with kidney stone disease, which is a silent killer, is one of the main concerns in healthcare systems all over the world. Advanced data mining techniques such as classification can help in the early prediction of this disease and reduce its incidence and associated costs. The objective of the present study is to derive a model for the early detection of the type of kidney stone and the most influential parameters with the aim of providing a decision-support system. Information was collected from 936 patients with nephrolithiasis at the kidney center of the Razi Hospital in Rasht from 2012 through 2016. The prepared dataset included 42 features. Data pre-processing was the first step toward extracting the relevant features. The collected data was analyzed with Weka software, and various data mining models were used to prepare a predictive model. Various data mining algorithms such as the Bayesian model, different types of Decision Trees, Artificial Neural Networks, and Rule-based classifiers were used in these models. We also proposed four models based on ensemble learning to improve the accuracy of each learning algorithm. In addition, a novel technique for combining individual classifiers in ensemble learning was proposed. In this technique, for each individual classifier, a weight is assigned based on our proposed genetic algorithm based method. The generated knowledge was evaluated using a 10-fold cross-validation technique based on standard measures. However, the assessment of each feature for building a predictive model was another significant challenge. The predictive strength of each feature for creating a reproducible outcome was also investigated. Regarding the applied models, parameters such as sex, acid uric condition, calcium level, hypertension, diabetes, nausea and vomiting, flank pain, and urinary tract infection (UTI) were the most vital parameters for predicting the chance of nephrolithiasis. The final ensemble-based model (with an accuracy of 97.1%) was a robust one and could be safely applied to future studies to predict the chances of developing nephrolithiasis. This model provides a novel way to study stone disease by deciphering the complex interaction among different biological variables, thus helping in an early identification and reduction in diagnosis time."
},
{
"pmid": "29208328",
"title": "Different approaches for identifying important concepts in probabilistic biomedical text summarization.",
"abstract": "Automatic text summarization tools help users in the biomedical domain to acquire their intended information from various textual resources more efficiently. Some of biomedical text summarization systems put the basis of their sentence selection approach on the frequency of concepts extracted from the input text. However, it seems that exploring other measures rather than the raw frequency for identifying valuable contents within an input document, or considering correlations existing between concepts, may be more useful for this type of summarization. In this paper, we describe a Bayesian summarization method for biomedical text documents. The Bayesian summarizer initially maps the input text to the Unified Medical Language System (UMLS) concepts; then it selects the important ones to be used as classification features. We introduce six different feature selection approaches to identify the most important concepts of the text and select the most informative contents according to the distribution of these concepts. We show that with the use of an appropriate feature selection approach, the Bayesian summarizer can improve the performance of biomedical summarization. Using the Recall-Oriented Understudy for Gisting Evaluation (ROUGE) toolkit, we perform extensive evaluations on a corpus of scientific papers in the biomedical domain. The results show that when the Bayesian summarizer utilizes the feature selection methods that do not use the raw frequency, it can outperform the biomedical summarizers that rely on the frequency of concepts, domain-independent and baseline methods."
},
{
"pmid": "23182527",
"title": "Preventing delirium in the intensive care unit.",
"abstract": "Delirium in the intensive care unit (ICU) is exceedingly common, and risk factors for delirium among the critically ill are nearly ubiquitous. Addressing modifiable risk factors including sedation management, deliriogenic medications, immobility, and sleep disruption can help to prevent and reduce the duration of this deadly syndrome. The ABCDE approach to critical care is a bundled approach that clinicians can implement for many patients treated in their ICUs to prevent the adverse outcomes associated with delirium and critical illness."
},
{
"pmid": "23796313",
"title": "Preadmission interventions to prevent postoperative complications in older cardiac surgery patients: a systematic review.",
"abstract": "OBJECTIVE(S)\nThe literature on postoperative complications in cardiac surgery patients shows high incidences of postoperative complications such as delirium, depression, pressure ulcer, infection, pulmonary complications and atrial fibrillation. These complications are associated with functional and cognitive decline and a decrease in the quality of life after discharge. Several studies attempted to prevent one or more postoperative complications by preoperative interventions. Here we provide a comprehensive overview of both single and multiple component preadmission interventions designed to prevent postoperative complications.\n\n\nMETHODS\nWe systematically reviewed the literature following the PRISMA statement guidelines.\n\n\nRESULTS\nOf 1335 initial citations, 31 were subjected to critical appraisal. Finally, 23 studies were included, of which we derived a list of interventions that can be applied in the preadmission period to effectively reduce postoperative depression, infection, pulmonary complications, atrial fibrillation, prolonged intensive care unit stay and hospital stay in older elective cardiac surgery patients. No high quality studies were found describing effective interventions to prevent postoperative delirium. We did not find studies specifically targeting the prevention of pressure ulcers in this patient population.\n\n\nCONCLUSIONS\nMulti-component approaches that include different single interventions have the strongest effect in preventing postoperative depression, pulmonary complications, prolonged intensive care unit stay and hospital stay. Postoperative infection can be best prevented by disinfection with chlorhexidine combined with immune-enhancing nutritional supplements. Atrial fibrillation might be prevented by ingestion of N-3 polyunsaturated fatty acids. High quality studies are urgently needed to evaluate preadmission preventive strategies to reduce postoperative delirium or pressure ulcers in older elective cardiac surgery patients."
},
{
"pmid": "28508776",
"title": "Development and Validation of a Multivariable Prediction Model for the Occurrence of Delirium in Hospitalized Gerontopsychiatry and Internal Medicine Patients.",
"abstract": "Delirium is an acute confusion condition, which is common in elderly and often misdiagnosed in hospitalized patients. Early identification and prevention of delirium could reduce morbidity and mortality rates in those affected and reduce hospitalization costs. We have developed and validated a multivariate prediction model that predicts delirium and gives an early warning to physicians. A large set of patient electronic medical records have been used in developing the models. Classical learning algorithms have been used to develop the models and compared the results. Excellent results were obtained with the feature set and parameter settings attaining accuracy of 84%."
},
{
"pmid": "28186224",
"title": "Risk prediction models for delirium in the intensive care unit after cardiac surgery: a systematic review and independent external validation.",
"abstract": "Numerous risk prediction models are available for predicting delirium after cardiac surgery, but few have been directly compared with one another or been validated in an independent data set. We conducted a systematic review to identify validated risk prediction models of delirium (using the Confusion Assessment Method-Intensive Care Unit tool) after cardiac surgery and assessed the transportability of the risk prediction models on a prospective cohort of 600 consecutive patients undergoing cardiac surgery at a university hospital in Hong Kong from July 2013 to July 2015. The discrimination (c-statistic), calibration (GiViTI calibration belt), and clinical usefulness (decision curve analysis) of the risk prediction models were examined in a stepwise manner. Three published high-quality intensive care unit delirium risk prediction models (n=5939) were identified: Katznelson, the original PRE-DELIRIC, and the international recalibrated PRE-DELIRIC model. Delirium occurred in 83 patients (13.8%, 95% CI: 11.2-16.9%). After updating the intercept and regression coefficients in the Katznelson model, there was fair discrimination (0.62, 95% CI: 0.58-0.66) and good calibration. As the original PRE-DELIRIC model was already validated externally and recalibrated in six countries, we performed a logistic calibration on the recalibrated model and found acceptable discrimination (0.75, 95% CI: 0.72-0.79) and good calibration. Decision curve analysis demonstrated that the recalibrated PRE-DELIRIC risk model was marginally more clinically useful than the Katznelson model. Current models predict delirium risk in the intensive care unit after cardiac surgery with only fair to moderate accuracy and are insufficient for routine clinical use."
},
{
"pmid": "30430256",
"title": "Prediction of Incident Delirium Using a Random Forest classifier.",
"abstract": "Delirium is a serious medical complication associated with poor outcomes. Given the complexity of the syndrome, prevention and early detection are critical in mitigating its effects. We used Confusion Assessment Method (CAM) screening and Electronic Health Record (EHR) data for 64,038 inpatient visits to train and test a model predicting delirium arising in hospital. Incident delirium was defined as the first instance of a positive CAM occurring at least 48 h into a hospital stay. A Random Forest machine learning algorithm was used with demographic data, comorbidities, medications, procedures, and physiological measures. The data set was randomly partitioned 80% / 20% for training and validating the predictive model, respectively. Of the 51,240 patients in the training set, 2774 (5.4%) experienced delirium during their hospital stay; and of the 12,798 patients in the validation set, 701 (5.5%) experienced delirium. Under-sampling of the delirium negative population was used to address the class imbalance. The Random Forest predictive model yielded an area under the receiver operating characteristic curve (ROC AUC) of 0.909 (95% CI 0.898 to 0.921). Important variables in the model included previously identified predisposing and precipitating risk factors. This machine learning approach displayed a high degree of accuracy and has the potential to provide a clinically useful predictive model for earlier intervention in those patients at greatest risk of developing delirium."
},
{
"pmid": "29035732",
"title": "Development and validation of an automated delirium risk assessment system (Auto-DelRAS) implemented in the electronic health record system.",
"abstract": "BACKGROUND\nA key component of the delirium management is prevention and early detection.\n\n\nOBJECTIVE\nTo develop an automated delirium risk assessment system (Auto-DelRAS) that automatically alerts health care providers of an intensive care unit (ICU) patient's delirium risk based only on data collected in an electronic health record (EHR) system, and to evaluate the clinical validity of this system.\n\n\nDESIGN\nCohort and system development designs were used.\n\n\nSETTING\nMedical and surgical ICUs in two university hospitals in Seoul, Korea.\n\n\nPARTICIPANTS\nA total of 3284 patients for the development of Auto-DelRAS, 325 for external validation, 694 for validation after clinical applications.\n\n\nMETHODS\nThe 4211 data items were extracted from the EHR system and delirium was measured using CAM-ICU (Confusion Assessment Method for Intensive Care Unit). The potential predictors were selected and a logistic regression model was established to create a delirium risk scoring algorithm to construct the Auto-DelRAS. The Auto-DelRAS was evaluated at three months and one year after its application to clinical practice to establish the predictive validity of the system.\n\n\nRESULTS\nEleven predictors were finally included in the logistic regression model. The results of the Auto-DelRAS risk assessment were shown as high/moderate/low risk on a Kardex screen. The predictive validity, analyzed after the clinical application of Auto-DelRAS after one year, showed a sensitivity of 0.88, specificity of 0.72, positive predictive value of 0.53, negative predictive value of 0.94, and a Youden index of 0.59.\n\n\nCONCLUSIONS\nA relatively high level of predictive validity was maintained with the Auto-DelRAS system, even one year after it was applied to clinical practice."
},
{
"pmid": "18999030",
"title": "Evaluation of a dynamic bayesian belief network to predict osteoarthritic knee pain using data from the osteoarthritis initiative.",
"abstract": "The most common cause of disability in older adults in the United States is osteoarthritis. To address the problem of early disease prediction, we have constructed a Bayesian belief network (BBN) composed of knee OA-related symptoms to support prognostic queries. The purpose of this study is to evaluate a static and dynamic BBN--based on the NIH Osteoarthritis Initiative (OAI) data--in predicting the likelihood of a patient being diagnosed with knee OA. Initial validation results are promising: our model outperforms a logistic regression model in several designed studies. We can conclude that our model can effectively predict the symptoms that are commonly associated with the presence of knee OA."
},
{
"pmid": "28662816",
"title": "Premature ventricular contraction detection combining deep neural networks and rules inference.",
"abstract": "Premature ventricular contraction (PVC), which is a common form of cardiac arrhythmia caused by ectopic heartbeat, can lead to life-threatening cardiac conditions. Computer-aided PVC detection is of considerable importance in medical centers or outpatient ECG rooms. In this paper, we proposed a new approach that combined deep neural networks and rules inference for PVC detection. The detection performance and generalization were studied using publicly available databases: the MIT-BIH arrhythmia database (MIT-BIH-AR) and the Chinese Cardiovascular Disease Database (CCDD). The PVC detection accuracy on the MIT-BIH-AR database was 99.41%, with a sensitivity and specificity of 97.59% and 99.54%, respectively, which were better than the results from other existing methods. To test the generalization capability, the detection performance was also evaluated on the CCDD. The effectiveness of the proposed method was confirmed by the accuracy (98.03%), sensitivity (96.42%) and specificity (98.06%) with the dataset over 140,000 ECG recordings of the CCDD."
},
{
"pmid": "29241658",
"title": "Spatiotemporal Bayesian networks for malaria prediction.",
"abstract": "Targeted intervention and resource allocation are essential for effective malaria control, particularly in remote areas, with predictive models providing important information for decision making. While a diversity of modeling technique have been used to create predictive models of malaria, no work has made use of Bayesian networks. Bayes nets are attractive due to their ability to represent uncertainty, model time lagged and nonlinear relations, and provide explanations. This paper explores the use of Bayesian networks to model malaria, demonstrating the approach by creating village level models with weekly temporal resolution for Tha Song Yang district in northern Thailand. The networks are learned using data on cases and environmental covariates. Three types of networks are explored: networks for numeric prediction, networks for outbreak prediction, and networks that incorporate spatial autocorrelation. Evaluation of the numeric prediction network shows that the Bayes net has prediction accuracy in terms of mean absolute error of about 1.4 cases for 1 week prediction and 1.7 cases for 6 week prediction. The network for outbreak prediction has an ROC AUC above 0.9 for all prediction horizons. Comparison of prediction accuracy of both Bayes nets against several traditional modeling approaches shows the Bayes nets to outperform the other models for longer time horizon prediction of high incidence transmission. To model spread of malaria over space, we elaborate the models with links between the village networks. This results in some very large models which would be far too laborious to build by hand. So we represent the models as collections of probability logic rules and automatically generate the networks. Evaluation of the models shows that the autocorrelation links significantly improve prediction accuracy for some villages in regions of high incidence. We conclude that spatiotemporal Bayesian networks are a highly promising modeling alternative for prediction of malaria and other vector-borne diseases."
},
{
"pmid": "23414459",
"title": "Using the random forest method to detect a response shift in the quality of life of multiple sclerosis patients: a cohort study.",
"abstract": "BACKGROUND\nMultiple sclerosis (MS), a common neurodegenerative disease, has well-described associations with quality of life (QoL) impairment. QoL changes found in longitudinal studies are difficult to interpret due to the potential response shift (RS) corresponding to respondents' changing standards, values, and conceptualization of QoL. This study proposes to test the capacity of Random Forest (RF) for detecting RS reprioritization as the relative importance of QoL domains' changes over time.\n\n\nMETHODS\nThis was a longitudinal observational study. The main inclusion criteria were patients 18 years old or more with relapsing-remitting multiple sclerosis. Every 6 months up to month 24, QoL was recorded using generic and MS-specific questionnaires (MusiQoL and SF-36). At 24 months, individuals were divided into two 'disability change' groups: worsened and not-worsened patients. The RF method was performed based on Breiman's description. Analyses were performed to determine which QoL scores of SF-36 predicted the MusiQoL index. The average variable importance (AVI) was estimated.\n\n\nRESULTS\nA total of 417 (79.6%) patients were defined as not-worsened and 107 (20.4%) as worsened. A clear RS was identified in worsened patients. While the mental score AVI was almost one third higher than the physical score AVI at 12 months, it was 1.5 times lower at 24 months.\n\n\nCONCLUSION\nThis work confirms that the RF method offers a useful statistical approach for RS detection. How to integrate the RS in the interpretation of QoL scores remains a challenge for future research.\n\n\nTRIAL REGISTRATION\nClinicalTrials.gov identifier: NCT00702065."
},
{
"pmid": "25738806",
"title": "The precision-recall plot is more informative than the ROC plot when evaluating binary classifiers on imbalanced datasets.",
"abstract": "Binary classifiers are routinely evaluated with performance measures such as sensitivity and specificity, and performance is frequently illustrated with Receiver Operating Characteristics (ROC) plots. Alternative measures such as positive predictive value (PPV) and the associated Precision/Recall (PRC) plots are used less frequently. Many bioinformatics studies develop and evaluate classifiers that are to be applied to strongly imbalanced datasets in which the number of negatives outweighs the number of positives significantly. While ROC plots are visually appealing and provide an overview of a classifier's performance across a wide range of specificities, one can ask whether ROC plots could be misleading when applied in imbalanced classification scenarios. We show here that the visual interpretability of ROC plots in the context of imbalanced datasets can be deceptive with respect to conclusions about the reliability of classification performance, owing to an intuitive but wrong interpretation of specificity. PRC plots, on the other hand, can provide the viewer with an accurate prediction of future classification performance due to the fact that they evaluate the fraction of true positives among positive predictions. Our findings have potential implications for the interpretation of a large number of studies that use ROC plots on imbalanced datasets."
},
{
"pmid": "8357112",
"title": "A predictive model for delirium in hospitalized elderly medical patients based on admission characteristics.",
"abstract": "OBJECTIVE\nTo prospectively develop and validate a predictive model for the occurrence of new delirium in hospitalized elderly medical patients based on characteristics present at admission.\n\n\nDESIGN\nTwo prospective cohort studies done in tandem.\n\n\nSETTING\nUniversity teaching hospital.\n\n\nPATIENTS\nThe development cohort included 107 hospitalized general medical patients 70 years or older who did not have dementia or delirium at admission. The validation cohort included 174 comparable patients.\n\n\nMEASUREMENTS\nPatients were assessed daily for delirium using a standardized, validated instrument. The predictive model developed in the initial cohort was then validated in a separate cohort of patients.\n\n\nRESULTS\nDelirium developed in 27 of 107 patients (25%) in the development cohort. Four independent baseline risk factors for delirium were identified using proportional hazards analysis: These included vision impairment (adjusted relative risk, 3.5; 95% Cl, 1.2 to 10.7); severe illness (relative risk, 3.5; Cl, 1.5 to 8.2); cognitive impairment (relative risk, 2.8; Cl, 1.2 to 6.7); and a high blood urea nitrogen/creatinine ratio (relative risk, 2.0; Cl, 0.9 to 4.6). A risk stratification system was developed by assigning 1 point for each risk factor present. Rates of delirium for low- (0 points), intermediate- (1 to 2 points), and high-risk (3 to 4 points) groups were 9%, 23%, and 83% (P < 0.0001), respectively. The corresponding rates in the validation cohort, in which 29 of 174 patients (17%) developed delirium, were 3%, 16%, and 32% (P < 0.002). The rates of death or nursing home placement, outcomes potentially related to delirium, were 9%, 16%, and 42% (P = 0.02) in the development cohort and 3%, 14%, and 26% (P = 0.007) in the validation cohort.\n\n\nCONCLUSIONS\nDelirium among elderly hospitalized patients is common, and a simple predictive model based on four risk factors can be used at admission to identify elderly persons at the greatest risk."
},
{
"pmid": "8831879",
"title": "Predicting delirium in elderly patients: development and validation of a risk-stratification model.",
"abstract": "Delirium is a common and serious complication of acute illness in elderly patients. The aim of this study was to develop and validate a model for predicting development of delirium in elderly medical inpatients who did not have delirium on admission. Consecutive admissions to an acute geriatric unit underwent standardized cognitive assessment every 48 hours. Delirium was diagnosed according to DSM-3 criteria. Independent predictors of delirium in a derivation group of 100 patients were determined using stepwise logistic regression analysis; the predictive model comprised dementia, severe illness and elevated serum urea. This model performed well in a validation group of 84 patients. We conclude that elderly medical patients can be stratified according to their risk for developing delirium using a simple clinical model."
},
{
"pmid": "22323509",
"title": "Development and validation of PRE-DELIRIC (PREdiction of DELIRium in ICu patients) delirium prediction model for intensive care patients: observational multicentre study.",
"abstract": "OBJECTIVES\nTo develop and validate a delirium prediction model for adult intensive care patients and determine its additional value compared with prediction by caregivers.\n\n\nDESIGN\nObservational multicentre study.\n\n\nSETTING\nFive intensive care units in the Netherlands (two university hospitals and three university affiliated teaching hospitals).\n\n\nPARTICIPANTS\n3056 intensive care patients aged 18 years or over.\n\n\nMAIN OUTCOME MEASURE\nDevelopment of delirium (defined as at least one positive delirium screening) during patients' stay in intensive care.\n\n\nRESULTS\nThe model was developed using 1613 consecutive intensive care patients in one hospital and temporally validated using 549 patients from the same hospital. For external validation, data were collected from 894 patients in four other hospitals. The prediction (PRE-DELIRIC) model contains 10 risk factors-age, APACHE-II score, admission group, coma, infection, metabolic acidosis, use of sedatives and morphine, urea concentration, and urgent admission. The model had an area under the receiver operating characteristics curve of 0.87 (95% confidence interval 0.85 to 0.89) and 0.86 after bootstrapping. Temporal validation and external validation resulted in areas under the curve of 0.89 (0.86 to 0.92) and 0.84 (0.82 to 0.87). The pooled area under the receiver operating characteristics curve (n=3056) was 0.85 (0.84 to 0.87). The area under the curve for nurses' and physicians' predictions (n=124) was significantly lower at 0.59 (0.49 to 0.70) for both.\n\n\nCONCLUSION\nThe PRE-DELIRIC model for intensive care patients consists of 10 risk factors that are readily available within 24 hours after intensive care admission and has a high predictive value. Clinical prediction by nurses and physicians performed significantly worse. The model allows for early prediction of delirium and initiation of preventive measures. Trial registration Clinical trials NCT00604773 (development study) and NCT00961389 (validation study)."
},
{
"pmid": "19104172",
"title": "Preoperative use of statins is associated with reduced early delirium rates after cardiac surgery.",
"abstract": "BACKGROUND\nDelirium is an acute deterioration of brain function characterized by fluctuating consciousness and an inability to maintain attention. Use of statins has been shown to decrease morbidity and mortality after major surgical procedures. The objective of this study was to determine an association between preoperative administration of statins and postoperative delirium in a large prospective cohort of patients undergoing cardiac surgery with cardiopulmonary bypass.\n\n\nMETHODS\nAfter Institutional Review Board approval, data were prospectively collected on consecutive patients undergoing cardiac surgery with cardiopulmonary bypass from April 2005 to June 2006 in an academic hospital. All patients were screened for delirium during their hospitalization using the Confusion Assessment Method in the intensive care unit. Multivariable logistic regression analysis was used to identify independent perioperative predictors of delirium after cardiac surgery. Statins were tested for a potential protective effect.\n\n\nRESULTS\nOf the 1,059 patients analyzed, 122 patients (11.5%) had delirium at any time during their cardiovascular intensive care unit stay. Administration of statins had a protective effect, reducing the odds of delirium by 46%. Independent predictors of postoperative delirium included older age, preoperative depression, preoperative renal dysfunction, complex cardiac surgery, perioperative intraaortic balloon pump support, and massive blood transfusion. The model was reliable (Hosmer-Lemeshow test, P = 0.3) and discriminative (area under receiver operating characteristic curve = 0.77).\n\n\nCONCLUSIONS\nPreoperative administration of statins is associated with the reduced risk of postoperative delirium after cardiac surgery with cardiopulmonary bypass."
},
{
"pmid": "23314969",
"title": "Incidence and predictors for delirium in hospitalized elderly patients: a retrospective cohort study.",
"abstract": "AIM\nto determine the incidence and predictors for delirium and to develop a prediction model for delirium in hospitalized elderly patient in Indonesia.\n\n\nMETHODS\na retrospective cohort study was conducted in elderly patients (aged 60 years and older) who were hospitalized in Internal Medicine Ward and Acute Geriatric Ward Cipto Mangunkusumo Hospital from January 2008 until December 2010. Patients were not delirious on admission. Twelve predefined predictors for development of delirium during hospitalization were identified on admission. Independent predictors for delirium were identified by Cox's proportional hazard regression analysis and each independent predictor was quantified to develop delirium prediction model. The calibration performance of the model was tested by Hosmer-Lameshow test and its discrimination ability was determined by calculating area under the receiver operating characteristic curve (AUC).\n\n\nRESULTS\nsubjects consist of 457 patients, predominantly male (52.5%) and were in 60-69 age group (55.8%), with mean age of 69.6 (SD 7.09) years old. Delirium developed in 86 patients (cumulative incidence 18.8%, incidence density 0.021 per person-days) during first fourteen-days of hospitalization. Three independent predictors for delirium were identified, including: infection (without sepsis, adjusted HR1.83 (95% CI 0.82-4.10); with sepsis, adjusted HR 4.86, 95% CI 2.14-11.04), cognitive impairment (adjusted HR 3.12; 95%CI 1.89-5.13) and decrease of functional status (adjusted HR 1.74; 95% CI 1.07-2.82). Predictive model was performed using the final model of multivariate analysis and stratified into three levels: low- (rate of delirium 4.4%), intermediate- (32.8%), and high-risk (54.7%) groups.The Hosmer-Lemeshow test revealed good precision (p-value 0,066) and the AUC showed good discrimination ability (0.82, 95% CI 0.78-0.88).\n\n\nCONCLUSION\nincidence of delirium is 18.8% in hospitalized elderly patients, with incidence density of 0.021 per person days. Infections, cognitive impairment, and decrease of functional status on admission are independent predictors for the development of delirium during hospitalization."
},
{
"pmid": "24064236",
"title": "Development and validation of a delirium predictive score in older people.",
"abstract": "BACKGROUND\ndelirium is frequently under diagnosed in older hospitalised patients. Predictive models have not been widely incorporated in clinical practice.\n\n\nOBJECTIVE\nto develop and validate a predictive score for incident delirium.\n\n\nDESIGN AND SETTING\ntwo consecutive observational prospective cohorts (development and validation) in a university affiliated hospital.\n\n\nSUBJECTS\ninpatients 65 years and older.\n\n\nMETHODS\nin the development cohort patients were assessed within the first 48 h of admission, and every 48 h thereafter, using the confusion assessment method to diagnose delirium and data were collected on comorbidity, illness severity, functional status and laboratory. Delirium predictive score (DPS) was constructed in the development cohort using variables associated with incident delirium in the multivariate analysis (P < 0.05), and then tested in a validation cohort of comparable patients, admitted without delirium. Receiver operating characteristic (ROC) analysis and likelihood ratio (LR) were calculated.\n\n\nRESULTS\nthe development cohort included 374 patients, incident delirium occurred in 25. After multivariate analysis incident delirium was independently associated with lower functional status (Barthel Index) and a proxy for dehydration (elevated urea to creatinine ratio). Using these variables, DPS was constructed with a performance in the ROC curve area of 0.86 (95% CI: 0.82-0.91) and (-) LR = 0.16 and (+) LR = 3.4. The validation cohort included 104 patients and the performance of the score was ROC 0.78 (95% CI: 0.66-0.90).\n\n\nCONCLUSIONS\nThis simple predictive model highlights functional status and a proxy for dehydration as a useful tool for identifying older patients that may benefit from close monitoring and preventive care for early diagnosis of delirium."
},
{
"pmid": "30894129",
"title": "Postoperative delirium in critically ill surgical patients: incidence, risk factors, and predictive scores.",
"abstract": "BACKGROUND\nA common postoperative complication found among patients who are critically ill is delirium, which has a high mortality rate. A predictive model is needed to identify high-risk patients in order to apply strategies which will prevent and/or reduce adverse outcomes.\n\n\nOBJECTIVES\nTo identify the incidence of, and the risk factors for, postoperative delirium (POD) in surgical intensive care unit (SICU) patients, and to determine predictive scores for the development of POD.\n\n\nMETHODS\nThis study enrolled adults aged over 18 years who had undergone an operation within the preceding week and who had been admitted to a SICU for a period that was expected to be longer than 24 h. The CAM - ICU score was used to determine the occurrence of delirium.\n\n\nRESULTS\nOf the 250 patients enrolled, delirium was found in 61 (24.4%). The independent risk factors for delirium that were identified by a multivariate analysis comprised age, diabetes mellitus, severity of disease (SOFA score), perioperative use of benzodiazepine, and mechanical ventilation. A predictive score (age + (5 × SOFA) + (15 × Benzodiazepine use) + (20 × DM) + (20 × mechanical ventilation) + (20 × modified IQCODE > 3.42)) was created. The area under the receiver operating characteristic (ROC) curve (AUC) was 0.84 (95% CI: 0.786 to 0.897). The cut point of 125 demonstrated a sensitivity of 72.13% and a specificity of 80.95%, and the hospital mortality rate was significantly greater among the delirious than the non-delirious patients (25% vs. 6%, p < 0.01).\n\n\nCONCLUSIONS\nPOD was experienced postoperatively by a quarter of the surgical patients who were critically ill. A risk score utilizing 6 variables was able to predict which patients would develop POD. The identification of high-risk patients following SICU admission can provide a basis for intervention strategies to improve outcomes.\n\n\nTRIAL REGISTRATION\nThai Clinical Trials Registry TCTR20181204006 . Date registered on December 4, 2018. Retrospectively registered."
},
{
"pmid": "24373760",
"title": "The changing face of cardiac surgery: practice patterns and outcomes 2001-2010.",
"abstract": "BACKGROUND\nAdvances in cardiac surgical care have allowed for successful surgery in high-risk elderly patients. Advances in percutaneous coronary intervention (PCI) techniques and expanded indications for PCI have resulted in a decrease in referrals for coronary artery bypass grafting (CABG). Our objective was to document changes in practice patterns and outcomes in a single tertiary cardiac surgery centre serving a large geographic area.\n\n\nMETHODS\nFor all cardiac surgery cases performed from 2001-2010 we examined its use, patient clinical characteristics, and outcomes. Frailty was assessed using a measure we have previously demonstrated to be associated with adverse outcomes.\n\n\nRESULTS\nDuring the study period, annual case volume decreased by 13%. The number of isolated CABG cases declined, and valve surgery and other complex procedures increased. The proportion of patients aged ≥ 80 years rose from 7%-12%, and the proportion of frail patients increased from 4%-10%. Although unadjusted in-hospital mortality remained relatively unchanged, intensive care unit (ICU) stays and prolonged institutional care increased. Older age and frailty were associated with mortality, prolonged ICU stays, prolonged institutional care, and a composite of mortality and major morbidities.\n\n\nCONCLUSIONS\nOur findings showed a decline in CABG, an increase in more complex operations, and an increase in prolonged ICU stays and prolonged institutional care. The proportion of frail and elderly patients increased over time and these patient groups were at higher risk of adverse postoperative outcomes. Particular attention is required in the decision for surgery and perioperative management of these patients."
},
{
"pmid": "29054250",
"title": "Using anchors from free text in electronic health records to diagnose postoperative delirium.",
"abstract": "OBJECTIVES\nPostoperative delirium is a common complication after major surgery among the elderly. Despite its potentially serious consequences, the complication often goes undetected and undiagnosed. In order to provide diagnosis support one could potentially exploit the information hidden in free text documents from electronic health records using data-driven clinical decision support tools. However, these tools depend on labeled training data and can be both time consuming and expensive to create.\n\n\nMETHODS\nThe recent learning with anchors framework resolves this problem by transforming key observations (anchors) into labels. This is a promising framework, but it is heavily reliant on clinicians knowledge for specifying good anchor choices in order to perform well. In this paper we propose a novel method for specifying anchors from free text documents, following an exploratory data analysis approach based on clustering and data visualization techniques. We investigate the use of the new framework as a way to detect postoperative delirium.\n\n\nRESULTS\nBy applying the proposed method to medical data gathered from a Norwegian university hospital, we increase the area under the precision-recall curve from 0.51 to 0.96 compared to baselines.\n\n\nCONCLUSIONS\nThe proposed approach can be used as a framework for clinical decision support for postoperative delirium."
},
{
"pmid": "25953014",
"title": "Prediction of in-hospital mortality after ruptured abdominal aortic aneurysm repair using an artificial neural network.",
"abstract": "OBJECTIVE\nRuptured abdominal aortic aneurysm (rAAA) carries a high mortality rate, even with prompt transfer to a medical center. An artificial neural network (ANN) is a computational model that improves predictive ability through pattern recognition while continually adapting to new input data. The goal of this study was to effectively use ANN modeling to provide vascular surgeons a discriminant adjunct to assess the likelihood of in-hospital mortality on a pending rAAA admission using easily obtainable patient information from the field.\n\n\nMETHODS\nOf 332 total patients from a single institution from 1998 to 2013 who had attempted rAAA repair, 125 were reviewed for preoperative factors associated with in-hospital mortality; 108 patients received an open operation, and 17 patients received endovascular repair. Five variables were found significant on multivariate analysis (P < .05), and four of these five (preoperative shock, loss of consciousness, cardiac arrest, and age) were modeled by multiple logistic regression and an ANN. These predictive models were compared against the Glasgow Aneurysm Score. All models were assessed by generation of receiver operating characteristic curves and actual vs predicted outcomes plots, with area under the curve and Pearson r(2) value as the primary measures of discriminant ability.\n\n\nRESULTS\nOf the 125 patients, 53 (42%) did not survive to discharge. Five preoperative factors were significant (P < .05) independent predictors of in-hospital mortality in multivariate analysis: advanced age, renal disease, loss of consciousness, cardiac arrest, and shock, although renal disease was excluded from the models. The sequential accumulation of zero to four of these risk factors progressively increased overall mortality rate, from 11% to 16% to 44% to 76% to 89% (age ≥ 70 years considered a risk factor). Algorithms derived from multiple logistic regression, ANN, and Glasgow Aneurysm Score models generated area under the curve values of 0.85 ± 0.04, 0.88 ± 0.04 (training set), and 0.77 ± 0.06 and Pearson r(2) values of .36, .52 and .17, respectively. The ANN model represented the most discriminant of the three.\n\n\nCONCLUSIONS\nAn ANN-based predictive model may represent a simple, useful, and highly discriminant adjunct to the vascular surgeon in accurately identifying those patients who may carry a high mortality risk from attempted repair of rAAA, using only easily definable preoperative variables. Although still requiring external validation, our model is available for demonstration at https://redcap.vanderbilt.edu/surveys/?s=NN97NM7DTK."
},
{
"pmid": "21242556",
"title": "Impact of electronic health record clinical decision support on diabetes care: a randomized trial.",
"abstract": "PURPOSE\nWe wanted to assess the impact of an electronic health record-based diabetes clinical decision support system on control of hemoglobin A(1c) (glycated hemoglobin), blood pressure, and low-density lipoprotein (LDL) cholesterol levels in adults with diabetes.\n\n\nMETHODS\nWe conducted a clinic-randomized trial conducted from October 2006 to May 2007 in Minnesota. Included were 11 clinics with 41 consenting primary care physicians and the physicians' 2,556 patients with diabetes. Patients were randomized either to receive or not to receive an electronic health record (EHR)-based clinical decision support system designed to improve care for those patients whose hemoglobin A(1c), blood pressure, or LDL cholesterol levels were higher than goal at any office visit. Analysis used general and generalized linear mixed models with repeated time measurements to accommodate the nested data structure.\n\n\nRESULTS\nThe intervention group physicians used the EHR-based decision support system at 62.6% of all office visits made by adults with diabetes. The intervention group diabetes patients had significantly better hemoglobin A(1c) (intervention effect -0.26%; 95% confidence interval, -0.06% to -0.47%; P=.01), and better maintenance of systolic blood pressure control (80.2% vs 75.1%, P=.03) and borderline better maintenance of diastolic blood pressure control (85.6% vs 81.7%, P =.07), but not improved low-density lipoprotein cholesterol levels (P = .62) than patients of physicians randomized to the control arm of the study. Among intervention group physicians, 94% were satisfied or very satisfied with the intervention, and moderate use of the support system persisted for more than 1 year after feedback and incentives to encourage its use were discontinued.\n\n\nCONCLUSIONS\nEHR-based diabetes clinical decision support significantly improved glucose control and some aspects of blood pressure control in adults with type 2 diabetes."
},
{
"pmid": "24364769",
"title": "Can intensive care unit delirium be prevented and reduced? Lessons learned and future directions.",
"abstract": "Delirium is a form of acute brain injury that occurs in up to 80% of critically ill patients. It is a source of enormous societal and financial burdens due to increased mortality, prolonged intensive care unit (ICU) and hospital stays, and long-term neuropsychological and functional deficits in ICU survivors. These poor outcomes are not only independently associated with the development of delirium but are also associated with increasing delirium duration. Therefore, interventions should strive both to prevent the occurrence of ICU delirium and to limit its persistence. Both patient-centered and ICU-acquired risk factors need to be addressed early in the ICU course to maximize the efficacy of prevention strategies and to improve long-term outcomes of ICU patients. In this article, we review strategies for early detection of patients who are delirious and who are at high risk for developing delirium, and we present a clinically useful ICU delirium prevention and reduction strategy for clinicians to incorporate into their daily practice."
},
{
"pmid": "23355807",
"title": "Improving delirium care through early intervention: from bench to bedside to boardroom.",
"abstract": "Delirium is a complex neuropsychiatric syndrome that impacts adversely upon patient outcomes and healthcare outcomes. Delirium occurs in approximately one in five hospitalised patients and is especially common in the elderly and patients who are highly morbid and/or have pre-existing cognitive impairment. However, efforts to improve management of delirium are hindered by gaps in our knowledge and issues that reflect a disparity between existing knowledge and real-world practice. This review focuses on evidence that can assist in prevention, earlier detection and more timely and effective pharmacological and non-pharmacological management of emergent cases and their aftermath. It points towards a new approach to delirium care, encompassing laboratory and clinical aspects and health services realignment supported by health managers prioritising delirium on the healthcare change agenda. Key areas for future research and service organisation are outlined in a plan for improved delirium care across the range of healthcare settings and patient populations in which it occurs."
},
{
"pmid": "21994844",
"title": "A clinical update on delirium: from early recognition to effective management.",
"abstract": "Delirium is a neuropsychiatric syndrome characterized by altered consciousness and attention with cognitive, emotional and behavioural symptoms. It is particularly frequent in elderly people with medical or surgical conditions and is associated with adverse outcomes. Predisposing factors render the subject more vulnerable to a congregation of precipitating factors which potentially affect brain function and induce an imbalance in all the major neurotransmitter systems. Early diagnosis of delirium is crucial to improve the prognosis of patients requiring the identification of subtle and fluctuating signs. Increased awareness of clinical staff, particularly nurses, and routine screening of cognitive function with standardized instruments, can be decisive to increase detection rates of delirium. General measures to prevent delirium include the implementation of protocols to systematically identify and minimize all risk factors present in a particular clinical setting. As soon as delirium is recognized, prompt removal of precipitating factors is warranted together with environmental changes and early mobilization of patients. Low doses of haloperidol or olanzapine can be used for brief periods, for the behavioural control of delirium. All of these measures are a part of the multicomponent strategy for prevention and treatment of delirium, in which the nursing care plays a vital role."
},
{
"pmid": "25888230",
"title": "A systematic review of implementation strategies for assessment, prevention, and management of ICU delirium and their effect on clinical outcomes.",
"abstract": "INTRODUCTION\nDespite recommendations from professional societies and patient safety organizations, the majority of ICU patients worldwide are not routinely monitored for delirium, thus preventing timely prevention and management. The purpose of this systematic review is to summarize what types of implementation strategies have been tested to improve ICU clinicians' ability to effectively assess, prevent and treat delirium and to evaluate the effect of these strategies on clinical outcomes.\n\n\nMETHOD\nWe searched PubMed, Embase, PsychINFO, Cochrane and CINAHL (January 2000 and April 2014) for studies on implementation strategies that included delirium-oriented interventions in adult ICU patients. Studies were suitable for inclusion if implementation strategies' efficacy, in terms of a clinical outcome, or process outcome was described.\n\n\nRESULTS\nWe included 21 studies, all including process measures, while 9 reported both process measures and clinical outcomes. Some individual strategies such as \"audit and feedback\" and \"tailored interventions\" may be important to establish clinical outcome improvements, but otherwise robust data on effectiveness of specific implementation strategies were scarce. Successful implementation interventions were frequently reported to change process measures, such as improvements in adherence to delirium screening with up to 92%, but relating process measures to outcome changes was generally not possible. In meta-analyses, reduced mortality and ICU length of stay reduction were statistically more likely with implementation programs that employed more (six or more) rather than less implementation strategies and when a framework was used that either integrated current evidence on pain, agitation and delirium management (PAD) or when a strategy of early awakening, breathing, delirium screening and early exercise (ABCDE bundle) was employed. Using implementation strategies aimed at organizational change, next to behavioral change, was also associated with reduced mortality.\n\n\nCONCLUSION\nOur findings may indicate that multi-component implementation programs with a higher number of strategies targeting ICU delirium assessment, prevention and treatment and integrated within PAD or ABCDE bundle have the potential to improve clinical outcomes. However, prospective confirmation of these findings is needed to inform the most effective implementation practice with regard to integrated delirium management and such research should clearly delineate effective practice change from improvements in clinical outcomes."
}
] |
JMIR Medical Informatics | 31674914 | PMC6913747 | 10.2196/15980 | Cohort Selection for Clinical Trials From Longitudinal Patient Records: Text Mining Approach | Background: Clinical trials are an important step in introducing new interventions into clinical practice by generating data on their safety and efficacy. Clinical trials need to ensure that participants are similar so that the findings can be attributed to the interventions studied and not to some other factors. Therefore, each clinical trial defines eligibility criteria, which describe characteristics that must be shared by the participants. Unfortunately, the complexities of eligibility criteria may not allow them to be translated directly into readily executable database queries. Instead, they may require careful analysis of the narrative sections of medical records. Manual screening of medical records is time-consuming, thus negatively affecting the timeliness of the recruitment process. Objective: Track 1 of the 2018 National Natural Language Processing Clinical Challenge focused on the task of cohort selection for clinical trials, aiming to answer the following question: Can natural language processing be applied to narrative medical records to identify patients who meet eligibility criteria for clinical trials? The task required the participating systems to analyze longitudinal patient records to determine if the corresponding patients met the given eligibility criteria. We aimed to describe a system developed to address this task. Methods: Our system consisted of 13 classifiers, one for each eligibility criterion. All classifiers used a bag-of-words document representation model. To prevent the loss of relevant contextual information associated with such representation, a pattern-matching approach was used to extract context-sensitive features. They were embedded back into the text as lexically distinguishable tokens, which were consequently featured in the bag-of-words representation. Supervised machine learning was chosen wherever a sufficient number of both positive and negative instances was available to learn from. A rule-based approach focusing on a small set of relevant features was chosen for the remaining criteria. Results: The system was evaluated using microaveraged F measure. In total, 4 machine learning algorithms, including support vector machine, logistic regression, naïve Bayesian classifier, and gradient tree boosting (GTB), were evaluated on the training data using 10-fold cross-validation. Overall, GTB demonstrated the most consistent performance. Its performance peaked when oversampling was used to balance the training data. The final evaluation was performed on previously unseen test data. On average, the F measure of 89.04% was comparable to 3 of the top ranked performances in the shared task (91.11%, 90.28%, and 90.21%). With an F measure of 88.14%, we significantly outperformed these systems (81.03%, 78.50%, and 70.81%) in identifying patients with advanced coronary artery disease. Conclusions: The holdout evaluation provides evidence that our system was able to identify eligible patients for the given clinical trial with high accuracy. Our approach demonstrates how rule-based knowledge infusion can improve the performance of machine learning algorithms even when trained on a relatively small dataset.
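To make the feature engineering summarized in the Methods above more concrete, the following is a minimal sketch, assuming Python with scikit-learn, of how pattern-matched, context-sensitive findings can be injected back into a note as lexically distinguishable pseudo-tokens that a plain bag-of-words vectorizer then picks up. The single HbA1c pattern, the 6.5 threshold, and the FEAT_* token names are illustrative assumptions, not the patterns or criteria used by the actual system.

```python
# A minimal sketch: pattern-matched numeric findings are appended to the note
# as pseudo-tokens so that an ordinary bag-of-words vectorizer can exploit them.
# The HBA1C pattern, threshold, and token names are hypothetical illustrations.

import re
from sklearn.feature_extraction.text import CountVectorizer

HBA1C = re.compile(r"\bhba1c\b[^0-9]{0,15}([0-9]+(?:\.[0-9]+)?)", re.I)

def inject_context_tokens(note: str) -> str:
    """Append pseudo-tokens summarizing pattern-matched numeric findings."""
    tokens = []
    for match in HBA1C.finditer(note):
        value = float(match.group(1))
        # Illustrative eligibility-style threshold applied to the extracted value.
        tokens.append("FEAT_HBA1C_HIGH" if value >= 6.5 else "FEAT_HBA1C_NORMAL")
    return note + " " + " ".join(tokens)

if __name__ == "__main__":
    notes = [
        "Follow-up visit. HbA1c 7.9 despite metformin.",
        "Routine check. HbA1c 5.4, no new complaints.",
    ]
    enriched = [inject_context_tokens(n) for n in notes]
    vectorizer = CountVectorizer(lowercase=True)
    X = vectorizer.fit_transform(enriched)
    # The vocabulary now contains feat_hba1c_high / feat_hba1c_normal features.
    print(sorted(vectorizer.get_feature_names_out()))
```

The design point is that the downstream classifier stays a standard bag-of-words model; all context sensitivity is pushed into the token-injection step.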
| Related Work: The problem of matching eligibility criteria against patients' electronic medical records (EMRs) can be framed using a variety of natural language processing (NLP) tasks depending on the type and level of automation expected. In the context of decision making, automation can be applied to 4 classes of functions: information acquisition, information analysis, decision selection, and decision implementation [6]. In our scenario, we focused on a clinician as a human operator who, given a collection of EMRs and a set of eligibility criteria, needs to decide which patients should be recruited to a given clinical trial. In this context, we can think of information acquisition as identification of information relevant to the eligibility criteria. This task can be automated by means of information retrieval (IR) or information extraction (IE). IR can be applied to both structured and unstructured components of the EMRs to retrieve relevant records or their parts. The usability of any IR system depends on two key factors: system effectiveness and user utility [7]. A test collection of 56 topics based on patient statements (eg, signs, symptoms, and treatment) and inclusion/exclusion criteria (eg, patient's demographics, laboratory tests, and diagnosis) can be used to evaluate the effectiveness of IR for cohort selection [8]. The utility of IR systems can be improved by designing an intuitive visual query interface easily used by clinical researchers [9]. Both utility and effectiveness depend on how well the system incorporates domain-specific knowledge. An ontology can be used to support term disambiguation, term normalization, and subsumption reasoning. Most studies mapped textual elements to concepts in the Unified Medical Language System (UMLS) for normalization, with few studies discussing the use of semantic Web technologies for phenotyping [10]. For instance, the UMLS hierarchy can be used to expand a query searching for cancer to other related terms (eg, neuroblastoma and glioma). However, using such a broad hierarchy for unsupervised expansion can introduce many irrelevant terms, which can be detrimental to eligibility-screening performance [11]. This problem can be reduced by using the UMLS to bootstrap the creation of custom ontologies relevant to the problem at hand. For example, to identify patients with cerebral aneurysms, a domain-specific ontology was created by querying the UMLS for concepts related to the locations of aneurysms (eg, middle cerebral artery or anterior communicating artery), other clinical phenotypes related to cerebral aneurysms (eg, saccular aneurysm or subarachnoid hemorrhage), associated conditions (eg, polycystic kidney disease), and competing diagnoses (eg, arteriovenous malformation) [12]. Where available, other relevant systems can be used to inform the development of domain-specific ontologies. For instance, the Epilepsy Data Extraction and Annotation (EpiDEA) system uses a novel Epilepsy and Seizure Ontology (EpSO), which builds on the International League Against Epilepsy classification system as its core knowledge resource [9]. The complexity of clinical sublanguage may require new language modeling approaches to be able to formulate multilayered queries and customize the level of linguistic granularity [13].
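As an illustration of the curated, dictionary-based alternative to broad unsupervised UMLS expansion discussed above, the following is a minimal Python sketch; the hand-crafted term map and the simple match-counting score are hypothetical stand-ins for a domain ontology, not the expansion logic of any of the cited systems.

```python
# A minimal, illustrative sketch of dictionary-based query expansion for
# eligibility-criteria retrieval. The term map is a hypothetical, hand-crafted
# stand-in for a curated domain ontology (it is not the UMLS).

from typing import Dict, List, Set

# Hypothetical curated expansions: broad term -> narrower or related terms.
EXPANSIONS: Dict[str, List[str]] = {
    "cancer": ["neuroblastoma", "glioma", "carcinoma"],
    "myocardial infarction": ["mi", "heart attack"],
}

def expand_query(terms: List[str]) -> Set[str]:
    """Return the original query terms plus their curated expansions (lowercased)."""
    expanded = set()
    for term in terms:
        term = term.lower()
        expanded.add(term)
        expanded.update(EXPANSIONS.get(term, []))
    return expanded

def score_document(text: str, terms: List[str]) -> int:
    """Count how many distinct (expanded) query terms occur in a clinical note."""
    text = text.lower()
    return sum(1 for t in expand_query(terms) if t in text)

if __name__ == "__main__":
    note = "MRI consistent with glioma; oncology consult requested."
    print(score_document(note, ["cancer"]))  # -> 1, matched via the expansion
```

Keeping the expansion table small and curated is exactly what limits the irrelevant matches that a full hierarchy walk would introduce.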
This aligned-layer approach to IR [13] incorporates the output of other NLP systems to represent a document or a query using multiple aligned layers consisting of tokens, their part of speech, named entities with mappings to external knowledge sources, and syntactic dependencies among these elements. Other IR efforts focused on directing a clinician's attention toward specific sentences that are relevant for eligibility determination [14]. This is achieved by segmenting the natural language description of eligibility criteria into individual sentences, analyzing them further to identify domain-specific concepts, and using them to identify sentences in the EMRs that make references to these concepts. This approach is designed to work with categorical data but falls short when numerical data need to be interpreted. For instance, 5 numerical values are needed to diagnose metabolic syndrome [15]. Of these values, 3 (triglycerides, high-density lipoprotein cholesterol, and elevated fasting glucose) are stored in the laboratory information system, and, as structured data, are readily available for querying and comparison with referent values. However, in some systems, 2 values may be hidden in the narrative notes (elevated waist circumference and elevated blood pressure). Traditionally, IR approaches are based on the bag-of-words (BoW) model, which represents each document as an unordered collection of features that correspond to the words in a vocabulary for a given document collection. Therefore, by design, IR approaches will be ineffective when it comes to dealing with continuous variables. Conversely, IE based on simple regular expressions can be used to extract numerical values from text and make them amenable to further analysis and interpretation [15-18]. However, the technical feasibility of the IE process does not mean that all relevant attributes are necessarily documented in a single source, as the previous example illustrates. For example, a study on case-finding algorithms for hepatocellular cancer discovered significant differences in performance between 2 types of documents (pathology and radiology reports) [19]. It also revealed a significant difference between the narrative reports and coded fields. This raises an important aspect of the completeness of information recorded in an EMR [15]. It has been established that case finding by the International Classification of Diseases, Ninth Revision (ICD-9) coding alone is not sufficient to reliably identify patients with a particular disease or risk factors [20-22]. A few studies contrasted the utility of structured and unstructured information, with the NLP approaches usually demonstrating better results [19,23-28]. In particular, the use of ICD-9 codes for patient phenotyping demonstrated markedly lower precision (or positive predictive value) [19,24,26]. This finding is compatible with the hypothesis that ICD-9 codes are designed for billing purposes and as such may not capture the nuances of phenotypic characteristics in terms of information completeness, expressiveness, and granularity [23]. The analysis of the strengths and weaknesses of both data sources, together with practical experiments, has led to a consensus that clinical narratives should be used in combination with structured data for eligibility screening [19,23,25,26,28]. Therefore, data fusion is a key component of the information acquisition step in eligibility screening. It should by no means be limited to these 2 modalities of data.
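To illustrate the regular-expression route to numerical values mentioned above (for example, the blood pressure and waist circumference components of metabolic syndrome), the following is a minimal Python sketch; the patterns and the 130/85 mm Hg cut-off are simplified assumptions rather than the rules used in the cited studies.

```python
# A minimal sketch of regular-expression information extraction for numeric
# criteria hidden in narrative notes. The patterns and threshold are
# simplified illustrations, not the rules of the cited systems.

import re
from typing import Optional, Tuple

BP_PATTERN = re.compile(
    r"\b(?:BP|blood pressure)[:\s]*([0-9]{2,3})\s*/\s*([0-9]{2,3})\b", re.I)
WAIST_PATTERN = re.compile(
    r"\bwaist(?:\s+circumference)?[:\s]*([0-9]{2,3})\s*cm\b", re.I)

def extract_bp(text: str) -> Optional[Tuple[int, int]]:
    """Return (systolic, diastolic) if a blood-pressure reading is mentioned."""
    m = BP_PATTERN.search(text)
    return (int(m.group(1)), int(m.group(2))) if m else None

def elevated_bp(text: str) -> Optional[bool]:
    """Interpret the extracted value against an illustrative 130/85 mm Hg cut-off."""
    bp = extract_bp(text)
    if bp is None:
        return None  # criterion cannot be assessed from this note
    systolic, diastolic = bp
    return systolic >= 130 or diastolic >= 85

if __name__ == "__main__":
    note = "Obese patient, waist circumference 104 cm. BP 142/88 on arrival."
    print(extract_bp(note), elevated_bp(note))   # (142, 88) True
    print(WAIST_PATTERN.search(note).group(1))   # 104
```

Unlike a bag-of-words match, the extracted numbers can be compared against referent values, which is what criteria of this kind require.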
For example, beyond structured fields and clinical narratives, clinical electroencephalography (EEG) is the most important investigation in the diagnosis and management of the epilepsies. A multimodal patient cohort retrieval system has been designed to leverage the heterogeneous nature of EEG data by integrating EEG reports with EEG signal data [29]. Though evidently important, data fusion techniques are beyond the scope of this study. Here, we focused exclusively on reviewing the methods used to mine clinical narratives for the purpose of eligibility screening. However, awareness of the need for data fusion should help the reader appreciate that text mining approaches operate under an externally imposed upper bound on their expected performance. We have thus far discussed the role of IR and IE in the context of information acquisition. The clinician is still expected to review the retrieved information to decide who satisfies the eligibility criteria. Text mining can be used to support this process by automating information analysis and decision selection by means of feature extraction and text classification, respectively. Two NLP systems tailored to the clinical domain are most often used to extract rich linguistic and semantic features from the narratives found in EMRs: Medical Language Extraction and Encoding (MedLEE [30]) [16,23,25] and the clinical Text Analysis and Knowledge Extraction System (cTAKES [31]) [9,11,12,16,18,19,32,33]. They model the semantics by mapping text to the UMLS or a custom dictionary if required. Clinical text analysis needs to make fine-grained semantic distinctions, as medical concepts may be negated, may describe someone other than the patient, and may refer to a time other than the present [13]. MedLEE and cTAKES can not only identify concepts of interest but can also interpret their meaning in the context of negation, hedging, and specific sections. Both systems can also perform syntactic analysis to extract linguistic features such as part of speech and syntactic dependencies. Abbreviations are some of the most prominent features of clinical narratives. Unfortunately, both MedLEE and cTAKES demonstrated suboptimal performance in abbreviation recognition [34], which may require the development of bespoke solutions [16,35]. Once the pertinent features have been extracted, they can be exploited by rule-based or machine learning approaches. A review of approaches to identifying patient cohorts using EMRs revealed that out of 97 studies, 24 described rule-based systems; 41 used statistical analyses, data mining, or machine learning; and 22 described hybrid systems [10]. A minimal set of rules is sufficient to accurately extract highly standardized information from the narratives [15]. Their development requires iterative consultation with a clinical expert [26]. Nonetheless, a well-designed rule-based system can achieve good performance on cohort selection even with a small training dataset [36]; limited training data, by contrast, remains a problem for supervised machine learning approaches.
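As a concrete, deliberately naive illustration of the rule-based context handling discussed above, the following Python sketch flags a concept mention as negated when a simple trigger appears in a short window before it; the trigger list and window size are assumptions, and this is not the negation module of MedLEE or cTAKES.

```python
# A minimal, illustrative negation rule: a concept mention is treated as
# negated if a trigger word appears within a few preceding tokens.
# The trigger list and window size are hypothetical.

import re

NEGATION_TRIGGERS = {"no", "denies", "denied", "without", "negative for"}
WINDOW = 5  # number of preceding tokens to inspect

def is_negated(text: str, concept: str) -> bool:
    """Return True if `concept` appears in `text` preceded by a negation trigger."""
    tokens = re.findall(r"[a-z]+", text.lower())
    concept_tokens = concept.lower().split()
    n = len(concept_tokens)
    for i in range(len(tokens) - n + 1):
        if tokens[i:i + n] == concept_tokens:
            window = " " + " ".join(tokens[max(0, i - WINDOW):i]) + " "
            if any(f" {trigger} " in window for trigger in NEGATION_TRIGGERS):
                return True
    return False

if __name__ == "__main__":
    print(is_negated("Patient denies chest pain or dyspnea.", "chest pain"))  # True
    print(is_negated("Recurrent chest pain on exertion.", "chest pain"))       # False
```

Production systems layer many such rules (for hedging, experiencer, temporality, and section context) on top of dictionary-based concept recognition.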
When relevant concepts can be accurately identified from clinical text, both rule-based and machine learning approaches demonstrate good performance, albeit slightly in favor of machine learning [25,33]. A variety of supervised machine learning approaches have been used to support cohort selection, including support vector machines (SVMs) [22,25], decision trees [22], Repeated Incremental Pruning to Produce Error Reduction (RIPPER), random forests [25], C4.5 [33], logistic regression (LR) [25,28], naïve Bayesian (NB) learning [22,37], perceptron [37], conditional random fields [19], and deep learning [29,38]. Unfortunately, few studies report systematic evaluation of a wide range of machine learning algorithms, thus offering little insight into the optimal performance of machine learning for cohort selection [39]. Another issue associated with supervised learning is that of imbalanced data. The number of positive examples will typically vary significantly across the eligibility criteria. The data used for the 2018 National Natural Language Processing Clinical Challenge (n2c2) shared task on cohort selection for clinical trials provide a perfect illustration of this problem [18,36,38]. Yet few systems have tackled this issue with sampling techniques. Instead, they tend to resort to machine learning approaches generally perceived to be the most robust to imbalanced data, for example, SVMs [40,41]. Our review of related work illustrates the ways in which the eligibility screening process can be automated. One study reported that the time for cohort identification was reduced significantly from a few weeks to a few seconds [16]. Others reported a workload reduction of around 90% with automated eligibility screening [42] and a 450% increase in trial screening efficiency [11]. Most recently, the patient screening time was reduced by 34%, allowing the saved time to be redirected to activities that further streamlined teamwork among the clinical research coordinators [43]. The same study showed that the numbers of subjects screened, approached, and enrolled were increased by 14.7%, 11.1%, and 11.1%, respectively. In this study, we aimed to illustrate the complexity of the eligibility screening problem and propose a way in which this task can be automated. | [
"29330082",
"25475878",
"30856272",
"23304396",
"24201027",
"25030032",
"27927935",
"29295172",
"26376462",
"25451102",
"26171080",
"28585184",
"31197354",
"23929403",
"15838413",
"21347124",
"24664671",
"18999285",
"21346976",
"22195222",
"22627647",
"26195183",
"26537487",
"28269938",
"7719797",
"20819853",
"24190931",
"24125142",
"23304375",
"31300825",
"15797003",
"31305921",
"29450781",
"26241355",
"25881112",
"31342909",
"26433122",
"26210362",
"27570656",
"12123149",
"20819858",
"22879764"
] | [
{
"pmid": "29330082",
"title": "Clinical trials recruitment planning: A proposed framework from the Clinical Trials Transformation Initiative.",
"abstract": "Patient recruitment is widely recognized as a key determinant of success for clinical trials. Yet a substantial number of trials fail to reach recruitment goals-a situation that has important scientific, financial, ethical, and policy implications. Further, there are important effects on stakeholders who directly contribute to the trial including investigators, sponsors, and study participants. Despite efforts over multiple decades to identify and address barriers, recruitment challenges persist. To advance a more comprehensive approach to trial recruitment, the Clinical Trials Transformation Initiative (CTTI) convened a project team to examine the challenges and to issue actionable, evidence-based recommendations for improving recruitment planning that extend beyond common study-specific strategies. We describe our multi-stakeholder effort to develop a framework that delineates three areas essential to strategic recruitment planning efforts: (1) trial design and protocol development, (2) trial feasibility and site selection, and (3) communication. Our recommendations propose an upstream approach to recruitment planning that has the potential to produce greater impact and reduce downstream barriers. Additionally, we offer tools to help facilitate adoption of the recommendations. We hope that our framework and recommendations will serve as a guide for initial efforts in clinical trial recruitment planning irrespective of disease or intervention focus, provide a common basis for discussions in this area and generate targets for further analysis and continual improvement."
},
{
"pmid": "25475878",
"title": "Unsuccessful trial accrual and human subjects protections: an empirical analysis of recently closed trials.",
"abstract": "BACKGROUND\nEthical evaluation of risk-benefit in clinical trials is premised on the achievability of resolving research questions motivating an investigation.\n\n\nOBJECTIVE\nTo determine the fraction and number of patients enrolled in trials that were at risk of not meaningfully addressing their primary research objective due to unsuccessful patient accrual.\n\n\nMETHODS\nWe used the National Library of Medicine clinical trial registry to capture all initiated phases 2 and 3 intervention clinical trials that were registered as closed in 2011. We then determined the number that had been terminated due to unsuccessful accrual and the number that had closed after less than 85% of the target number of human subjects had been enrolled. Five factors were tested for association with unsuccessful accrual.\n\n\nRESULTS\nOf 2579 eligible trials, 481 (19%) either terminated for failed accrual or completed with less than 85% expected enrolment, seriously compromising their statistical power. Factors associated with unsuccessful accrual included greater number of eligibility criteria (p = 0.013), non-industry funding (25% vs 16%, p < 0.0001), earlier trial phase (23% vs 16%, p < 0.0001), fewer number of research sites at trial completion (p < 0.0001) and at registration (p < 0.0001), and an active (non-placebo) comparator (23% vs 16%, p < 0.001).\n\n\nCONCLUSION\nA total of 48,027 patients had enrolled in trials closed in 2011 who were unable to answer the primary research question meaningfully. Ethics bodies, investigators, and data monitoring committees should carefully scrutinize trial design, recruitment plans, and feasibility of achieving accrual targets when designing and reviewing trials, monitor accrual once initiated, and take corrective action when accrual is lagging."
},
{
"pmid": "30856272",
"title": "Systematic Review and Meta-Analysis of the Magnitude of Structural, Clinical, and Physician and Patient Barriers to Cancer Clinical Trial Participation.",
"abstract": "BACKGROUND\nBarriers to cancer clinical trial participation have been the subject of frequent study, but the rate of trial participation has not changed substantially over time. Studies often emphasize patient-related barriers, but other types of barriers may have greater impact on trial participation. Our goal was to examine the magnitude of different domains of trial barriers by synthesizing prior research.\n\n\nMETHODS\nWe conducted a systematic review and meta-analysis of studies that examined the trial decision-making pathway using a uniform framework to characterize and quantify structural (trial availability), clinical (eligibility), and patient/physician barrier domains. The systematic review utilized the PubMed, Google Scholar, Web of Science, and Ovid Medline search engines. We used random effects to estimate rates of different domains across studies, adjusting for academic vs community care settings.\n\n\nRESULTS\nWe identified 13 studies (nine in academic and four in community settings) with 8883 patients. A trial was unavailable for patients at their institution 55.6% of the time (95% confidence interval [CI] = 43.7% to 67.3%). Further, 21.5% (95% CI = 10.9% to 34.6%) of patients were ineligible for an available trial, 14.8% (95% CI = 9.0% to 21.7%) did not enroll, and 8.1% (95% CI = 6.3% to 10.0%) enrolled. Rates of trial enrollment in academic (15.9% [95% CI = 13.8% to 18.2%]) vs community (7.0% [95% CI = 5.1% to 9.1%]) settings differed, but not rates of trial unavailability, ineligibility, or non-enrollment.\n\n\nCONCLUSIONS\nThese findings emphasize the enormous need to address structural and clinical barriers to trial participation, which combined make trial participation unachievable for more than three of four cancer patients."
},
{
"pmid": "23304396",
"title": "EpiDEA: extracting structured epilepsy and seizure information from patient discharge summaries for cohort identification.",
"abstract": "Sudden Unexpected Death in Epilepsy (SUDEP) is a poorly understood phenomenon. Patient cohorts to power statistical studies in SUDEP need to be drawn from multiple centers due to the low rate of reported SUDEP incidences. But the current practice of manual chart review of Epilepsy Monitoring Units (EMU) patient discharge summaries is time-consuming, tedious, and not scalable for large studies. To address this challenge in the multi-center NIH-funded Prevention and Risk Identification of SUDEP Mortality (PRISM) Project, we have developed the Epilepsy Data Extraction and Annotation (EpiDEA) system for effective processing of discharge summaries. EpiDEA uses a novel Epilepsy and Seizure Ontology (EpSO), which has been developed based on the International League Against Epilepsy (ILAE) classification system, as the core knowledge resource. By extending the cTAKES natural language processing tool developed at the Mayo Clinic, EpiDEA implements specialized functions to address the unique challenges of processing epilepsy and seizure-related clinical free text in discharge summaries. The EpiDEA system was evaluated on a corpus of 104 discharge summaries from the University Hospitals Case Medical Center EMU and achieved an overall precision of 93.59% and recall of 84.01% with an F-measure of 88.53%. The results were compared against a gold standard created by two epileptologists. We demonstrate the use of EpiDEA for cohort identification through use of an intuitive visual query interface that can be directly used by clinical researchers."
},
{
"pmid": "24201027",
"title": "A review of approaches to identifying patient phenotype cohorts using electronic health records.",
"abstract": "OBJECTIVE\nTo summarize literature describing approaches aimed at automatically identifying patients with a common phenotype.\n\n\nMATERIALS AND METHODS\nWe performed a review of studies describing systems or reporting techniques developed for identifying cohorts of patients with specific phenotypes. Every full text article published in (1) Journal of American Medical Informatics Association, (2) Journal of Biomedical Informatics, (3) Proceedings of the Annual American Medical Informatics Association Symposium, and (4) Proceedings of Clinical Research Informatics Conference within the past 3 years was assessed for inclusion in the review. Only articles using automated techniques were included.\n\n\nRESULTS\nNinety-seven articles met our inclusion criteria. Forty-six used natural language processing (NLP)-based techniques, 24 described rule-based systems, 41 used statistical analyses, data mining, or machine learning techniques, while 22 described hybrid systems. Nine articles described the architecture of large-scale systems developed for determining cohort eligibility of patients.\n\n\nDISCUSSION\nWe observe that there is a rise in the number of studies associated with cohort identification using electronic medical records. Statistical analyses or machine learning, followed by NLP techniques, are gaining popularity over the years in comparison with rule-based systems.\n\n\nCONCLUSIONS\nThere are a variety of approaches for classifying patients into a particular phenotype. Different techniques and data sources are used, and good performance is reported on datasets at respective institutions. However, no system makes comprehensive use of electronic medical records addressing all of their known weaknesses."
},
{
"pmid": "25030032",
"title": "Automated clinical trial eligibility prescreening: increasing the efficiency of patient identification for clinical trials in the emergency department.",
"abstract": "OBJECTIVES\n(1) To develop an automated eligibility screening (ES) approach for clinical trials in an urban tertiary care pediatric emergency department (ED); (2) to assess the effectiveness of natural language processing (NLP), information extraction (IE), and machine learning (ML) techniques on real-world clinical data and trials.\n\n\nDATA AND METHODS\nWe collected eligibility criteria for 13 randomly selected, disease-specific clinical trials actively enrolling patients between January 1, 2010 and August 31, 2012. In parallel, we retrospectively selected data fields including demographics, laboratory data, and clinical notes from the electronic health record (EHR) to represent profiles of all 202795 patients visiting the ED during the same period. Leveraging NLP, IE, and ML technologies, the automated ES algorithms identified patients whose profiles matched the trial criteria to reduce the pool of candidates for staff screening. The performance was validated on both a physician-generated gold standard of trial-patient matches and a reference standard of historical trial-patient enrollment decisions, where workload, mean average precision (MAP), and recall were assessed.\n\n\nRESULTS\nCompared with the case without automation, the workload with automated ES was reduced by 92% on the gold standard set, with a MAP of 62.9%. The automated ES achieved a 450% increase in trial screening efficiency. The findings on the gold standard set were confirmed by large-scale evaluation on the reference set of trial-patient matches.\n\n\nDISCUSSION AND CONCLUSION\nBy exploiting the text of trial criteria and the content of EHRs, we demonstrated that NLP-, IE-, and ML-based automated ES could successfully identify patients for clinical trials."
},
{
"pmid": "27927935",
"title": "Large-scale identification of patients with cerebral aneurysms using natural language processing.",
"abstract": "OBJECTIVE\nTo use natural language processing (NLP) in conjunction with the electronic medical record (EMR) to accurately identify patients with cerebral aneurysms and their matched controls.\n\n\nMETHODS\nICD-9 and Current Procedural Terminology codes were used to obtain an initial data mart of potential aneurysm patients from the EMR. NLP was then used to train a classification algorithm with .632 bootstrap cross-validation used for correction of overfitting bias. The classification rule was then applied to the full data mart. Additional validation was performed on 300 patients classified as having aneurysms. Controls were obtained by matching age, sex, race, and healthcare use.\n\n\nRESULTS\nWe identified 55,675 patients of 4.2 million patients with ICD-9 and Current Procedural Terminology codes consistent with cerebral aneurysms. Of those, 16,823 patients had the term aneurysm occur near relevant anatomic terms. After training, a final algorithm consisting of 8 coded and 14 NLP variables was selected, yielding an overall area under the receiver-operating characteristic curve of 0.95. After the final algorithm was applied, 5,589 patients were classified as having aneurysms, and 54,952 controls were matched to those patients. The positive predictive value based on a validation cohort of 300 patients was 0.86.\n\n\nCONCLUSIONS\nWe harnessed the power of the EMR by applying NLP to obtain a large cohort of patients with intracranial aneurysms and their matched controls. Such algorithms can be generalized to other diseases for epidemiologic and genetic studies."
},
{
"pmid": "29295172",
"title": "Aligned-Layer Text Search in Clinical Notes.",
"abstract": "Search techniques in clinical text need to make fine-grained semantic distinctions, since medical terms may be negated, about someone other than the patient, or at some time other than the present. While natural language processing (NLP) approaches address these fine-grained distinctions, a task like patient cohort identification from electronic health records (EHRs) simultaneously requires a much more coarse-grained combination of evidence from the text and structured data of each patient's health records. We thus introduce aligned-layer language models, a novel approach to information retrieval (IR) that incorporates the output of other NLP systems. We show that this framework is able to represent standard IR queries, formulate previously impossible multi-layered queries, and customize the desired degree of linguistic granularity."
},
{
"pmid": "26376462",
"title": "Textual inference for eligibility criteria resolution in clinical trials.",
"abstract": "Clinical trials are essential for determining whether new interventions are effective. In order to determine the eligibility of patients to enroll into these trials, clinical trial coordinators often perform a manual review of clinical notes in the electronic health record of patients. This is a very time-consuming and exhausting task. Efforts in this process can be expedited if these coordinators are directed toward specific parts of the text that are relevant for eligibility determination. In this study, we describe the creation of a dataset that can be used to evaluate automated methods capable of identifying sentences in a note that are relevant for screening a patient's eligibility in clinical trials. Using this dataset, we also present results for four simple methods in natural language processing that can be used to automate this task. We found that this is a challenging task (maximum F-score=26.25), but it is a promising direction for further research."
},
{
"pmid": "25451102",
"title": "Secondary use of electronic health records for building cohort studies through top-down information extraction.",
"abstract": "Controlled clinical trials are usually supported with an in-front data aggregation system, which supports the storage of relevant information according to the trial context within a highly structured environment. In contrast to the documentation of clinical trials, daily routine documentation has many characteristics that influence data quality. One such characteristic is the use of non-standardized text, which is an indispensable part of information representation in clinical information systems. Based on a cohort study we highlight challenges for mining electronic health records targeting free text entry fields within semi-structured data sources. Our prototypical information extraction system achieved an F-measure of 0.91 (precision=0.90, recall=0.93) for the training set and an F-measure of 0.90 (precision=0.89, recall=0.92) for the test set. We analyze the obtained results in detail and highlight challenges and future directions for the secondary use of routine data in general."
},
{
"pmid": "26171080",
"title": "Interactive Cohort Identification of Sleep Disorder Patients Using Natural Language Processing and i2b2.",
"abstract": "UNLABELLED\nNationwide Children's Hospital established an i2b2 (Informatics for Integrating Biology & the Bedside) application for sleep disorder cohort identification. Discrete data were gleaned from semistructured sleep study reports. The system showed to work more efficiently than the traditional manual chart review method, and it also enabled searching capabilities that were previously not possible.\n\n\nOBJECTIVE\nWe report on the development and implementation of the sleep disorder i2b2 cohort identification system using natural language processing of semi-structured documents.\n\n\nMETHODS\nWe developed a natural language processing approach to automatically parse concepts and their values from semi-structured sleep study documents. Two parsers were developed: a regular expression parser for extracting numeric concepts and a NLP based tree parser for extracting textual concepts. Concepts were further organized into i2b2 ontologies based on document structures and in-domain knowledge.\n\n\nRESULTS\n26,550 concepts were extracted with 99% being textual concepts. 1.01 million facts were extracted from sleep study documents such as demographic information, sleep study lab results, medications, procedures, diagnoses, among others. The average accuracy of terminology parsing was over 83% when comparing against those by experts. The system is capable of capturing both standard and non-standard terminologies. The time for cohort identification has been reduced significantly from a few weeks to a few seconds.\n\n\nCONCLUSION\nNatural language processing was shown to be powerful for quickly converting large amount of semi-structured or unstructured clinical data into discrete concepts, which in combination of intuitive domain specific ontologies, allows fast and effective interactive cohort identification through the i2b2 platform for research and clinical use."
},
{
"pmid": "28585184",
"title": "Text Mining of the Electronic Health Record: An Information Extraction Approach for Automated Identification and Subphenotyping of HFpEF Patients for Clinical Trials.",
"abstract": "Precision medicine requires clinical trials that are able to efficiently enroll subtypes of patients in whom targeted therapies can be tested. To reduce the large amount of time spent screening, identifying, and recruiting patients with specific subtypes of heterogeneous clinical syndromes (such as heart failure with preserved ejection fraction [HFpEF]), we need prescreening systems that are able to automate data extraction and decision-making tasks. However, a major obstacle is the vast amount of unstructured free-form text in medical records. Here we describe an information extraction-based approach that automatically converts unstructured text into structured data, which is cross-referenced against eligibility criteria using a rule-based system to determine which patients qualify for a major HFpEF clinical trial (PARAGON). We show that we can achieve a sensitivity and positive predictive value of 0.95 and 0.86, respectively. Our open-source algorithm could be used to efficiently identify and subphenotype patients with HFpEF and other disorders."
},
{
"pmid": "31197354",
"title": "Hybrid bag of approaches to characterize selection criteria for cohort identification.",
"abstract": "OBJECTIVE\nThe 2018 National NLP Clinical Challenge (2018 n2c2) focused on the task of cohort selection for clinical trials, where participating systems were tasked with analyzing longitudinal patient records to determine if the patients met or did not meet any of the 13 selection criteria. This article describes our participation in this shared task.\n\n\nMATERIALS AND METHODS\nWe followed a hybrid approach combining pattern-based, knowledge-intensive, and feature weighting techniques. After preprocessing the notes using publicly available natural language processing tools, we developed individual criterion-specific components that relied on collecting knowledge resources relevant for these criteria and pattern-based and weighting approaches to identify \"met\" and \"not met\" cases.\n\n\nRESULTS\nAs part of the 2018 n2c2 challenge, 3 runs were submitted. The overall micro-averaged F1 on the training set was 0.9444. On the test set, the micro-averaged F1 for the 3 submitted runs were 0.9075, 0.9065, and 0.9056. The best run was placed second in the overall challenge and all 3 runs were statistically similar to the top-ranked system. A reimplemented system achieved the best overall F1 of 0.9111 on the test set.\n\n\nDISCUSSION\nWe highlight the need for a focused resource-intensive effort to address the class imbalance in the cohort selection identification task.\n\n\nCONCLUSION\nOur hybrid approach was able to identify all selection criteria with high F1 performance on both training and test sets. Based on our participation in the 2018 n2c2 task, we conclude that there is merit in continuing a focused criterion-specific analysis and developing appropriate knowledge resources to build a quality cohort selection system."
},
{
"pmid": "23929403",
"title": "Validation of Case Finding Algorithms for Hepatocellular Cancer From Administrative Data and Electronic Health Records Using Natural Language Processing.",
"abstract": "BACKGROUND\nAccurate identification of hepatocellular cancer (HCC) cases from automated data is needed for efficient and valid quality improvement initiatives and research. We validated HCC International Classification of Diseases, 9th Revision (ICD-9) codes, and evaluated whether natural language processing by the Automated Retrieval Console (ARC) for document classification improves HCC identification.\n\n\nMETHODS\nWe identified a cohort of patients with ICD-9 codes for HCC during 2005-2010 from Veterans Affairs administrative data. Pathology and radiology reports were reviewed to confirm HCC. The positive predictive value (PPV), sensitivity, and specificity of ICD-9 codes were calculated. A split validation study of pathology and radiology reports was performed to develop and validate ARC algorithms. Reports were manually classified as diagnostic of HCC or not. ARC generated document classification algorithms using the Clinical Text Analysis and Knowledge Extraction System. ARC performance was compared with manual classification. PPV, sensitivity, and specificity of ARC were calculated.\n\n\nRESULTS\nA total of 1138 patients with HCC were identified by ICD-9 codes. On the basis of manual review, 773 had HCC. The HCC ICD-9 code algorithm had a PPV of 0.67, sensitivity of 0.95, and specificity of 0.93. For a random subset of 619 patients, we identified 471 pathology reports for 323 patients and 943 radiology reports for 557 patients. The pathology ARC algorithm had PPV of 0.96, sensitivity of 0.96, and specificity of 0.97. The radiology ARC algorithm had PPV of 0.75, sensitivity of 0.94, and specificity of 0.68.\n\n\nCONCLUSIONS\nA combined approach of ICD-9 codes and natural language processing of pathology and radiology reports improves HCC case identification in automated data."
},
{
"pmid": "15838413",
"title": "Accuracy of ICD-9-CM codes for identifying cardiovascular and stroke risk factors.",
"abstract": "OBJECTIVES\nWe sought to determine which ICD-9-CM codes in Medicare Part A data identify cardiovascular and stroke risk factors.\n\n\nDESIGN AND PARTICIPANTS\nThis was a cross-sectional study comparing ICD-9-CM data to structured medical record review from 23,657 Medicare beneficiaries aged 20 to 105 years who had atrial fibrillation.\n\n\nMEASUREMENTS\nQuality improvement organizations used standardized abstraction instruments to determine the presence of 9 cardiovascular and stroke risk factors. Using the chart abstractions as the gold standard, we assessed the accuracy of ICD-9-CM codes to identify these risk factors.\n\n\nMAIN RESULTS\nICD-9-CM codes for all risk factors had high specificity (>0.95) and low sensitivity (< or =0.76). The positive predictive values were greater than 0.95 for 5 common, chronic risk factors-coronary artery disease, stroke/transient ischemic attack, heart failure, diabetes, and hypertension. The sixth common risk factor, valvular heart disease, had a positive predictive value of 0.93. For all 6 common risk factors, negative predictive values ranged from 0.52 to 0.91. The rare risk factors-arterial peripheral embolus, intracranial hemorrhage, and deep venous thrombosis-had high negative predictive value (> or =0.98) but moderate positive predictive values (range, 0.54-0.77) in this population.\n\n\nCONCLUSIONS\nUsing ICD-9-CM codes alone, heart failure, coronary artery disease, diabetes, hypertension, and stroke can be ruled in but not necessarily ruled out. Where feasible, review of additional data (eg, physician notes or imaging studies) should be used to confirm the diagnosis of valvular disease, arterial peripheral embolus, intracranial hemorrhage, and deep venous thrombosis."
},
{
"pmid": "21347124",
"title": "Application of Natural Language Processing to VA Electronic Health Records to Identify Phenotypic Characteristics for Clinical and Research Purposes.",
"abstract": "Informatics tools to extract and analyze clinical information on patients have lagged behind data-mining developments in bioinformatics. While the analyses of an individual's partial or complete genotype is nearly a reality, the phenotypic characteristics that accompany the genotype are not well known and largely inaccessible in free-text patient health records. As the adoption of electronic medical records increases, there exists an urgent need to extract pertinent phenotypic information and make that available to clinicians and researchers. This usually requires the data to be in a structured format that is both searchable and amenable to computation. Using inflammatory bowel disease as an example, this study demonstrates the utility of a natural language processing system (MedLEE) in mining clinical notes in the paperless VA Health Care System. This adaptation of MedLEE is useful for identifying patients with specific clinical conditions, those at risk for or those with symptoms suggestive of those conditions."
},
{
"pmid": "24664671",
"title": "Using natural language processing and machine learning to identify gout flares from electronic clinical notes.",
"abstract": "OBJECTIVE\nGout flares are not well documented by diagnosis codes, making it difficult to conduct accurate database studies. We implemented a computer-based method to automatically identify gout flares using natural language processing (NLP) and machine learning (ML) from electronic clinical notes.\n\n\nMETHODS\nOf 16,519 patients, 1,264 and 1,192 clinical notes from 2 separate sets of 100 patients were selected as the training and evaluation data sets, respectively, which were reviewed by rheumatologists. We created separate NLP searches to capture different aspects of gout flares. For each note, the NLP search outputs became the ML system inputs, which provided the final classification decisions. The note-level classifications were grouped into patient-level gout flares. Our NLP+ML results were validated using a gold standard data set and compared with the claims-based method used by prior literatures.\n\n\nRESULTS\nFor 16,519 patients with a diagnosis of gout and a prescription for a urate-lowering therapy, we identified 18,869 clinical notes as gout flare positive (sensitivity 82.1%, specificity 91.5%): 1,402 patients with ≥3 flares (sensitivity 93.5%, specificity 84.6%), 5,954 with 1 or 2 flares, and 9,163 with no flare (sensitivity 98.5%, specificity 96.4%). Our method identified more flare cases (18,869 versus 7,861) and patients with ≥3 flares (1,402 versus 516) when compared to the claims-based method.\n\n\nCONCLUSION\nWe developed a computer-based method (NLP and ML) to identify gout flares from the clinical notes. Our method was validated as an accurate tool for identifying gout flares with higher sensitivity and specificity compared to previous studies."
},
{
"pmid": "18999285",
"title": "Comparing ICD9-encoded diagnoses and NLP-processed discharge summaries for clinical trials pre-screening: a case study.",
"abstract": "The prevalence of electronic medical record (EMR) systems has made mass-screening for clinical trials viable through secondary uses of clinical data, which often exist in both structured and free text formats. The tradeoffs of using information in either data format for clinical trials screening are understudied. This paper compares the results of clinical trial eligibility queries over ICD9-encoded diagnoses and NLP-processed textual discharge summaries. The strengths and weaknesses of both data sources are summarized along the following dimensions: information completeness, expressiveness, code granularity, and accuracy of temporal information. We conclude that NLP-processed patient reports supplement important information for eligibility screening and should be used in combination with structured data."
},
{
"pmid": "21346976",
"title": "Comparing methods for identifying pancreatic cancer patients using electronic data sources.",
"abstract": "We sought to determine the accuracy of two electronic methods of identifying pancreatic cancer in a cohort of pancreatic cyst patients, and to examine the reasons for identification failure. We used the International Classification of Diseases, 9(th) Edition (ICD-9) codes and natural language processing (NLP) technology to identify pancreatic cancer in these patients. We compared both methods to a human-validated gold-standard surgical database. Both ICD-9 codes and NLP technology achieved high sensitivity for identifying pancreatic cancer, but the ICD-9 code method achieved markedly lower specificity and PPV compared to the NLP method. The NLP method required only slightly greater expenditures of time and effort compared to the ICD-9 code method. We identified several variables influencing the accuracy of ICD-9 codes to identify cancer patients including: the identification algorithm, kind of cancer to be identified, presence of other conditions similar to cancer, and presence of conditions that are precancerous."
},
{
"pmid": "22195222",
"title": "Extracting and integrating data from entire electronic health records for detecting colorectal cancer cases.",
"abstract": "Identification of a cohort of patients with specific diseases is an important step for clinical research that is based on electronic health records (EHRs). Informatics approaches combining structured EHR data, such as billing records, with narrative text data have demonstrated utility for such tasks. This paper describes an algorithm combining machine learning and natural language processing to detect patients with colorectal cancer (CRC) from entire EHRs at Vanderbilt University Hospital. We developed a general case detection method that consists of two steps: 1) extraction of positive CRC concepts from all clinical notes (document-level concept identification); and 2) determination of CRC cases using aggregated information from both clinical narratives and structured billing data (patient-level case determination). For each step, we compared performance of rule-based and machine-learning-based approaches. Using a manually reviewed data set containing 300 possible CRC patients (150 for training and 150 for testing), we showed that our method achieved F-measures of 0.996 for document level concept identification, and 0.93 for patient level case detection."
},
{
"pmid": "22627647",
"title": "Automated identification of patients with pulmonary nodules in an integrated health system using administrative health plan data, radiology reports, and natural language processing.",
"abstract": "INTRODUCTION\nLung nodules are commonly encountered in clinical practice, yet little is known about their management in community settings. An automated method for identifying patients with lung nodules would greatly facilitate research in this area.\n\n\nMETHODS\nUsing members of a large, community-based health plan from 2006 to 2010, we developed a method to identify patients with lung nodules, by combining five diagnostic codes, four procedural codes, and a natural language processing algorithm that performed free text searches of radiology transcripts. An experienced pulmonologist reviewed a random sample of 116 radiology transcripts, providing a reference standard for the natural language processing algorithm.\n\n\nRESULTS\nWith the use of an automated method, we identified 7112 unique members as having one or more incident lung nodules. The mean age of the patients was 65 years (standard deviation 14 years). There were slightly more women (54%) than men, and Hispanics and non-whites comprised 45% of the lung nodule cohort. Thirty-six percent were never smokers whereas 11% were current smokers. Fourteen percent of the patients were subsequently diagnosed with lung cancer. The sensitivity and specificity of the natural language processing algorithm for identifying the presence of lung nodules were 96% and 86%, respectively, compared with clinician review. Among the true positive transcripts in the validation sample, only 35% were solitary and unaccompanied by one or more associated findings, and 56% measured 8 to 30 mm in diameter.\n\n\nCONCLUSIONS\nA combination of diagnostic codes, procedural codes, and a natural language processing algorithm for free text searching of radiology reports can accurately and efficiently identify patients with incident lung nodules, many of whom are subsequently diagnosed with lung cancer."
},
{
"pmid": "26195183",
"title": "A Robust e-Epidemiology Tool in Phenotyping Heart Failure with Differentiation for Preserved and Reduced Ejection Fraction: the Electronic Medical Records and Genomics (eMERGE) Network.",
"abstract": "Identifying populations of heart failure (HF) patients is paramount to research efforts aimed at developing strategies to effectively reduce the burden of this disease. The use of electronic medical record (EMR) data for this purpose is challenging given the syndromic nature of HF and the need to distinguish HF with preserved or reduced ejection fraction. Using a gold standard cohort of manually abstracted cases, an EMR-driven phenotype algorithm based on structured and unstructured data was developed to identify all the cases. The resulting algorithm was executed in two cohorts from the Electronic Medical Records and Genomics (eMERGE) Network with a positive predictive value of >95 %. The algorithm was expanded to include three hierarchical definitions of HF (i.e., definite, probable, possible) based on the degree of confidence of the classification to capture HF cases in a whole population whereby increasing the algorithm utility for use in e-Epidemiologic research."
},
{
"pmid": "26537487",
"title": "Development and Validation of an Algorithm to Identify Nonalcoholic Fatty Liver Disease in the Electronic Medical Record.",
"abstract": "BACKGROUND AND AIMS\nNonalcoholic fatty liver disease (NAFLD) is the most common cause of chronic liver disease worldwide. Risk factors for NAFLD disease progression and liver-related outcomes remain incompletely understood due to the lack of computational identification methods. The present study sought to design a classification algorithm for NAFLD within the electronic medical record (EMR) for the development of large-scale longitudinal cohorts.\n\n\nMETHODS\nWe implemented feature selection using logistic regression with adaptive LASSO. A training set of 620 patients was randomly selected from the Research Patient Data Registry at Partners Healthcare. To assess a true diagnosis for NAFLD we performed chart reviews and considered either a documentation of a biopsy or a clinical diagnosis of NAFLD. We included in our model variables laboratory measurements, diagnosis codes, and concepts extracted from medical notes. Variables with P < 0.05 were included in the multivariable analysis.\n\n\nRESULTS\nThe NAFLD classification algorithm included number of natural language mentions of NAFLD in the EMR, lifetime number of ICD-9 codes for NAFLD, and triglyceride level. This classification algorithm was superior to an algorithm using ICD-9 data alone with AUC of 0.85 versus 0.75 (P < 0.0001) and leads to the creation of a new independent cohort of 8458 individuals with a high probability for NAFLD.\n\n\nCONCLUSIONS\nThe NAFLD classification algorithm is superior to ICD-9 billing data alone. This approach is simple to develop, deploy, and can be applied across different institutions to create EMR-based cohorts of individuals with NAFLD."
},
{
"pmid": "28269938",
"title": "Multi-modal Patient Cohort Identification from EEG Report and Signal Data.",
"abstract": "Clinical electroencephalography (EEG) is the most important investigation in the diagnosis and management of epilepsies. An EEG records the electrical activity along the scalp and measures spontaneous electrical activity of the brain. Because the EEG signal is complex, its interpretation is known to produce moderate inter-observer agreement among neurologists. This problem can be addressed by providing clinical experts with the ability to automatically retrieve similar EEG signals and EEG reports through a patient cohort retrieval system operating on a vast archive of EEG data. In this paper, we present a multi-modal EEG patient cohort retrieval system called MERCuRY which leverages the heterogeneous nature of EEG data by processing both the clinical narratives from EEG reports as well as the raw electrode potentials derived from the recorded EEG signal data. At the core of MERCuRY is a novel multimodal clinical indexing scheme which relies on EEG data representations obtained through deep learning. The index is used by two clinical relevance models that we have generated for identifying patient cohorts satisfying the inclusion and exclusion criteria expressed in natural language queries. Evaluations of the MERCuRY system measured the relevance of the patient cohorts, obtaining MAP scores of 69.87% and a NDCG of 83.21%."
},
{
"pmid": "7719797",
"title": "A general natural-language text processor for clinical radiology.",
"abstract": "OBJECTIVE\nDevelopment of a general natural-language processor that identifies clinical information in narrative reports and maps that information into a structured representation containing clinical terms.\n\n\nDESIGN\nThe natural-language processor provides three phases of processing, all of which are driven by different knowledge sources. The first phase performs the parsing. It identifies the structure of the text through use of a grammar that defines semantic patterns and a target form. The second phase, regularization, standardizes the terms in the initial target structure via a compositional mapping of multi-word phrases. The third phase, encoding, maps the terms to a controlled vocabulary. Radiology is the test domain for the processor and the target structure is a formal model for representing clinical information in that domain.\n\n\nMEASUREMENTS\nThe impression sections of 230 radiology reports were encoded by the processor. Results of an automated query of the resultant database for the occurrences of four diseases were compared with the analysis of a panel of three physicians to determine recall and precision.\n\n\nRESULTS\nWithout training specific to the four diseases, recall and precision of the system (combined effect of the processor and query generator) were 70% and 87%. Training of the query component increased recall to 85% without changing precision."
},
{
"pmid": "20819853",
"title": "Mayo clinical Text Analysis and Knowledge Extraction System (cTAKES): architecture, component evaluation and applications.",
"abstract": "We aim to build and evaluate an open-source natural language processing system for information extraction from electronic medical record clinical free-text. We describe and evaluate our system, the clinical Text Analysis and Knowledge Extraction System (cTAKES), released open-source at http://www.ohnlp.org. The cTAKES builds on existing open-source technologies-the Unstructured Information Management Architecture framework and OpenNLP natural language processing toolkit. Its components, specifically trained for the clinical domain, create rich linguistic and semantic annotations. Performance of individual components: sentence boundary detector accuracy=0.949; tokenizer accuracy=0.949; part-of-speech tagger accuracy=0.936; shallow parser F-score=0.924; named entity recognizer and system-level evaluation F-score=0.715 for exact and 0.824 for overlapping spans, and accuracy for concept mapping, negation, and status attributes for exact and overlapping spans of 0.957, 0.943, 0.859, and 0.580, 0.939, and 0.839, respectively. Overall performance is discussed against five applications. The cTAKES annotations are the foundation for methods and modules for higher-level semantic processing of clinical free-text."
},
{
"pmid": "24190931",
"title": "Normalization and standardization of electronic health records for high-throughput phenotyping: the SHARPn consortium.",
"abstract": "RESEARCH OBJECTIVE\nTo develop scalable informatics infrastructure for normalization of both structured and unstructured electronic health record (EHR) data into a unified, concept-based model for high-throughput phenotype extraction.\n\n\nMATERIALS AND METHODS\nSoftware tools and applications were developed to extract information from EHRs. Representative and convenience samples of both structured and unstructured data from two EHR systems-Mayo Clinic and Intermountain Healthcare-were used for development and validation. Extracted information was standardized and normalized to meaningful use (MU) conformant terminology and value set standards using Clinical Element Models (CEMs). These resources were used to demonstrate semi-automatic execution of MU clinical-quality measures modeled using the Quality Data Model (QDM) and an open-source rules engine.\n\n\nRESULTS\nUsing CEMs and open-source natural language processing and terminology services engines-namely, Apache clinical Text Analysis and Knowledge Extraction System (cTAKES) and Common Terminology Services (CTS2)-we developed a data-normalization platform that ensures data security, end-to-end connectivity, and reliable data flow within and across institutions. We demonstrated the applicability of this platform by executing a QDM-based MU quality measure that determines the percentage of patients between 18 and 75 years with diabetes whose most recent low-density lipoprotein cholesterol test result during the measurement year was <100 mg/dL on a randomly selected cohort of 273 Mayo Clinic patients. The platform identified 21 and 18 patients for the denominator and numerator of the quality measure, respectively. Validation results indicate that all identified patients meet the QDM-based criteria.\n\n\nCONCLUSIONS\nEnd-to-end automated systems for extracting clinical information from diverse EHR systems require extensive use of standardized vocabularies and terminologies, as well as robust information models for storing, discovering, and processing that information. This study demonstrates the application of modular and open-source resources for enabling secondary use of EHR data through normalization into standards-based, comparable, and consistent format for high-throughput phenotyping to identify patient cohorts."
},
{
"pmid": "24125142",
"title": "Automated chart review for asthma cohort identification using natural language processing: an exploratory study.",
"abstract": "BACKGROUND\nA significant proportion of children with asthma have delayed diagnosis of asthma by health care providers. Manual chart review according to established criteria is more accurate than directly using diagnosis codes, which tend to under-identify asthmatics, but chart reviews are more costly and less timely.\n\n\nOBJECTIVE\nTo evaluate the accuracy of a computational approach to asthma ascertainment, characterizing its utility and feasibility toward large-scale deployment in electronic medical records.\n\n\nMETHODS\nA natural language processing (NLP) system was developed for extracting predetermined criteria for asthma from unstructured text in electronic medical records and then inferring asthma status based on these criteria. Using manual chart reviews as a gold standard, asthma status (yes vs no) and identification date (first date of a \"yes\" asthma status) were determined by the NLP system.\n\n\nRESULTS\nPatients were a group of children (n = 112, 84% Caucasian, 49% girls) younger than 4 years (mean 2.0 years, standard deviation 1.03 years) who participated in previous studies. The NLP approach to asthma ascertainment showed sensitivity, specificity, positive predictive value, negative predictive value, and median delay in diagnosis of 84.6%, 96.5%, 88.0%, 95.4%, and 0 months, respectively; this compared favorably with diagnosis codes, at 30.8%, 93.2%, 57.1%, 82.2%, and 2.3 months, respectively.\n\n\nCONCLUSION\nAutomated asthma ascertainment from electronic medical records using NLP is feasible and more accurate than traditional approaches such as diagnosis codes. Considering the difficulty of labor-intensive manual record review, NLP approaches for asthma ascertainment should be considered for improving clinical care and research, especially in large-scale efforts."
},
{
"pmid": "23304375",
"title": "A comparative study of current Clinical Natural Language Processing systems on handling abbreviations in discharge summaries.",
"abstract": "Clinical Natural Language Processing (NLP) systems extract clinical information from narrative clinical texts in many settings. Previous research mentions the challenges of handling abbreviations in clinical texts, but provides little insight into how well current NLP systems correctly recognize and interpret abbreviations. In this paper, we compared performance of three existing clinical NLP systems in handling abbreviations: MetaMap, MedLEE, and cTAKES. The evaluation used an expert-annotated gold standard set of clinical documents (derived from from 32 de-identified patient discharge summaries) containing 1,112 abbreviations. The existing NLP systems achieved suboptimal performance in abbreviation identification, with F-scores ranging from 0.165 to 0.601. MedLEE achieved the best F-score of 0.601 for all abbreviations and 0.705 for clinically relevant abbreviations. This study suggested that accurate identification of clinical abbreviations is a challenging task and that more advanced abbreviation recognition modules might improve existing clinical NLP systems."
},
{
"pmid": "31300825",
"title": "Clinical trial cohort selection based on multi-level rule-based natural language processing system.",
"abstract": "OBJECTIVE\nIdentifying patients who meet selection criteria for clinical trials is typically challenging and time-consuming. In this article, we describe our clinical natural language processing (NLP) system to automatically assess patients' eligibility based on their longitudinal medical records. This work was part of the 2018 National NLP Clinical Challenges (n2c2) Shared-Task and Workshop on Cohort Selection for Clinical Trials.\n\n\nMATERIALS AND METHODS\nThe authors developed an integrated rule-based clinical NLP system which employs a generic rule-based framework plugged in with lexical-, syntactic- and meta-level, task-specific knowledge inputs. In addition, the authors also implemented and evaluated a general clinical NLP (cNLP) system which is built with the Unified Medical Language System and Unstructured Information Management Architecture.\n\n\nRESULTS AND DISCUSSION\nThe systems were evaluated as part of the 2018 n2c2-1 challenge, and authors' rule-based system obtained an F-measure of 0.9028, ranking fourth at the challenge and had less than 1% difference from the best system. While the general cNLP system didn't achieve performance as good as the rule-based system, it did establish its own advantages and potential in extracting clinical concepts.\n\n\nCONCLUSION\nOur results indicate that a well-designed rule-based clinical NLP system is capable of achieving good performance on cohort selection even with a small training data set. In addition, the investigation of a Unified Medical Language System-based general cNLP system suggests that a hybrid system combining these 2 approaches is promising to surpass the state-of-the-art performance."
},
{
"pmid": "15797003",
"title": "Prospective recruitment of patients with congestive heart failure using an ad-hoc binary classifier.",
"abstract": "This paper addresses a very specific problem of identifying patients diagnosed with a specific condition for potential recruitment in a clinical trial or an epidemiological study. We present a simple machine learning method for identifying patients diagnosed with congestive heart failure and other related conditions by automatically classifying clinical notes dictated at Mayo Clinic. This method relies on an automatic classifier trained on comparable amounts of positive and negative samples of clinical notes previously categorized by human experts. The documents are represented as feature vectors, where features are a mix of demographic information as well as single words and concept mappings to MeSH and HICDA classification systems. We compare two simple and efficient classification algorithms (Naïve Bayes and Perceptron) and a baseline term spotting method with respect to their accuracy and recall on positive samples. Depending on the test set, we find that Naïve Bayes yields better recall on positive samples (95 vs. 86%) but worse accuracy than Perceptron (57 vs. 65%). Both algorithms perform better than the baseline with recall on positive samples of 71% and accuracy of 54%."
},
{
"pmid": "31305921",
"title": "Cohort selection for clinical trials using hierarchical neural network.",
"abstract": "OBJECTIVE\nCohort selection for clinical trials is a key step for clinical research. We proposed a hierarchical neural network to determine whether a patient satisfied selection criteria or not.\n\n\nMATERIALS AND METHODS\nWe designed a hierarchical neural network (denoted as CNN-Highway-LSTM or LSTM-Highway-LSTM) for the track 1 of the national natural language processing (NLP) clinical challenge (n2c2) on cohort selection for clinical trials in 2018. The neural network is composed of 5 components: (1) sentence representation using convolutional neural network (CNN) or long short-term memory (LSTM) network; (2) a highway network to adjust information flow; (3) a self-attention neural network to reweight sentences; (4) document representation using LSTM, which takes sentence representations in chronological order as input; (5) a fully connected neural network to determine whether each criterion is met or not. We compared the proposed method with its variants, including the methods only using the first component to represent documents directly and the fully connected neural network for classification (denoted as CNN-only or LSTM-only) and the methods without using the highway network (denoted as CNN-LSTM or LSTM-LSTM). The performance of all methods was measured by micro-averaged precision, recall, and F1 score.\n\n\nRESULTS\nThe micro-averaged F1 scores of CNN-only, LSTM-only, CNN-LSTM, LSTM-LSTM, CNN-Highway-LSTM, and LSTM-Highway-LSTM were 85.24%, 84.25%, 87.27%, 88.68%, 88.48%, and 90.21%, respectively. The highest micro-averaged F1 score is higher than our submitted 1 of 88.55%, which is 1 of the top-ranked results in the challenge. The results indicate that the proposed method is effective for cohort selection for clinical trials.\n\n\nDISCUSSION\nAlthough the proposed method achieved promising results, some mistakes were caused by word ambiguity, negation, number analysis and incomplete dictionary. Moreover, imbalanced data was another challenge that needs to be tackled in the future.\n\n\nCONCLUSION\nIn this article, we proposed a hierarchical neural network for cohort selection. Experimental results show that this method is good at selecting cohort."
},
{
"pmid": "29450781",
"title": "Measuring Use of Evidence Based Psychotherapy for Posttraumatic Stress Disorder in a Large National Healthcare System.",
"abstract": "To derive a method of identifying use of evidence-based psychotherapy (EBP) for post-traumatic stress disorder (PTSD), we used clinical note text from national Veterans Health Administration (VHA) medical records. Using natural language processing, we developed machine-learning algorithms to classify note text on a large scale in an observational study of Iraq and Afghanistan veterans with PTSD and one post-deployment psychotherapy visit by 8/5/15 (N = 255,968). PTSD visits were linked to 8.1 million psychotherapy notes. Annotators labeled 3467 randomly-selected psychotherapy notes (kappa = 0.88) to indicate receipt of EBP. We met our performance targets of overall classification accuracy (0.92); 20.2% of veterans received ≥ one session of EBP over the study period. Our method can assist with identifying EBP use and studying EBP-associated outcomes in routine clinical practice."
},
{
"pmid": "26241355",
"title": "A systematic comparison of feature space effects on disease classifier performance for phenotype identification of five diseases.",
"abstract": "Automated phenotype identification plays a critical role in cohort selection and bioinformatics data mining. Natural Language Processing (NLP)-informed classification techniques can robustly identify phenotypes in unstructured medical notes. In this paper, we systematically assess the effect of naive, lexically normalized, and semantic feature spaces on classifier performance for obesity, atherosclerotic cardiovascular disease (CAD), hyperlipidemia, hypertension, and diabetes. We train support vector machines (SVMs) using individual feature spaces as well as combinations of these feature spaces on two small training corpora (730 and 790 documents) and a combined (1520 documents) training corpus. We assess the importance of feature spaces and training data size on SVM model performance. We show that inclusion of semantically-informed features does not statistically improve performance for these models. The addition of training data has weak effects of mixed statistical significance across disease classes suggesting larger corpora are not necessary to achieve relatively high performance with these models."
},
{
"pmid": "25881112",
"title": "Increasing the efficiency of trial-patient matching: automated clinical trial eligibility pre-screening for pediatric oncology patients.",
"abstract": "BACKGROUND\nManual eligibility screening (ES) for a clinical trial typically requires a labor-intensive review of patient records that utilizes many resources. Leveraging state-of-the-art natural language processing (NLP) and information extraction (IE) technologies, we sought to improve the efficiency of physician decision-making in clinical trial enrollment. In order to markedly reduce the pool of potential candidates for staff screening, we developed an automated ES algorithm to identify patients who meet core eligibility characteristics of an oncology clinical trial.\n\n\nMETHODS\nWe collected narrative eligibility criteria from ClinicalTrials.gov for 55 clinical trials actively enrolling oncology patients in our institution between 12/01/2009 and 10/31/2011. In parallel, our ES algorithm extracted clinical and demographic information from the Electronic Health Record (EHR) data fields to represent profiles of all 215 oncology patients admitted to cancer treatment during the same period. The automated ES algorithm then matched the trial criteria with the patient profiles to identify potential trial-patient matches. Matching performance was validated on a reference set of 169 historical trial-patient enrollment decisions, and workload, precision, recall, negative predictive value (NPV) and specificity were calculated.\n\n\nRESULTS\nWithout automation, an oncologist would need to review 163 patients per trial on average to replicate the historical patient enrollment for each trial. This workload is reduced by 85% to 24 patients when using automated ES (precision/recall/NPV/specificity: 12.6%/100.0%/100.0%/89.9%). Without automation, an oncologist would need to review 42 trials per patient on average to replicate the patient-trial matches that occur in the retrospective data set. With automated ES this workload is reduced by 90% to four trials (precision/recall/NPV/specificity: 35.7%/100.0%/100.0%/95.5%).\n\n\nCONCLUSION\nBy leveraging NLP and IE technologies, automated ES could dramatically increase the trial screening efficiency of oncologists and enable participation of small practices, which are often left out from trial enrollment. The algorithm has the potential to significantly reduce the effort to execute clinical research at a point in time when new initiatives of the cancer care community intend to greatly expand both the access to trials and the number of available trials."
},
{
"pmid": "31342909",
"title": "A Real-Time Automated Patient Screening System for Clinical Trials Eligibility in an Emergency Department: Design and Evaluation.",
"abstract": "BACKGROUND\nOne critical hurdle for clinical trial recruitment is the lack of an efficient method for identifying subjects who meet the eligibility criteria. Given the large volume of data documented in electronic health records (EHRs), it is labor-intensive for the staff to screen relevant information, particularly within the time frame needed. To facilitate subject identification, we developed a natural language processing (NLP) and machine learning-based system, Automated Clinical Trial Eligibility Screener (ACTES), which analyzes structured data and unstructured narratives automatically to determine patients' suitability for clinical trial enrollment. In this study, we integrated the ACTES into clinical practice to support real-time patient screening.\n\n\nOBJECTIVE\nThis study aimed to evaluate ACTES's impact on the institutional workflow, prospectively and comprehensively. We hypothesized that compared with the manual screening process, using EHR-based automated screening would improve efficiency of patient identification, streamline patient recruitment workflow, and increase enrollment in clinical trials.\n\n\nMETHODS\nThe ACTES was fully integrated into the clinical research coordinators' (CRC) workflow in the pediatric emergency department (ED) at Cincinnati Children's Hospital Medical Center. The system continuously analyzed EHR information for current ED patients and recommended potential candidates for clinical trials. Relevant patient eligibility information was presented in real time on a dashboard available to CRCs to facilitate their recruitment. To assess the system's effectiveness, we performed a multidimensional, prospective evaluation for a 12-month period, including a time-and-motion study, quantitative assessments of enrollment, and postevaluation usability surveys collected from the CRCs.\n\n\nRESULTS\nCompared with manual screening, the use of ACTES reduced the patient screening time by 34% (P<.001). The saved time was redirected to other activities such as study-related administrative tasks (P=.03) and work-related conversations (P=.006) that streamlined teamwork among the CRCs. The quantitative assessments showed that automated screening improved the numbers of subjects screened, approached, and enrolled by 14.7%, 11.1%, and 11.1%, respectively, suggesting the potential of ACTES in streamlining recruitment workflow. Finally, the ACTES achieved a system usability scale of 80.0 in the postevaluation surveys, suggesting that it was a good computerized solution.\n\n\nCONCLUSIONS\nBy leveraging NLP and machine learning technologies, the ACTES demonstrated good capacity for improving efficiency of patient identification. The quantitative assessments demonstrated the potential of ACTES in streamlining recruitment workflow and improving patient enrollment. The postevaluation surveys suggested that the system was a good computerized solution with satisfactory usability."
},
{
"pmid": "26433122",
"title": "Creation of a new longitudinal corpus of clinical narratives.",
"abstract": "The 2014 i2b2/UTHealth Natural Language Processing (NLP) shared task featured a new longitudinal corpus of 1304 records representing 296 diabetic patients. The corpus contains three cohorts: patients who have a diagnosis of coronary artery disease (CAD) in their first record, and continue to have it in subsequent records; patients who do not have a diagnosis of CAD in the first record, but develop it by the last record; patients who do not have a diagnosis of CAD in any record. This paper details the process used to select records for this corpus and provides an overview of novel research uses for this corpus. This corpus is the only annotated corpus of longitudinal clinical narratives currently available for research to the general research community."
},
{
"pmid": "26210362",
"title": "Identifying risk factors for heart disease over time: Overview of 2014 i2b2/UTHealth shared task Track 2.",
"abstract": "The second track of the 2014 i2b2/UTHealth natural language processing shared task focused on identifying medical risk factors related to Coronary Artery Disease (CAD) in the narratives of longitudinal medical records of diabetic patients. The risk factors included hypertension, hyperlipidemia, obesity, smoking status, and family history, as well as diabetes and CAD, and indicators that suggest the presence of those diseases. In addition to identifying the risk factors, this track of the 2014 i2b2/UTHealth shared task studied the presence and progression of the risk factors in longitudinal medical records. Twenty teams participated in this track, and submitted 49 system runs for evaluation. Six of the top 10 teams achieved F1 scores over 0.90, and all 10 scored over 0.87. The most successful system used a combination of additional annotations, external lexicons, hand-written rules and Support Vector Machines. The results of this track indicate that identification of risk factors and their progression over time is well within the reach of automated systems."
},
{
"pmid": "27570656",
"title": "A Quantitative and Qualitative Evaluation of Sentence Boundary Detection for the Clinical Domain.",
"abstract": "Sentence boundary detection (SBD) is a critical preprocessing task for many natural language processing (NLP) applications. However, there has been little work on evaluating how well existing methods for SBD perform in the clinical domain. We evaluate five popular off-the-shelf NLP toolkits on the task of SBD in various kinds of text using a diverse set of corpora, including the GENIA corpus of biomedical abstracts, a corpus of clinical notes used in the 2010 i2b2 shared task, and two general-domain corpora (the British National Corpus and Switchboard). We find that, with the exception of the cTAKES system, the toolkits we evaluate perform noticeably worse on clinical text than on general-domain text. We identify and discuss major classes of errors, and suggest directions for future work to improve SBD methods in the clinical domain. We also make the code used for SBD evaluation in this paper available for download at http://github.com/drgriffis/SBD-Evaluation."
},
{
"pmid": "12123149",
"title": "A simple algorithm for identifying negated findings and diseases in discharge summaries.",
"abstract": "Narrative reports in medical records contain a wealth of information that may augment structured data for managing patient information and predicting trends in diseases. Pertinent negatives are evident in text but are not usually indexed in structured databases. The objective of the study reported here was to test a simple algorithm for determining whether a finding or disease mentioned within narrative medical reports is present or absent. We developed a simple regular expression algorithm called NegEx that implements several phrases indicating negation, filters out sentences containing phrases that falsely appear to be negation phrases, and limits the scope of the negation phrases. We compared NegEx against a baseline algorithm that has a limited set of negation phrases and a simpler notion of scope. In a test of 1235 findings and diseases in 1000 sentences taken from discharge summaries indexed by physicians, NegEx had a specificity of 94.5% (versus 85.3% for the baseline), a positive predictive value of 84.5% (versus 68.4% for the baseline) while maintaining a reasonable sensitivity of 77.8% (versus 88.3% for the baseline). We conclude that with little implementation effort a simple regular expression algorithm for determining whether a finding or disease is absent can identify a large portion of the pertinent negatives from discharge summaries."
},
{
"pmid": "20819858",
"title": "Medication information extraction with linguistic pattern matching and semantic rules.",
"abstract": "OBJECTIVE\nThis study presents a system developed for the 2009 i2b2 Challenge in Natural Language Processing for Clinical Data, whose aim was to automatically extract certain information about medications used by a patient from his/her medical report. The aim was to extract the following information for each medication: name, dosage, mode/route, frequency, duration and reason.\n\n\nDESIGN\nThe system implements a rule-based methodology, which exploits typical morphological, lexical, syntactic and semantic features of the targeted information. These features were acquired from the training dataset and public resources such as the UMLS and relevant web pages. Information extracted by pattern matching was combined together using context-sensitive heuristic rules.\n\n\nMEASUREMENTS\nThe system was applied to a set of 547 previously unseen discharge summaries, and the extracted information was evaluated against a manually prepared gold standard consisting of 251 documents. The overall ranking of the participating teams was obtained using the micro-averaged F-measure as the primary evaluation metric.\n\n\nRESULTS\nThe implemented method achieved the micro-averaged F-measure of 81% (with 86% precision and 77% recall), which ranked this system third in the challenge. The significance tests revealed the system's performance to be not significantly different from that of the second ranked system. Relative to other systems, this system achieved the best F-measure for the extraction of duration (53%) and reason (46%).\n\n\nCONCLUSION\nBased on the F-measure, the performance achieved (81%) was in line with the initial agreement between human annotators (82%), indicating that such a system may greatly facilitate the process of extracting relevant information from medical records by providing a solid basis for a manual review process."
},
{
"pmid": "22879764",
"title": "A naïve bayes approach to classifying topics in suicide notes.",
"abstract": "The authors present a system developed for the 2011 i2b2 Challenge on Sentiment Classification, whose aim was to automatically classify sentences in suicide notes using a scheme of 15 topics, mostly emotions. The system combines machine learning with a rule-based methodology. The features used to represent a problem were based on lexico-semantic properties of individual words in addition to regular expressions used to represent patterns of word usage across different topics. A naïve Bayes classifier was trained using the features extracted from the training data consisting of 600 manually annotated suicide notes. Classification was then performed using the naïve Bayes classifier as well as a set of pattern-matching rules. The classification performance was evaluated against a manually prepared gold standard consisting of 300 suicide notes, in which 1,091 out of a total of 2,037 sentences were associated with a total of 1,272 annotations. The competing systems were ranked using the micro-averaged F-measure as the primary evaluation metric. Our system achieved the F-measure of 53% (with 55% precision and 52% recall), which was significantly better than the average performance of 48.75% achieved by the 26 participating teams."
}
] |
BMC Medical Informatics and Decision Making | 31842854 | PMC6916209 | 10.1186/s12911-019-0985-7 | Representation learning for clinical time series prediction tasks in electronic health records | Background: Electronic health records (EHRs) provide possibilities to improve patient care and facilitate clinical research. However, the applications of EHRs face many challenges, such as temporality, high dimensionality, sparseness, noise, random error and systematic bias. In particular, temporal information is difficult for traditional machine learning methods to use effectively, even though the sequential information in EHRs is very useful. Method: In this paper, we propose a general-purpose patient representation learning approach to summarize sequential EHRs. Specifically, a recurrent neural network based denoising autoencoder (RNN-DAE) is employed to encode the in-hospital records of each patient into a low-dimensional dense vector. Results: Based on EHR data collected from Shuguang Hospital, affiliated to Shanghai University of Traditional Chinese Medicine, we experimentally evaluate our proposed RNN-DAE method on both a mortality prediction task and a comorbidity prediction task. Extensive experimental results show that our proposed RNN-DAE method outperforms existing methods. In addition, we apply the "Deep Feature" representation produced by our RNN-DAE method to track similar patients with t-SNE, which also yields some interesting observations. Conclusion: We propose an effective unsupervised RNN-DAE method to summarize patient sequential information in EHR data. Our proposed RNN-DAE method is useful for both the mortality prediction task and the comorbidity prediction task. | Related work: In this section, we first briefly introduce state-of-the-art models for mortality prediction and disease risk prediction in heart failure. Then, we report the progress of representation learning methods in the medical field. Mortality prediction and disease risk prediction for heart failure: Mortality prediction and disease risk prediction are two essential health applications. Many factors have been found to increase mortality in heart failure, such as demographic factors (e.g., gender), clinical factors (e.g., renal dysfunction), comorbidities (e.g., diabetes), cardiac imaging markers (e.g., cardio-thoracic ratio and ejection fraction) and serum biomarkers (e.g., brain natriuretic peptide and C-reactive protein). In recent years, many studies have shown that machine learning methods, including support vector machines, Bayesian networks, decision trees, nearest-neighbor methods, and ensemble methods, play an important role in medical research [15]. For instance, Lee et al. [16] proposed a mortality prediction model built around a patient similarity metric; three types of classification models were used in their work: logistic regression, simple statistics, and decision trees. Panahiazar et al. [17] designed a risk prediction model using support vector machines, logistic regression, random forests, AdaBoost and decision trees. Furthermore, some researchers [15, 18] experimentally compared and analyzed multiple mortality prediction models. The results of these works vary because their data and experimental settings differ, but they do demonstrate that machine learning methods have limitations to some degree.
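The classical pipelines cited above share a common shape: hand-engineered, visit-agnostic patient features (demographics, labs, comorbidity flags) fed to off-the-shelf classifiers. As a rough illustration only, the sketch below compares several such baselines with scikit-learn on a synthetic feature matrix; the data, features and hyperparameters are placeholders and do not correspond to the cohorts or settings of the cited studies.

```python
# Hypothetical baseline comparison for mortality prediction on flat, tabular EHR features.
# X (patients x features) and y (0/1 mortality labels) are synthetic placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier

rng = np.random.RandomState(0)
X = rng.normal(size=(500, 20))  # e.g., age, lab values, comorbidity indicators
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

baselines = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "support vector machine": SVC(),
    "decision tree": DecisionTreeClassifier(max_depth=5),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "AdaBoost": AdaBoostClassifier(n_estimators=200, random_state=0),
}

for name, model in baselines.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```

Because these models treat each patient as a flat feature vector, the order of clinical events is lost, which is the gap the sequence models discussed next try to close.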
Recently, deep learning methods have come to play an important role in medical research. For example, Choi et al. [19] and Lipton et al. [20] integrated time-series information into medical applications using recurrent neural networks. Nevertheless, their models focus on event-level time-series information (e.g., a series of blood pressure tests). Moreover, their models are not general-purpose and can only handle specific tasks. Cheng et al. [4] applied a deep learning model to extract phenotypes from EHR data. Although the phenotype representations could be used in further applications, the convolutional neural network developed in that work may ignore the sequentiality of events. Compared with traditional machine learning models, deep learning models require less human effort for feature engineering, but their results are more difficult to interpret. Representation learning in the medical field: Since effective feature representation is a basic step before further applications, a large number of studies have been devoted to exploring representation learning methods in the medical field. Inspired by work on word embeddings in natural language processing, many studies in recent years have focused on representing medical concepts. For example, Minarro-Giménez et al. [21] applied skip-gram to obtain representations of medical terms; their medical texts were collected from Wikipedia, PubMed, Medscape and the Merck Manuals. Choi et al. [22] learned low-dimensional vector representations of medical codes in longitudinal EHRs with a skip-gram-based model, where medical codes include disease, medication and procedure codes; in their study, the representation of a patient record is generated by aggregating the vectors of all its medical codes. Another study [10] proposed an approach named "Med2Vec" to learn representations of medical codes at both the code level and the visit level. Cui et al. [23] proposed a supervised model guided by specific prediction tasks to facilitate representations of medical codes, and it works effectively with small EHR datasets. Deepika and Geetha [24] used a semi-supervised learning framework, which incorporates representation learning of drugs, to predict drug interactions. However, these studies all operate at the concept level, meaning that the learned representations describe medical codes rather than patients. Meanwhile, patient representations are widely used in several applications to assist clinical staff, and considerable effort has been made to learn dense vector representations at the patient level. For example, Zhou et al. [12] developed an unsupervised feature selection scheme that relies on stacked denoising autoencoders (SDAs). However, their model aims to summarize time-series features within a single inpatient record, rather than the temporality across multiple inpatient records. Miotto et al. [25] adopted SDAs to generate patient representations. Furthermore, Sushil et al. [26] derived task-independent patient representations directly from clinical notes by applying SDAs and a paragraph vector model. These two methods only consider the frequency of medical events; the main difference between our work and theirs is that they ignore the temporality of EHRs. In addition, Zhang et al. [27] applied a Bi-LSTM network to derive patient vectors for a specific prediction task. Although it takes time series into consideration, this method is task-driven and supervised.
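To make the contrast with the concept-level and frequency-based representations above concrete, the following is a minimal sketch of the sequence-level idea described in the abstract: per-visit feature vectors are corrupted with dropout noise, encoded by a GRU into a single dense patient vector, and decoded to reconstruct the clean sequence. It is written in PyTorch with assumed dimensions, corruption scheme and training loop, and is not the authors' implementation.

```python
# Generic RNN denoising autoencoder over sequences of per-visit feature vectors.
# Dimensions, noise level and optimizer settings are illustrative assumptions.
import torch
import torch.nn as nn

class RNNDenoisingAutoencoder(nn.Module):
    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.corrupt = nn.Dropout(p=0.3)                      # input corruption
        self.encoder = nn.GRU(input_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.readout = nn.Linear(hidden_dim, input_dim)

    def forward(self, x):
        # x: (batch, n_visits, input_dim), one feature vector per in-hospital visit
        _, h = self.encoder(self.corrupt(x))                  # h: (1, batch, hidden_dim)
        patient_vec = h.squeeze(0)                            # dense patient representation
        dec_in = patient_vec.unsqueeze(1).repeat(1, x.size(1), 1)
        dec_out, _ = self.decoder(dec_in)
        return self.readout(dec_out), patient_vec

model = RNNDenoisingAutoencoder(input_dim=64, hidden_dim=32)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

x = torch.randn(8, 10, 64)          # toy data: 8 patients, 10 visits each
for _ in range(5):                  # toy unsupervised training loop
    recon, _ = model(x)
    loss = criterion(recon, x)      # reconstruct the uncorrupted input
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

model.eval()
with torch.no_grad():
    _, patient_vectors = model(x)   # (8, 32) vectors for downstream tasks
```

The resulting patient vectors can then be fed to downstream mortality or comorbidity classifiers, or projected with t-SNE to inspect patient similarity. | [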
"28258046",
"27913366",
"26262006",
"20473190",
"25160253",
"29928997",
"29959033",
"27185194",
"29966746",
"9377276",
"18267787",
"25717413"
] | [
{
"pmid": "28258046",
"title": "Patient Similarity in Prediction Models Based on Health Data: A Scoping Review.",
"abstract": "BACKGROUND\nPhysicians and health policy makers are required to make predictions during their decision making in various medical problems. Many advances have been made in predictive modeling toward outcome prediction, but these innovations target an average patient and are insufficiently adjustable for individual patients. One developing idea in this field is individualized predictive analytics based on patient similarity. The goal of this approach is to identify patients who are similar to an index patient and derive insights from the records of similar patients to provide personalized predictions..\n\n\nOBJECTIVE\nThe aim is to summarize and review published studies describing computer-based approaches for predicting patients' future health status based on health data and patient similarity, identify gaps, and provide a starting point for related future research.\n\n\nMETHODS\nThe method involved (1) conducting the review by performing automated searches in Scopus, PubMed, and ISI Web of Science, selecting relevant studies by first screening titles and abstracts then analyzing full-texts, and (2) documenting by extracting publication details and information on context, predictors, missing data, modeling algorithm, outcome, and evaluation methods into a matrix table, synthesizing data, and reporting results.\n\n\nRESULTS\nAfter duplicate removal, 1339 articles were screened in abstracts and titles and 67 were selected for full-text review. In total, 22 articles met the inclusion criteria. Within included articles, hospitals were the main source of data (n=10). Cardiovascular disease (n=7) and diabetes (n=4) were the dominant patient diseases. Most studies (n=18) used neighborhood-based approaches in devising prediction models. Two studies showed that patient similarity-based modeling outperformed population-based predictive methods.\n\n\nCONCLUSIONS\nInterest in patient similarity-based predictive modeling for diagnosis and prognosis has been growing. In addition to raw/coded health data, wavelet transform and term frequency-inverse document frequency methods were employed to extract predictors. Selecting predictors with potential to highlight special cases and defining new patient similarity metrics were among the gaps identified in the existing literature that provide starting points for future work. Patient status prediction models based on patient similarity and health data offer exciting potential for personalizing and ultimately improving health care, leading to better patient outcomes."
},
{
"pmid": "27913366",
"title": "$\\mathtt {Deepr}$: A Convolutional Net for Medical Records.",
"abstract": "Feature engineering remains a major bottleneck when creating predictive systems from electronic medical records. At present, an important missing element is detecting predictive regular clinical motifs from irregular episodic records. We present Deepr (short for Deep record), a new end-to-end deep learning system that learns to extract features from medical records and predicts future risk automatically. Deepr transforms a record into a sequence of discrete elements separated by coded time gaps and hospital transfers. On top of the sequence is a convolutional neural net that detects and combines predictive local clinical motifs to stratify the risk. Deepr permits transparent inspection and visualization of its inner working. We validate Deepr on hospital data to predict unplanned readmission after discharge. Deepr achieves superior accuracy compared to traditional techniques, detects meaningful clinical motifs, and uncovers the underlying structure of the disease and intervention space."
},
{
"pmid": "26262006",
"title": "Using EHRs and Machine Learning for Heart Failure Survival Analysis.",
"abstract": "\"Heart failure (HF) is a frequent health problem with high morbidity and mortality, increasing prevalence and escalating healthcare costs\" [1]. By calculating a HF survival risk score based on patient-specific characteristics from Electronic Health Records (EHRs), we can identify high-risk patients and apply individualized treatment and healthy living choices to potentially reduce their mortality risk. The Seattle Heart Failure Model (SHFM) is one of the most popular models to calculate HF survival risk that uses multiple clinical variables to predict HF prognosis and also incorporates impact of HF therapy on patient outcomes. Although the SHFM has been validated across multiple cohorts [1-5], these studies were primarily done using clinical trials databases that do not reflect routine clinical care in the community. Further, the impact of contemporary therapeutic interventions, such as beta-blockers or defibrillators, was incorporated in SHFM by extrapolation from external trials. In this study, we assess the performance of SHFM using EHRs at Mayo Clinic, and sought to develop a risk prediction model using machine learning techiniques that applies routine clinical care data. Our results shows the models which were built using EHR data are more accurate (11% improvement in AUC) with the convenience of being more readily applicable in routine clinical care. Furthermore, we demonstrate that new predictive markers (such as co-morbidities) when incorporated into our models improve prognostic performance significantly (8% improvement in AUC)."
},
{
"pmid": "20473190",
"title": "Prediction modeling using EHR data: challenges, strategies, and a comparison of machine learning approaches.",
"abstract": "BACKGROUND\nElectronic health record (EHR) databases contain vast amounts of information about patients. Machine learning techniques such as Boosting and support vector machine (SVM) can potentially identify patients at high risk for serious conditions, such as heart disease, from EHR data. However, these techniques have not yet been widely tested.\n\n\nOBJECTIVE\nTo model detection of heart failure more than 6 months before the actual date of clinical diagnosis using machine learning techniques applied to EHR data. To compare the performance of logistic regression, SVM, and Boosting, along with various variable selection methods in heart failure prediction.\n\n\nRESEARCH DESIGN\nGeisinger Clinic primary care patients with data in the EHR data from 2001 to 2006 diagnosed with heart failure between 2003 and 2006 were identified. Controls were randomly selected matched on sex, age, and clinic for this nested case-control study.\n\n\nMEASURES\nArea under the curve (AUC) of receiver operator characteristic curve was computed for each method using 10-fold cross-validation. The number of variables selected by each method was compared.\n\n\nRESULTS\nLogistic regression with model selection based on Bayesian information criterion provided the most parsimonious model, with about 10 variables selected on average, while maintaining a high AUC (0.77 in 10-fold cross-validation). Boosting with strict variable importance threshold provided similar performance.\n\n\nCONCLUSIONS\nHeart failure was predicted more than 6 months before clinical diagnosis, with AUC of about 0.76, using logistic regression and Boosting. These results were achieved even with strict model selection criteria. SVM had the poorest performance, possibly because of imbalanced data."
},
{
"pmid": "25160253",
"title": "Exploring the application of deep learning techniques on medical text corpora.",
"abstract": "With the rapidly growing amount of biomedical literature it becomes increasingly difficult to find relevant information quickly and reliably. In this study we applied the word2vec deep learning toolkit to medical corpora to test its potential for improving the accessibility of medical knowledge. We evaluated the efficiency of word2vec in identifying properties of pharmaceuticals based on mid-sized, unstructured medical text corpora without any additional background knowledge. Properties included relationships to diseases ('may treat') or physiological processes ('has physiological effect'). We evaluated the relationships identified by word2vec through comparison with the National Drug File - Reference Terminology (NDF-RT) ontology. The results of our first evaluation were mixed, but helped us identify further avenues for employing deep learning technologies in medical information retrieval, as well as using them to complement curated knowledge captured in ontologies and taxonomies."
},
{
"pmid": "29928997",
"title": "Prediction task guided representation learning of medical codes in EHR.",
"abstract": "There have been rapidly growing applications using machine learning models for predictive analytics in Electronic Health Records (EHR) to improve the quality of hospital services and the efficiency of healthcare resource utilization. A fundamental and crucial step in developing such models is to convert medical codes in EHR to feature vectors. These medical codes are used to represent diagnoses or procedures. Their vector representations have a tremendous impact on the performance of machine learning models. Recently, some researchers have utilized representation learning methods from Natural Language Processing (NLP) to learn vector representations of medical codes. However, most previous approaches are unsupervised, i.e. the generation of medical code vectors is independent from prediction tasks. Thus, the obtained feature vectors may be inappropriate for a specific prediction task. Moreover, unsupervised methods often require a lot of samples to obtain reliable results, but most practical problems have very limited patient samples. In this paper, we develop a new method called Prediction Task Guided Health Record Aggregation (PTGHRA), which aggregates health records guided by prediction tasks, to construct training corpus for various representation learning models. Compared with unsupervised approaches, representation learning models integrated with PTGHRA yield a significant improvement in predictive capability of generated medical code vectors, especially for limited training samples."
},
{
"pmid": "29959033",
"title": "A meta-learning framework using representation learning to predict drug-drug interaction.",
"abstract": "MOTIVATION\nPredicting Drug-Drug Interaction (DDI) has become a crucial step in the drug discovery and development process, owing to the rise in the number of drugs co-administered with other drugs. Consequently, the usage of computational methods for DDI prediction can greatly help in reducing the costs of in vitro experiments done during the drug development process. With lots of emergent data sources that describe the properties and relationships between drugs and drug-related entities (gene, protein, disease, and side effects), an integrated approach that uses multiple data sources would be most effective.\n\n\nMETHOD\nWe propose a semi-supervised learning framework which utilizes representation learning, positive-unlabeled (PU) learning and meta-learning efficiently to predict the drug interactions. Information from multiple data sources is used to create feature networks, which is used to learn the meta-knowledge about the DDIs. Given that DDIs have only positive labeled data, a PU learning-based classifier is used to generate meta-knowledge from feature networks. Finally, a meta-classifier that combines the predicted probability of interaction from the meta-knowledge learnt is designed.\n\n\nRESULTS\nNode2vec, a network representation learning method and bagging SVM, a PU learning algorithm, are used in this work. Both representation learning and PU learning algorithms improve the performance of the system by 22% and 12.7% respectively. The meta-classifier performs better and predicts more reliable DDIs than the base classifiers."
},
{
"pmid": "27185194",
"title": "Deep Patient: An Unsupervised Representation to Predict the Future of Patients from the Electronic Health Records.",
"abstract": "Secondary use of electronic health records (EHRs) promises to advance clinical research and better inform clinical decision making. Challenges in summarizing and representing patient data prevent widespread practice of predictive modeling using EHRs. Here we present a novel unsupervised deep feature learning method to derive a general-purpose patient representation from EHR data that facilitates clinical predictive modeling. In particular, a three-layer stack of denoising autoencoders was used to capture hierarchical regularities and dependencies in the aggregated EHRs of about 700,000 patients from the Mount Sinai data warehouse. The result is a representation we name \"deep patient\". We evaluated this representation as broadly predictive of health states by assessing the probability of patients to develop various diseases. We performed evaluation using 76,214 test patients comprising 78 diseases from diverse clinical domains and temporal windows. Our results significantly outperformed those achieved using representations based on raw EHR data and alternative feature learning strategies. Prediction performance for severe diabetes, schizophrenia, and various cancers were among the top performing. These findings indicate that deep learning applied to EHRs can derive patient representations that offer improved clinical predictions, and could provide a machine learning framework for augmenting clinical decision systems."
},
{
"pmid": "29966746",
"title": "Patient representation learning and interpretable evaluation using clinical notes.",
"abstract": "We have three contributions in this work: 1. We explore the utility of a stacked denoising autoencoder and a paragraph vector model to learn task-independent dense patient representations directly from clinical notes. To analyze if these representations are transferable across tasks, we evaluate them in multiple supervised setups to predict patient mortality, primary diagnostic and procedural category, and gender. We compare their performance with sparse representations obtained from a bag-of-words model. We observe that the learned generalized representations significantly outperform the sparse representations when we have few positive instances to learn from, and there is an absence of strong lexical features. 2. We compare the model performance of the feature set constructed from a bag of words to that obtained from medical concepts. In the latter case, concepts represent problems, treatments, and tests. We find that concept identification does not improve the classification performance. 3. We propose novel techniques to facilitate model interpretability. To understand and interpret the representations, we explore the best encoded features within the patient representations obtained from the autoencoder model. Further, we calculate feature sensitivity across two networks to identify the most significant input features for different classification tasks when we use these pretrained representations as the supervised input. We successfully extract the most influential features for the pipeline using this technique."
},
{
"pmid": "9377276",
"title": "Long short-term memory.",
"abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."
},
{
"pmid": "18267787",
"title": "Learning long-term dependencies with gradient descent is difficult.",
"abstract": "Recurrent neural networks can be used to map input sequences to output sequences, such as for recognition, production or prediction problems. However, practical difficulties have been reported in training recurrent neural networks to perform tasks in which the temporal contingencies present in the input/output sequences span long intervals. We show why gradient based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases. These results expose a trade-off between efficient learning by gradient descent and latching on information for long periods. Based on an understanding of this problem, alternatives to standard gradient descent are considered."
},
{
"pmid": "25717413",
"title": "Towards personalized medicine: leveraging patient similarity and drug similarity analytics.",
"abstract": "The rapid adoption of electronic health records (EHR) provides a comprehensive source for exploratory and predictive analytic to support clinical decision-making. In this paper, we investigate how to utilize EHR to tailor treatments to individual patients based on their likelihood to respond to a therapy. We construct a heterogeneous graph which includes two domains (patients and drugs) and encodes three relationships (patient similarity, drug similarity, and patient-drug prior associations). We describe a novel approach for performing a label propagation procedure to spread the label information representing the effectiveness of different drugs for different patients over this heterogeneous graph. The proposed method has been applied on a real-world EHR dataset to help identify personalized treatments for hypercholesterolemia. The experimental results demonstrate the effectiveness of the approach and suggest that the combination of appropriate patient similarity and drug similarity analytics could lead to actionable insights for personalized medicine. Particularly, by leveraging drug similarity in combination with patient similarity, our method could perform well even on new or rarely used drugs for which there are few records of known past performance."
}
] |
Frontiers in Neuroscience | 31920476 | PMC6920213 | 10.3389/fnins.2019.01290 | ACOEC-FD: Ant Colony Optimization for Learning Brain Effective Connectivity Networks From Functional MRI and Diffusion Tensor Imaging | Identifying brain effective connectivity (EC) networks from neuroimaging data has become an effective tool that can evaluate normal brain functions and the injuries associated with neurodegenerative diseases. So far, many methods have been used to identify EC networks. However, most current research focuses on learning EC networks from single-modal imaging data such as functional magnetic resonance imaging (fMRI) data. This paper proposes a new method, called ACOEC-FD, to learn EC networks from fMRI and diffusion tensor imaging (DTI) using ant colony optimization (ACO). First, ACOEC-FD uses DTI data to acquire some positively correlated relations among regions of interest (ROI), and takes them as anatomical constraint information to effectively restrict the search space of candidate arcs in an EC network. ACOEC-FD then achieves multi-modal imaging data integration by incorporating the anatomical constraint information into the heuristic function of the probabilistic transition rules, encouraging ants to preferentially search for connections between structurally connected regions. Through simulation studies on generated datasets and real fMRI-DTI datasets, we demonstrate that the proposed approach yields improved inference of EC compared to some methods that use only fMRI data. | 2. Related Works
2.1. Ant Colony Optimization (ACO)
Ant colony optimization (ACO) is a meta-heuristic search algorithm inspired by ant foraging behavior. Ants use pheromones to communicate with each other while foraging. The more pheromone deposited on a route, the more likely ants are to select that route; because a shorter path accumulates more pheromone over equal periods of time, more and more ants come to select the shorter path. Thus, when one ant finds a very short path, other ants are more likely to follow it. This information feedback eventually leads all ants to select and follow the shortest path. In detail, each ant constructs a solution by starting from a start node and moving step-by-step to feasible neighbor nodes. In the meantime, pheromones evaporate over time, so the pheromone trails of infrequently traveled paths become weaker, and vice versa.
2.2. ACO for Learning Brain Effective Connectivity (ACOEC)
ACOEC (Liu et al., 2016) employs ACO to search for the best EC network and, like other methods based on Bayesian networks (BNs), treats each EC network as a directed acyclic graph (DAG). It views each ant in ACO as a candidate solution (an EC network), employs the K2 scoring metric to evaluate each ant in the population, and guides the ants to search the feasible solution space for the global maximum, i.e., the network with the best K2 score.
In ACOEC, each ant k starts from an empty graph G(0) that contains all nodes (ROIs) and no arcs, and proceeds by adding one arc at a time; this process is repeated until no arc addition can further increase the score of the candidate solution.
At time t, the probabilistic transition rule by which an ant k selects a directed arc $a_{ij}$ between two ROIs $X_i$ and $X_j$ from the current set of candidate arcs is defined as

$$a_{i,j}=\begin{cases}\arg\max_{i,j\in DA_k(t)}\left\{[\tau_{ij}(t)]\cdot[\eta_{ij}(t)]^{\beta}\right\}, & \text{if } q\le q_0,\\[4pt] a_{I,J}, & \text{otherwise,}\end{cases}\qquad(1)$$

where $\tau_{ij}(t)$ is the pheromone concentration, $\eta_{ij}(t)$ represents the heuristic information of $a_{ij}$, and $\beta$ is a weighting coefficient that controls how strongly $\eta_{ij}(t)$ influences arc selection. $DA_k(t)$ is the set of all candidate arcs whose heuristic information is larger than zero; $q_0$ ($0 \le q_0 < 1$) is an initial parameter that determines the relative importance of exploration vs. exploitation; $q$ is a random number uniformly sampled in $[0,1]$; and $I$ and $J$ are a pair of ROIs randomly selected according to the probability

$$p_{ij}^{k}(t)=\begin{cases}\dfrac{[\tau_{ij}(t)]^{\alpha}\cdot[\eta_{ij}(t)]^{\beta}}{\sum_{r,l\in DA_k(t)}[\tau_{rl}(t)]^{\alpha}\cdot[\eta_{rl}(t)]^{\beta}}, & \text{if } i,j\in DA_k(t),\\[4pt] 0, & \text{otherwise,}\end{cases}\qquad(2)$$

where $\alpha$ denotes the relative importance of the pheromone $\tau_{ij}(t)$ left by ants. The heuristic function $\eta_{ij}$ is defined as

$$\eta_{ij}(t)=\omega\cdot\big(f(X_i,\,Pa(X_i)\cup X_j)-f(X_i,\,Pa(X_i))\big),\qquad(3)$$

where $f(X_i, Pa(X_i))$ is the K2 score of the current structure (before adding the arc), $f(X_i, Pa(X_i)\cup X_j)$ is the K2 score of the new structure obtained by adding the arc $X_j \rightarrow X_i$, $\omega = 1 + Inf(X_i, X_j)$ is a weighting factor associated with the connection intensity of the arc, and $Inf(X_i, X_j)$ is the mutual information between $X_i$ and $X_j$. For $\tau_{ij}(t)$, ACOEC carries out two separate pheromone-updating processes. Once the ant colony search iterations end, the algorithm returns the optimal solution $G^{+}$, i.e., the EC network with the highest K2 score, and calculates the connection strength of every arc in $G^{+}$.
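For readers who find code easier to follow than the formulas, the minimal Python sketch below shows one way the arc-selection rule of Eqs. (1)-(2) and the heuristic of Eq. (3) could be coded; it is an illustrative reading of the equations, not the authors' implementation. The callables `score_fn` and `mi_fn` are hypothetical stand-ins for the K2 score and mutual information routines, `parents` is assumed to map each ROI to its current parent set, and ACOEC's pheromone-update steps and acyclicity checks are omitted.

```python
import random


def heuristic(i, j, parents, score_fn, mi_fn):
    """Eq. (3): K2-score gain of adding the arc X_j -> X_i, weighted by 1 + mutual information."""
    gain = score_fn(i, parents[i] | {j}) - score_fn(i, parents[i])
    return (1.0 + mi_fn(i, j)) * gain


def select_arc(candidates, tau, eta, alpha=1.0, beta=2.0, q0=0.8):
    """Eqs. (1)-(2): pseudorandom-proportional selection of the next arc.

    candidates : list of (i, j) arcs with positive heuristic, i.e. DA_k(t)
    tau, eta   : dicts mapping an arc (i, j) to its pheromone / heuristic value
    """
    if not candidates:
        return None
    if random.random() <= q0:
        # Exploitation branch of Eq. (1): take the arc maximizing tau * eta^beta.
        return max(candidates, key=lambda arc: tau[arc] * eta[arc] ** beta)
    # Exploration branch: roulette-wheel choice using the probabilities of Eq. (2).
    weights = [tau[arc] ** alpha * eta[arc] ** beta for arc in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]
```

The threshold `q0` reproduces the exploitation-vs-exploration trade-off of Eq. (1): larger values make the ants greedier, smaller values make them sample more broadly from Eq. (2). | [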
"28560308",
"24562737",
"22432952",
"28323165",
"17548818",
"19188601",
"31695580",
"27045295",
"31249501",
"29867310",
"29695961",
"24140939",
"16092131",
"22099467",
"20006713",
"19747552",
"17995910",
"19961876",
"20817103",
"19523523",
"22108139",
"24102130",
"19235882",
"17133390",
"23422254",
"28559793",
"25750621",
"26731636",
"24103849",
"27247279"
] | [
{
"pmid": "28560308",
"title": "Resting-state network dysfunction in Alzheimer's disease: A systematic review and meta-analysis.",
"abstract": "INTRODUCTION\nWe performed a systematic review and meta-analysis of the Alzheimer's disease (AD) literature to examine consistency of functional connectivity alterations in AD dementia and mild cognitive impairment, using resting-state functional magnetic resonance imaging.\n\n\nMETHODS\nStudies were screened using a standardized procedure. Multiresolution statistics were performed to assess the spatial consistency of findings across studies.\n\n\nRESULTS\nThirty-four studies were included (1363 participants, average 40 per study). Consistent alterations in connectivity were found in the default mode, salience, and limbic networks in patients with AD dementia, mild cognitive impairment, or in both groups. We also identified a strong tendency in the literature toward specific examination of the default mode network.\n\n\nDISCUSSION\nConvergent evidence across the literature supports the use of resting-state connectivity as a biomarker of AD. The locations of consistent alterations suggest that highly connected hub regions in the brain might be an early target of AD."
},
{
"pmid": "24562737",
"title": "Functional brain connectivity using fMRI in aging and Alzheimer's disease.",
"abstract": "Normal aging and Alzheimer's disease (AD) cause profound changes in the brain's structure and function. AD in particular is accompanied by widespread cortical neuronal loss, and loss of connections between brain systems. This degeneration of neural pathways disrupts the functional coherence of brain activation. Recent innovations in brain imaging have detected characteristic disruptions in functional networks. Here we review studies examining changes in functional connectivity, measured through fMRI (functional magnetic resonance imaging), starting with healthy aging and then Alzheimer's disease. We cover studies that employ the three primary methods to analyze functional connectivity--seed-based, ICA (independent components analysis), and graph theory. At the end we include a brief discussion of other methodologies, such as EEG (electroencephalography), MEG (magnetoencephalography), and PET (positron emission tomography). We also describe multi-modal studies that combine rsfMRI (resting state fMRI) with PET imaging, as well as studies examining the effects of medications. Overall, connectivity and network integrity appear to decrease in healthy aging, but this decrease is accelerated in AD, with specific systems hit hardest, such as the default mode network (DMN). Functional connectivity is a relatively new topic of research, but it holds great promise in revealing how brain network dynamics change across the lifespan and in disease."
},
{
"pmid": "22432952",
"title": "Functional and effective connectivity: a review.",
"abstract": "Over the past 20 years, neuroimaging has become a predominant technique in systems neuroscience. One might envisage that over the next 20 years the neuroimaging of distributed processing and connectivity will play a major role in disclosing the brain's functional architecture and operational principles. The inception of this journal has been foreshadowed by an ever-increasing number of publications on functional connectivity, causal modeling, connectomics, and multivariate analyses of distributed patterns of brain responses. I accepted the invitation to write this review with great pleasure and hope to celebrate and critique the achievements to date, while addressing the challenges ahead."
},
{
"pmid": "28323165",
"title": "On the importance of modeling fMRI transients when estimating effective connectivity: A dynamic causal modeling study using ASL data.",
"abstract": "Effective connectivity is commonly assessed using blood oxygenation level-dependent (BOLD) signals. In (Havlicek et al., 2015), we presented a novel, physiologically informed dynamic causal model (P-DCM) that extends current generative models. We demonstrated the improvements afforded by P-DCM in terms of the ability to model commonly observed neuronal and vascular transients in single regions. Here, we assess the ability of the novel and previous DCM variants to estimate effective connectivity among a network of five ROIs driven by a visuo-motor task. We demonstrate that connectivity estimates depend sensitively on the DCM used, due to differences in the modeling of hemodynamic response transients; such as the post-stimulus undershoot or adaptation during stimulation. In addition, using a novel DCM for arterial spin labeling (ASL) fMRI that measures BOLD and CBF signals simultaneously, we confirmed our findings (by using the BOLD data alone and in conjunction with CBF). We show that P-DCM provides better estimates of effective connectivity, regardless of whether it is applied to BOLD data alone or to ASL time-series, and that all new aspects of P-DCM (i.e. neuronal, neurovascular, hemodynamic components) constitute an improvement compared to those in the previous DCM variants. In summary, (i) accurate modeling of fMRI response transients is crucial to obtain valid effective connectivity estimates and (ii) any additional hemodynamic data, such as provided by ASL, increases the ability to disambiguate neuronal and vascular effects present in the BOLD signal."
},
{
"pmid": "17548818",
"title": "Network structure of cerebral cortex shapes functional connectivity on multiple time scales.",
"abstract": "Neuronal dynamics unfolding within the cerebral cortex exhibit complex spatial and temporal patterns even in the absence of external input. Here we use a computational approach in an attempt to relate these features of spontaneous cortical dynamics to the underlying anatomical connectivity. Simulating nonlinear neuronal dynamics on a network that captures the large-scale interregional connections of macaque neocortex, and applying information theoretic measures to identify functional networks, we find structure-function relations at multiple temporal scales. Functional networks recovered from long windows of neural activity (minutes) largely overlap with the underlying structural network. As a result, hubs in these long-run functional networks correspond to structural hubs. In contrast, significant fluctuations in functional topology are observed across the sequence of networks recovered from consecutive shorter (seconds) time windows. The functional centrality of individual nodes varies across time as interregional couplings shift. Furthermore, the transient couplings between brain regions are coordinated in a manner that reveals the existence of two anticorrelated clusters. These clusters are linked by prefrontal and parietal regions that are hub nodes in the underlying structural network. At an even faster time scale (hundreds of milliseconds) we detect individual episodes of interregional phase-locking and find that slow variations in the statistics of these transient episodes, contingent on the underlying anatomical structure, produce the transfer entropy functional connectivity and simulated blood oxygenation level-dependent correlation patterns observed on slower time scales."
},
{
"pmid": "19188601",
"title": "Predicting human resting-state functional connectivity from structural connectivity.",
"abstract": "In the cerebral cortex, the activity levels of neuronal populations are continuously fluctuating. When neuronal activity, as measured using functional MRI (fMRI), is temporally coherent across 2 populations, those populations are said to be functionally connected. Functional connectivity has previously been shown to correlate with structural (anatomical) connectivity patterns at an aggregate level. In the present study we investigate, with the aid of computational modeling, whether systems-level properties of functional networks--including their spatial statistics and their persistence across time--can be accounted for by properties of the underlying anatomical network. We measured resting state functional connectivity (using fMRI) and structural connectivity (using diffusion spectrum imaging tractography) in the same individuals at high resolution. Structural connectivity then provided the couplings for a model of macroscopic cortical dynamics. In both model and data, we observed (i) that strong functional connections commonly exist between regions with no direct structural connection, rendering the inference of structural connectivity from functional connectivity impractical; (ii) that indirect connections and interregional distance accounted for some of the variance in functional connectivity that was unexplained by direct structural connectivity; and (iii) that resting-state functional connectivity exhibits variability within and across both scanning sessions and model runs. These empirical and modeling results demonstrate that although resting state functional connectivity is variable and is frequently present between regions without direct structural linkage, its strength, persistence, and spatial statistics are nevertheless constrained by the large-scale anatomical structure of the human cerebral cortex."
},
{
"pmid": "31695580",
"title": "Pairwise Likelihood Ratios for Estimation of Non-Gaussian Structural Equation Models.",
"abstract": "We present new measures of the causal direction, or direction of effect, between two non-Gaussian random variables. They are based on the likelihood ratio under the linear non-Gaussian acyclic model (LiNGAM). We also develop simple first-order approximations of the likelihood ratio and analyze them based on related cumulant-based measures, which can be shown to find the correct causal directions. We show how to apply these measures to estimate LiNGAM for more than two variables, and even in the case of more variables than observations. We further extend the method to cyclic and nonlinear models. The proposed framework is statistically at least as good as existing ones in the cases of few data points or noisy data, and it is computationally and conceptually very simple. Results on simulated fMRI data indicate that the method may be useful in neuroimaging where the number of time points is typically quite small."
},
{
"pmid": "27045295",
"title": "Learning Effective Connectivity Network Structure from fMRI Data Based on Artificial Immune Algorithm.",
"abstract": "Many approaches have been designed to extract brain effective connectivity from functional magnetic resonance imaging (fMRI) data. However, few of them can effectively identify the connectivity network structure due to different defects. In this paper, a new algorithm is developed to infer the effective connectivity between different brain regions by combining artificial immune algorithm (AIA) with the Bayes net method, named as AIAEC. In the proposed algorithm, a brain effective connectivity network is mapped onto an antibody, and four immune operators are employed to perform the optimization process of antibodies, including clonal selection operator, crossover operator, mutation operator and suppression operator, and finally gets an antibody with the highest K2 score as the solution. AIAEC is then tested on Smith's simulated datasets, and the effect of the different factors on AIAEC is evaluated, including the node number, session length, as well as the other potential confounding factors of the blood oxygen level dependent (BOLD) signal. It was revealed that, as contrast to other existing methods, AIAEC got the best performance on the majority of the datasets. It was also found that AIAEC could attain a relative better solution under the influence of many factors, although AIAEC was differently affected by the aforementioned factors. AIAEC is thus demonstrated to be an effective method for detecting the brain effective connectivity."
},
{
"pmid": "31249501",
"title": "Application of Graph Theory for Identifying Connectivity Patterns in Human Brain Networks: A Systematic Review.",
"abstract": "Background: Analysis of the human connectome using functional magnetic resonance imaging (fMRI) started in the mid-1990s and attracted increasing attention in attempts to discover the neural underpinnings of human cognition and neurological disorders. In general, brain connectivity patterns from fMRI data are classified as statistical dependencies (functional connectivity) or causal interactions (effective connectivity) among various neural units. Computational methods, especially graph theory-based methods, have recently played a significant role in understanding brain connectivity architecture. Objectives: Thanks to the emergence of graph theoretical analysis, the main purpose of the current paper is to systematically review how brain properties can emerge through the interactions of distinct neuronal units in various cognitive and neurological applications using fMRI. Moreover, this article provides an overview of the existing functional and effective connectivity methods used to construct the brain network, along with their advantages and pitfalls. Methods: In this systematic review, the databases Science Direct, Scopus, arXiv, Google Scholar, IEEE Xplore, PsycINFO, PubMed, and SpringerLink are employed for exploring the evolution of computational methods in human brain connectivity from 1990 to the present, focusing on graph theory. The Cochrane Collaboration's tool was used to assess the risk of bias in individual studies. Results: Our results show that graph theory and its implications in cognitive neuroscience have attracted the attention of researchers since 2009 (as the Human Connectome Project launched), because of their prominent capability in characterizing the behavior of complex brain systems. Although graph theoretical approach can be generally applied to either functional or effective connectivity patterns during rest or task performance, to date, most articles have focused on the resting-state functional connectivity. Conclusions: This review provides an insight into how to utilize graph theoretical measures to make neurobiological inferences regarding the mechanisms underlying human cognition and behavior as well as different brain disorders."
},
{
"pmid": "29867310",
"title": "Sparse Estimation of Resting-State Effective Connectivity From fMRI Cross-Spectra.",
"abstract": "In functional magnetic resonance imaging (fMRI), functional connectivity is conventionally characterized by correlations between fMRI time series, which are intrinsically undirected measures of connectivity. Yet, some information about the directionality of network connections can nevertheless be extracted from the matrix of pairwise temporal correlations between all considered time series, when expressed in the frequency-domain as a cross-spectral density matrix. Using a sparsity prior, it then becomes possible to determine a unique directed network topology that best explains the observed undirected correlations, without having to rely on temporal precedence relationships that may not be valid in fMRI. Applying this method on simulated data with 100 nodes yielded excellent retrieval of the underlying directed networks under a wide variety of conditions. Importantly, the method did not depend on temporal precedence to establish directionality, thus reducing susceptibility to hemodynamic variability. The computational efficiency of the algorithm was sufficient to enable whole-brain estimations, thus circumventing the problem of missing nodes that otherwise occurs in partial-brain analyses. Applying the method to real resting-state fMRI data acquired with a high temporal resolution, the inferred networks showed good consistency with structural connectivity obtained from diffusion tractography in the same subjects. Interestingly, this agreement could also be seen when considering high-frequency rather than low-frequency connectivity (average correlation: r = 0.26 for f < 0.3 Hz, r = 0.43 for 0.3 < f < 5 Hz). Moreover, this concordance was significantly better (p < 0.05) than for networks obtained with conventional functional connectivity based on correlations (average correlation r = 0.18). The presented methodology thus appears to be well-suited for fMRI, particularly given its lack of explicit dependence on temporal lag structure, and is readily applicable to whole-brain effective connectivity estimation."
},
{
"pmid": "29695961",
"title": "Altered Functional Connectivity of Insular Subregions in Alzheimer's Disease.",
"abstract": "Recent researches have demonstrated that the insula is the crucial hub of the human brain networks and most vulnerable region of Alzheimer's disease (AD). However, little is known about the changes of functional connectivity of insular subregions in the AD patients. In this study, we collected resting-state functional magnetic resonance imaging (fMRI) data including 32 AD patients and 38 healthy controls (HCs). By defining three subregions of insula, we mapped whole-brain resting-state functional connectivity (RSFC) and identified several distinct RSFC patterns of the insular subregions: For positive connectivity, three cognitive-related RSFC patterns were identified within insula that suggest anterior-to-posterior functional subdivisions: (1) an dorsal anterior zone of the insula that exhibits RSFC with executive control network (ECN); (2) a ventral anterior zone of insula, exhibits functional connectivity with the salience network (SN); (3) a posterior zone along the insula exhibits functional connectivity with the sensorimotor network (SMN). In addition, we found significant negative connectivities between the each insular subregion and several special default mode network (DMN) regions. Compared with controls, the AD patients demonstrated distinct disruption of positive RSFCs in the different network (ECN and SMN), suggesting the impairment of the functional integrity. There were no differences of the positive RSFCs in the SN between the two groups. On the other hand, several DMN regions showed increased negative RSFCs to the sub-region of insula in the AD patients, indicating compensatory plasticity. Furthermore, these abnormal insular subregions RSFCs are closely correlated with cognitive performances in the AD patients. Our findings suggested that different insular subregions presented distinct RSFC patterns with various functional networks, which are differently affected in the AD patients."
},
{
"pmid": "24140939",
"title": "Bayesian networks for fMRI: a primer.",
"abstract": "Bayesian network analysis is an attractive approach for studying the functional integration of brain networks, as it includes both the locations of connections between regions of the brain (functional connectivity) and more importantly the direction of the causal relationship between the regions (directed functional connectivity). Further, these approaches are more attractive than other functional connectivity analyses in that they can often operate on larger sets of nodes and run searches over a wide range of candidate networks. An important study by Smith et al. (2011) illustrated that many Bayesian network approaches did not perform well in identifying the directionality of connections in simulated single-subject data. Since then, new Bayesian network approaches have been developed that have overcome the failures in the Smith work. Additionally, an important discovery was made that shows a preprocessing step used in the Smith data puts some of the Bayesian network methods at a disadvantage. This work provides a review of Bayesian network analyses, focusing on the methods used in the Smith work as well as methods developed since 2011 that have improved estimation performance. Importantly, only approaches that have been specifically designed for fMRI data perform well, as they have been tailored to meet the challenges of fMRI data. Although this work does not suggest a single best model, it describes the class of models that perform best and highlights the features of these models that allow them to perform well on fMRI data. Specifically, methods that rely on non-Gaussianity to direct causal relationships in the network perform well."
},
{
"pmid": "16092131",
"title": "A Bayesian approach to determining connectivity of the human brain.",
"abstract": "Recent work regarding the analysis of brain imaging data has focused on examining functional and effective connectivity of the brain. We develop a novel descriptive and inferential method to analyze the connectivity of the human brain using functional MRI (fMRI). We assess the relationship between pairs of distinct brain regions by comparing expected joint and marginal probabilities of elevated activity of voxel pairs through a Bayesian paradigm, which allows for the incorporation of previously known anatomical and functional information. We define the relationship between two distinct brain regions by measures of functional connectivity and ascendancy. After assessing the relationship between all pairs of brain voxels, we are able to construct hierarchical functional networks from any given brain region and assess significant functional connectivity and ascendancy in these networks. We illustrate the use of our connectivity analysis using data from an fMRI study of social cooperation among women who played an iterated \"Prisoner's Dilemma\" game. Our analysis reveals a functional network that includes the amygdala, anterior insula cortex, and anterior cingulate cortex, and another network that includes the ventral striatum, orbitofrontal cortex, and anterior insula. Our method can be used to develop causal brain networks for use with structural equation modeling and dynamic causal models."
},
{
"pmid": "22099467",
"title": "Functional network organization of the human brain.",
"abstract": "Real-world complex systems may be mathematically modeled as graphs, revealing properties of the system. Here we study graphs of functional brain organization in healthy adults using resting state functional connectivity MRI. We propose two novel brain-wide graphs, one of 264 putative functional areas, the other a modification of voxelwise networks that eliminates potentially artificial short-distance relationships. These graphs contain many subgraphs in good agreement with known functional brain systems. Other subgraphs lack established functional identities; we suggest possible functional characteristics for these subgraphs. Further, graph measures of the areal network indicate that the default mode subgraph shares network properties with sensory and motor subgraphs: it is internally integrated but isolated from other subgraphs, much like a \"processing\" system. The modified voxelwise graph also reveals spatial motifs in the patterning of systems across the cortex."
},
{
"pmid": "20006713",
"title": "Impairment and compensation coexist in amnestic MCI default mode network.",
"abstract": "Mild cognitive impairment (MCI) is the transitional, heterogeneous continuum from healthy elderly to Alzheimer's disease (AD). Previous studies have shown that brain functional activity in the default mode network (DMN) is impaired in AD patients. However, altering DMN activity patterns in MCI patients remains largely unclear. The present study utilized resting-state functional magnetic resonance imaging (fMRI) and an independent component analysis (ICA) approach to investigate DMN activity in 14 amnestic MCI (aMCI) patients and 14 healthy elderly. Compared to the aMCI patients, the healthy elderly exhibited increased functional activity in the DMN regions, including the bilateral precuneus/posterior cingulate cortex, right inferior parietal lobule, and left fusiform gyrus, as well as a trend towards increased right medial temporal lobe activity. The aMCI patients exhibited increased activity in the left prefrontal cortex, inferior parietal lobule, and middle temporal gyrus compared to the healthy elderly. Increased frontal-parietal activity may indicate compensatory processes in the aMCI patients. These findings suggest that abnormal DMN activity could be useful as an imaging-based biomarker for the diagnosis and monitoring of aMCI patients."
},
{
"pmid": "19747552",
"title": "Six problems for causal inference from fMRI.",
"abstract": "Neuroimaging (e.g. fMRI) data are increasingly used to attempt to identify not only brain regions of interest (ROIs) that are especially active during perception, cognition, and action, but also the qualitative causal relations among activity in these regions (known as effective connectivity; Friston, 1994). Previous investigations and anatomical and physiological knowledge may somewhat constrain the possible hypotheses, but there often remains a vast space of possible causal structures. To find actual effective connectivity relations, search methods must accommodate indirect measurements of nonlinear time series dependencies, feedback, multiple subjects possibly varying in identified regions of interest, and unknown possible location-dependent variations in BOLD response delays. We describe combinations of procedures that under these conditions find feed-forward sub-structure characteristic of a group of subjects. The method is illustrated with an empirical data set and confirmed with simulations of time series of non-linear, randomly generated, effective connectivities, with feedback, subject to random differences of BOLD delays, with regions of interest missing at random for some subjects, measured with noise approximating the signal to noise ratio of the empirical data."
},
{
"pmid": "17995910",
"title": "Combining structural and functional neuroimaging data for studying brain connectivity: a review.",
"abstract": "Different brain areas are thought to be integrated into large-scale networks to support cognitive function. Recent approaches for investigating structural organization and functional coordination within these networks involve measures of connectivity among brain areas. We review studies combining in vivo structural and functional brain connectivity data, where (a) structural connectivity analysis, mostly based on diffusion tensor imaging is paired with voxel-wise analysis of functional neuroimaging data or (b) the measurement of functional connectivity based on covariance analysis is guided/aided by structural connectivity data. These studies provide insights into the relationships between brain structure and function. Promising trends involve (a) studies where both functional and anatomical connectivity data are collected using high-resolution neuroimaging methods and (b) the development of advanced quantitative models of integration."
},
{
"pmid": "19961876",
"title": "A MATLAB toolbox for Granger causal connectivity analysis.",
"abstract": "Assessing directed functional connectivity from time series data is a key challenge in neuroscience. One approach to this problem leverages a combination of Granger causality analysis and network theory. This article describes a freely available MATLAB toolbox--'Granger causal connectivity analysis' (GCCA)--which provides a core set of methods for performing this analysis on a variety of neuroscience data types including neuroelectric, neuromagnetic, functional MRI, and other neural signals. The toolbox includes core functions for Granger causality analysis of multivariate steady-state and event-related data, functions to preprocess data, assess statistical significance and validate results, and to compute and display network-level indices of causal connectivity including 'causal density' and 'causal flow'. The toolbox is deliberately small, enabling its easy assimilation into the repertoire of researchers. It is however readily extensible given proficiency with the MATLAB language."
},
{
"pmid": "20817103",
"title": "Network modelling methods for FMRI.",
"abstract": "There is great interest in estimating brain \"networks\" from FMRI data. This is often attempted by identifying a set of functional \"nodes\" (e.g., spatial ROIs or ICA maps) and then conducting a connectivity analysis between the nodes, based on the FMRI timeseries associated with the nodes. Analysis methods range from very simple measures that consider just two nodes at a time (e.g., correlation between two nodes' timeseries) to sophisticated approaches that consider all nodes simultaneously and estimate one global network model (e.g., Bayes net models). Many different methods are being used in the literature, but almost none has been carefully validated or compared for use on FMRI timeseries data. In this work we generate rich, realistic simulated FMRI data for a wide range of underlying networks, experimental protocols and problematic confounds in the data, in order to compare different connectivity estimation approaches. Our results show that in general correlation-based approaches can be quite successful, methods based on higher-order statistics are less sensitive, and lag-based approaches perform very poorly. More specifically: there are several methods that can give high sensitivity to network connection detection on good quality FMRI data, in particular, partial correlation, regularised inverse covariance estimation and several Bayes net methods; however, accurate estimation of connection directionality is more difficult to achieve, though Patel's τ can be reasonably successful. With respect to the various confounds added to the data, the most striking result was that the use of functionally inaccurate ROIs (when defining the network nodes and extracting their associated timeseries) is extremely damaging to network estimation; hence, results derived from inappropriate ROI definition (such as via structural atlases) should be regarded with great caution."
},
{
"pmid": "19523523",
"title": "Tractography-based priors for dynamic causal models.",
"abstract": "Functional integration in the brain rests on anatomical connectivity (the presence of axonal connections) and effective connectivity (the causal influences mediated by these connections). The deployment of anatomical connections provides important constraints on effective connectivity, but does not fully determine it, because synaptic connections can be expressed functionally in a dynamic and context-dependent fashion. Although it is generally assumed that anatomical connectivity data is important to guide the construction of neurobiologically realistic models of effective connectivity; the degree to which these models actually profit from anatomical constraints has not yet been formally investigated. Here, we use diffusion weighted imaging and probabilistic tractography to specify anatomically informed priors for dynamic causal models (DCMs) of fMRI data. We constructed 64 alternative DCMs, which embodied different mappings between the probability of an anatomical connection and the prior variance of the corresponding of effective connectivity, and fitted them to empirical fMRI data from 12 healthy subjects. Using Bayesian model selection, we show that the best model is one in which anatomical probability increases the prior variance of effective connectivity parameters in a nonlinear and monotonic (sigmoidal) fashion. This means that the higher the likelihood that a given connection exists anatomically, the larger one should set the prior variance of the corresponding coupling parameter; hence making it easier for the parameter to deviate from zero and represent a strong effective connection. To our knowledge, this study provides the first formal evidence that probabilistic knowledge of anatomical connectivity can improve models of functional integration."
},
{
"pmid": "22108139",
"title": "A review of multivariate methods for multimodal fusion of brain imaging data.",
"abstract": "The development of various neuroimaging techniques is rapidly improving the measurements of brain function/structure. However, despite improvements in individual modalities, it is becoming increasingly clear that the most effective research approaches will utilize multi-modal fusion, which takes advantage of the fact that each modality provides a limited view of the brain. The goal of multi-modal fusion is to capitalize on the strength of each modality in a joint analysis, rather than a separate analysis of each. This is a more complicated endeavor that must be approached more carefully and efficient methods should be developed to draw generalized and valid conclusions from high dimensional data with a limited number of subjects. Numerous research efforts have been reported in the field based on various statistical approaches, e.g. independent component analysis (ICA), canonical correlation analysis (CCA) and partial least squares (PLS). In this review paper, we survey a number of multivariate methods appearing in previous multimodal fusion reports, mostly fMRI with other modality, which were performed with or without prior information. A table for comparing optimization assumptions, purpose of the analysis, the need of priors, dimension reduction strategies and input data types is provided, which may serve as a valuable reference that helps readers understand the trade-offs of the 7 methods comprehensively. Finally, we evaluate 3 representative methods via simulation and give some suggestions on how to select an appropriate method based on a given research."
},
{
"pmid": "24102130",
"title": "ParceLiNGAM: a causal ordering method robust against latent confounders.",
"abstract": "We consider learning a causal ordering of variables in a linear nongaussian acyclic model called LiNGAM. Several methods have been shown to consistently estimate a causal ordering assuming that all the model assumptions are correct. But the estimation results could be distorted if some assumptions are violated. In this letter, we propose a new algorithm for learning causal orders that is robust against one typical violation of the model assumptions: latent confounders. The key idea is to detect latent confounders by testing independence between estimated external influences and find subsets (parcels) that include variables unaffected by latent confounders. We demonstrate the effectiveness of our method using artificial data and simulated brain imaging data."
},
{
"pmid": "19235882",
"title": "Functionally linked resting-state networks reflect the underlying structural connectivity architecture of the human brain.",
"abstract": "During rest, multiple cortical brain regions are functionally linked forming resting-state networks. This high level of functional connectivity within resting-state networks suggests the existence of direct neuroanatomical connections between these functionally linked brain regions to facilitate the ongoing interregional neuronal communication. White matter tracts are the structural highways of our brain, enabling information to travel quickly from one brain region to another region. In this study, we examined both the functional and structural connections of the human brain in a group of 26 healthy subjects, combining 3 Tesla resting-state functional magnetic resonance imaging time-series with diffusion tensor imaging scans. Nine consistently found functionally linked resting-state networks were retrieved from the resting-state data. The diffusion tensor imaging scans were used to reconstruct the white matter pathways between the functionally linked brain areas of these resting-state networks. Our results show that well-known anatomical white matter tracts interconnect at least eight of the nine commonly found resting-state networks, including the default mode network, the core network, primary motor and visual network, and two lateralized parietal-frontal networks. Our results suggest that the functionally linked resting-state networks reflect the underlying structural connectivity architecture of the human brain."
},
{
"pmid": "17133390",
"title": "Altered functional connectivity in early Alzheimer's disease: a resting-state fMRI study.",
"abstract": "Previous studies have led to the proposal that patients with Alzheimer's disease (AD) may have disturbed functional connectivity between different brain regions. Furthermore, recent resting-state functional magnetic resonance imaging (fMRI) studies have also shown that low-frequency (<0.08 Hz) fluctuations (LFF) of the blood oxygenation level-dependent signals were abnormal in several brain areas of AD patients. However, few studies have investigated disturbed LFF connectivity in AD patients. By using resting-state fMRI, this study sought to investigate the abnormal functional connectivities throughout the entire brain of early AD patients, and analyze the global distribution of these abnormalities. For this purpose, the authors divided the whole brain into 116 regions and identified abnormal connectivities by comparing the correlation coefficients of each pair. Compared with healthy controls, AD patients had decreased positive correlations between the prefrontal and parietal lobes, but increased positive correlations within the prefrontal lobe, parietal lobe, and occipital lobe. The AD patients also had decreased negative correlations (closer to zero) between two intrinsically anti-correlated networks that had previously been found in the resting brain. By using resting-state fMRI, our results supported previous studies that have reported an anterior-posterior disconnection phenomenon and increased within-lobe functional connectivity in AD patients. In addition, the results also suggest that AD may disturb the correlation/anti-correlation effect in the two intrinsically anti-correlated networks."
},
{
"pmid": "23422254",
"title": "A blind deconvolution approach to recover effective connectivity brain networks from resting state fMRI data.",
"abstract": "A great improvement to the insight on brain function that we can get from fMRI data can come from effective connectivity analysis, in which the flow of information between even remote brain regions is inferred by the parameters of a predictive dynamical model. As opposed to biologically inspired models, some techniques as Granger causality (GC) are purely data-driven and rely on statistical prediction and temporal precedence. While powerful and widely applicable, this approach could suffer from two main limitations when applied to BOLD fMRI data: confounding effect of hemodynamic response function (HRF) and conditioning to a large number of variables in presence of short time series. For task-related fMRI, neural population dynamics can be captured by modeling signal dynamics with explicit exogenous inputs; for resting-state fMRI on the other hand, the absence of explicit inputs makes this task more difficult, unless relying on some specific prior physiological hypothesis. In order to overcome these issues and to allow a more general approach, here we present a simple and novel blind-deconvolution technique for BOLD-fMRI signal. In a recent study it has been proposed that relevant information in resting-state fMRI can be obtained by inspecting the discrete events resulting in relatively large amplitude BOLD signal peaks. Following this idea, we consider resting fMRI as 'spontaneous event-related', we individuate point processes corresponding to signal fluctuations with a given signature, extract a region-specific HRF and use it in deconvolution, after following an alignment procedure. Coming to the second limitation, a fully multivariate conditioning with short and noisy data leads to computational problems due to overfitting. Furthermore, conceptual issues arise in presence of redundancy. We thus apply partial conditioning to a limited subset of variables in the framework of information theory, as recently proposed. Mixing these two improvements we compare the differences between BOLD and deconvolved BOLD level effective networks and draw some conclusions."
},
{
"pmid": "28559793",
"title": "Initial Validation for the Estimation of Resting-State fMRI Effective Connectivity by a Generalization of the Correlation Approach.",
"abstract": "Resting-state functional MRI (rs-fMRI) is widely used to noninvasively study human brain networks. Network functional connectivity is often estimated by calculating the timeseries correlation between blood-oxygen-level dependent (BOLD) signal from different regions of interest (ROIs). However, standard correlation cannot characterize the direction of information flow between regions. In this paper, we introduce and test a new concept, prediction correlation, to estimate effective connectivity in functional brain networks from rs-fMRI. In this approach, the correlation between two BOLD signals is replaced by a correlation between one BOLD signal and a prediction of this signal via a causal system driven by another BOLD signal. Three validations are described: (1) Prediction correlation performed well on simulated data where the ground truth was known, and outperformed four other methods. (2) On simulated data designed to display the \"common driver\" problem, prediction correlation did not introduce false connections between non-interacting driven ROIs. (3) On experimental data, prediction correlation recovered the previously identified network organization of human brain. Prediction correlation scales well to work with hundreds of ROIs, enabling it to assess whole brain interregional connectivity at the single subject level. These results provide an initial validation that prediction correlation can capture the direction of information flow and estimate the duration of extended temporal delays in information flow between regions of interest ROIs based on BOLD signal. This approach not only maintains the high sensitivity to network connectivity provided by the correlation analysis, but also performs well in the estimation of causal information flow in the brain."
},
{
"pmid": "25750621",
"title": "A multimodal approach for determining brain networks by jointly modeling functional and structural connectivity.",
"abstract": "Recent innovations in neuroimaging technology have provided opportunities for researchers to investigate connectivity in the human brain by examining the anatomical circuitry as well as functional relationships between brain regions. Existing statistical approaches for connectivity generally examine resting-state or task-related functional connectivity (FC) between brain regions or separately examine structural linkages. As a means to determine brain networks, we present a unified Bayesian framework for analyzing FC utilizing the knowledge of associated structural connections, which extends an approach by Patel et al. (2006a) that considers only functional data. We introduce an FC measure that rests upon assessments of functional coherence between regional brain activity identified from functional magnetic resonance imaging (fMRI) data. Our structural connectivity (SC) information is drawn from diffusion tensor imaging (DTI) data, which is used to quantify probabilities of SC between brain regions. We formulate a prior distribution for FC that depends upon the probability of SC between brain regions, with this dependence adhering to structural-functional links revealed by our fMRI and DTI data. We further characterize the functional hierarchy of functionally connected brain regions by defining an ascendancy measure that compares the marginal probabilities of elevated activity between regions. In addition, we describe topological properties of the network, which is composed of connected region pairs, by performing graph theoretic analyses. We demonstrate the use of our Bayesian model using fMRI and DTI data from a study of auditory processing. We further illustrate the advantages of our method by comparisons to methods that only incorporate functional information."
},
{
"pmid": "26731636",
"title": "Learning Discriminative Bayesian Networks from High-Dimensional Continuous Neuroimaging Data.",
"abstract": "Due to its causal semantics, Bayesian networks (BN) have been widely employed to discover the underlying data relationship in exploratory studies, such as brain research. Despite its success in modeling the probability distribution of variables, BN is naturally a generative model, which is not necessarily discriminative. This may cause the ignorance of subtle but critical network changes that are of investigation values across populations. In this paper, we propose to improve the discriminative power of BN models for continuous variables from two different perspectives. This brings two general discriminative learning frameworks for Gaussian Bayesian networks (GBN). In the first framework, we employ Fisher kernel to bridge the generative models of GBN and the discriminative classifiers of SVMs, and convert the GBN parameter learning to Fisher kernel learning via minimizing a generalization error bound of SVMs. In the second framework, we employ the max-margin criterion and build it directly upon GBN models to explicitly optimize the classification performance of the GBNs. The advantages and disadvantages of the two frameworks are discussed and experimentally compared. Both of them demonstrate strong power in learning discriminative parameters of GBNs for neuroimaging based brain network analysis, as well as maintaining reasonable representation capacity. The contributions of this paper also include a new Directed Acyclic Graph (DAG) constraint with theoretical guarantee to ensure the graph validity of GBN."
},
{
"pmid": "24103849",
"title": "Fusing DTI and fMRI data: a survey of methods and applications.",
"abstract": "The relationship between brain structure and function has been one of the centers of research in neuroimaging for decades. In recent years, diffusion tensor imaging (DTI) and functional magnetic resonance imaging (fMRI) techniques have been widely available and popular in cognitive and clinical neurosciences for examining the brain's white matter (WM) micro-structures and gray matter (GM) functions, respectively. Given the intrinsic integration of WM/GM and the complementary information embedded in DTI/fMRI data, it is natural and well-justified to combine these two neuroimaging modalities together to investigate brain structure and function and their relationships simultaneously. In the past decade, there have been remarkable achievements of DTI/fMRI fusion methods and applications in neuroimaging and human brain mapping community. This survey paper aims to review recent advancements on methodologies and applications in incorporating multimodal DTI and fMRI data, and offer our perspectives on future research directions. We envision that effective fusion of DTI/fMRI techniques will play increasingly important roles in neuroimaging and brain sciences in the years to come."
},
{
"pmid": "27247279",
"title": "Changes of intranetwork and internetwork functional connectivity in Alzheimer's disease and mild cognitive impairment.",
"abstract": "OBJECTIVE\nAlzheimer's disease (AD) is a serious neurodegenerative disorder characterized by deficits of working memory, attention, language and many other cognitive functions. Although different stages of the disease are relatively well characterized by clinical criteria, stage-specific pathological changes in the brain remain relatively poorly understood, especially at the level of large-scale functional networks. In this study, we aimed to characterize the potential disruptions of large-scale functional brain networks based on a sample including amnestic mild cognition impairment (aMCI) and AD patients to help delineate the underlying stage-dependent AD pathology.\n\n\nAPPROACH\nWe sought to identify the neural connectivity mechanisms of aMCI and AD through examination of both intranetwork and internetwork interactions among four of the brain's key networks, namely dorsal attention network (DAN), default mode network (DMN), executive control network (ECN) and salience network (SAL). We analyzed functional connectivity based on resting-state functional magnetic resonance imaging (rs-fMRI) data from 25 Alzheimer's disease patients, 20 aMCI patients and 35 elderly normal controls (NC).\n\n\nMAIN RESULTS\nIntranetwork functional disruptions within the DAN and ECN were detected in both aMCI and AD patients. Disrupted intranetwork connectivity of DMN and anti-correlation between DAN and DMN were observed in AD patients. Moreover, aMCI-specific alterations in the internetwork functional connectivity of SAL were observed.\n\n\nSIGNIFICANCE\nOur results confirmed previous findings that AD pathology was related to dysconnectivity both within and between resting-state networks but revealed more spatial details. Moreover, the SAL network, reportedly flexibly coupling either with the DAN or DMN networks during different brain states, demonstrated interesting alterations specifically in the early stage of the disease."
}
] |
Biomolecules | 31717703 | PMC6921016 | 10.3390/biom9110656 | A Computational Framework for Predicting Direct Contacts and Substructures within Protein Complexes | Understanding the physical arrangement of subunits within protein complexes potentially provides valuable clues about how the subunits work together and how the complexes function. The majority of recent research focuses on identifying protein complexes as a whole and seldom studies the inner structures within complexes. In this study, we propose a computational framework to predict direct contacts and substructures within protein complexes. In this framework, we first train a supervised learning model of l2-regularized logistic regression to learn the patterns of direct and indirect interactions within complexes, from which physical subunit interaction networks are predicted. Then, to infer substructures within complexes, we apply a graph clustering method (i.e., maximum modularity clustering (MMC)) and a gene ontology (GO) semantic similarity based functional clustering on partially- and fully-connected networks, respectively. Computational results show that the proposed framework achieves fairly good performance in both cross-validation and independent tests in terms of detecting direct contacts between subunits. Functional analyses further demonstrate the rationality of partitioning the subunits into substructures via the MMC algorithm and functional clustering. | 3.3. Comparison with the Related Work 3.3.1. Predicting Physical Interactions within Complexes To our knowledge, there are only two studies on inferring direct contacts and substructures within complexes [11,12]. Both methods first predict physical subunit interactions within complexes. Unlike the proposed framework, the two methods [11,12] use interactome-scale physical protein–protein interactions as positive training data to reconstruct genome-scale physical PPIs, which are then mapped onto complexes to infer direct contacts between subunits. However, the patterns of direct and indirect interactions within complexes are potentially quite different. In the proposed framework, the direct and indirect interactions in the training data were both restricted to within complexes, so that the trained model is more biologically sound and interpretable. The two methods [11,12] do not provide cross-validation performance metrics such as precision, recall, MCC and AUC scores, nor do they report the performance of an independent test. Friedel et al. [12] report a 49.1% true positive rate at a 13.6% false positive rate. As shown in Figure 2A, the proposed framework achieved nearly an 80% true positive rate at a 13.6% false positive rate. This result shows that the proposed framework outperforms the related work in identifying direct contacts within complexes. 3.3.2. Inferring Substructures within Complexes The two related studies [11,12] divide the direct-contact subunits into sub-complexes without considering the hierarchical or overlapping substructures within complexes. Similar to complex identification, sub-complex discovery also requires sophisticated graph clustering techniques. For fully-connected complexes with connection degrees equal to or very close to one, the two related studies [11,12] cannot identify the inner substructures, whereas the proposed framework explicitly solves this problem via GO semantic similarity based functional clustering.
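The supervised step described above, separating direct from indirect subunit interactions with l2-regularized logistic regression, can be sketched roughly as follows. This is a minimal illustration only: the feature matrix, labels and hyper-parameters below are placeholders, not the actual pair features or settings used by the framework.

```python
# Minimal sketch (not the paper's implementation): l2-regularized logistic
# regression separating direct from indirect subunit pairs within complexes.
# X and y are random placeholders standing in for pre-computed pair features
# and direct/indirect labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import roc_auc_score, matthews_corrcoef

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))       # one row per within-complex subunit pair
y = rng.integers(0, 2, size=1000)     # 1 = direct contact, 0 = indirect interaction

clf = LogisticRegression(penalty="l2", C=1.0, solver="liblinear",
                         class_weight="balanced")
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
prob = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]

print("cross-validated AUC:", roc_auc_score(y, prob))
print("cross-validated MCC:", matthews_corrcoef(y, (prob > 0.5).astype(int)))
```

The reported cross-validated AUC and MCC correspond to the kinds of metrics discussed in the comparison above; with real pair features, the predicted probabilities would then be thresholded to build the physical subunit interaction network.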
To our knowledge, no experimentally verified sub-complexes are available to evaluate the performance of the proposed framework and the related methods. Nevertheless, we still compared the maximum modularity clustering method (MMC) [17] used by the proposed framework with the well-accepted Markov clustering (MCL) method [16] on the complexes from CORUM [4] and HPRD [5]. We first binarized the complexes from CORUM [4] and HPRD [5] into co-complex networks and then compared MMC with MCL to find out which method could best recover the known complexes from the co-complex networks. As shown in Table 2, 11.71% and 11.78% of the reference complexes from CORUM [4] and HPRD [5] were exactly predicted by MMC [17] (ξ=1, recall metric), respectively; and 32.57% and 16.67% of the predicted clusters exactly matched the reference complexes from CORUM [4] and HPRD [5] (ξ=1, precision metric), respectively. However, MCL [16] predicted at most 1.16% of the reference complexes from CORUM [4] and HPRD [5] and yielded a large number of singleton clusters, accounting for at least 50% of all clusters. If the Jaccard index threshold ξ was set to 0.5, 52.34% and 54.26% of the reference complexes from CORUM [4] and HPRD [5] matched the predicted clusters (ξ=0.5, recall metric), respectively; and 80.99% and 77.68% of the predicted clusters matched the reference complexes from CORUM [4] and HPRD [5] (ξ=0.5, precision metric), respectively. These results showed that the MMC method [17] outperformed the commonly used MCL method [16] and was a good solution for identifying substructures within complexes (a minimal sketch of this Jaccard-based matching is given after the reference list below). | [
"16429126",
"17344885",
"14681354",
"19884131",
"18988627",
"20482850",
"26656494",
"25913176",
"29023445",
"19505940",
"18829707",
"25428363",
"24234451",
"26476454",
"22057159",
"9254694",
"12520024",
"18957448",
"27117309",
"27858158"
] | [
{
"pmid": "16429126",
"title": "Proteome survey reveals modularity of the yeast cell machinery.",
"abstract": "Protein complexes are key molecular entities that integrate multiple gene products to perform cellular functions. Here we report the first genome-wide screen for complexes in an organism, budding yeast, using affinity purification and mass spectrometry. Through systematic tagging of open reading frames (ORFs), the majority of complexes were purified several times, suggesting screen saturation. The richness of the data set enabled a de novo characterization of the composition and organization of the cellular machinery. The ensemble of cellular proteins partitions into 491 complexes, of which 257 are novel, that differentially combine with additional attachment proteins or protein modules to enable a diversification of potential functions. Support for this modular organization of the proteome comes from integration with available data on expression, localization, function, evolutionary conservation, protein structure and binary interactions. This study provides the largest collection of physically determined eukaryotic cellular machines so far and a platform for biological data integration and modelling."
},
{
"pmid": "17344885",
"title": "A human phenome-interactome network of protein complexes implicated in genetic disorders.",
"abstract": "We performed a systematic, large-scale analysis of human protein complexes comprising gene products implicated in many different categories of human disease to create a phenome-interactome network. This was done by integrating quality-controlled interactions of human proteins with a validated, computationally derived phenotype similarity score, permitting identification of previously unknown complexes likely to be associated with disease. Using a phenomic ranking of protein complexes linked to human disease, we developed a Bayesian predictor that in 298 of 669 linkage intervals correctly ranks the known disease-causing protein as the top candidate, and in 870 intervals with no identified disease-causing gene, provides novel candidates implicated in disorders such as retinitis pigmentosa, epithelial ovarian cancer, inflammatory bowel disease, amyotrophic lateral sclerosis, Alzheimer disease, type 2 diabetes and coronary heart disease. Our publicly available draft of protein complexes associated with pathology comprises 506 complexes, which reveal functional relationships between disease-promoting genes that will inform future experimentation."
},
{
"pmid": "14681354",
"title": "MIPS: analysis and annotation of proteins from whole genomes.",
"abstract": "The Munich Information Center for Protein Sequences (MIPS-GSF), Neuherberg, Germany, provides protein sequence-related information based on whole-genome analysis. The main focus of the work is directed toward the systematic organization of sequence-related attributes as gathered by a variety of algorithms, primary information from experimental data together with information compiled from the scientific literature. MIPS maintains automatically generated and manually annotated genome-specific databases, develops systematic classification schemes for the functional annotation of protein sequences and provides tools for the comprehensive analysis of protein sequences. This report updates the information on the yeast genome (CYGD), the Neurospora crassa genome (MNCDB), the database of complete cDNAs (German Human Genome Project, NGFN), the database of mammalian protein-protein interactions (MPPI), the database of FASTA homologies (SIMAP), and the interface for the fast retrieval of protein-associated information (QUIPOS). The Arabidopsis thaliana database, the rice database, the plant EST databases (MATDB, MOsDB, SPUTNIK), as well as the databases for the comprehensive set of genomes (PEDANT genomes) are described elsewhere in the 2003 and 2004 NAR database issues, respectively. All databases described, and the detailed descriptions of our projects can be accessed through the MIPS web server (http://mips.gsf.de)."
},
{
"pmid": "19884131",
"title": "CORUM: the comprehensive resource of mammalian protein complexes--2009.",
"abstract": "CORUM is a database that provides a manually curated repository of experimentally characterized protein complexes from mammalian organisms, mainly human (64%), mouse (16%) and rat (12%). Protein complexes are key molecular entities that integrate multiple gene products to perform cellular functions. The new CORUM 2.0 release encompasses 2837 protein complexes offering the largest and most comprehensive publicly available dataset of mammalian protein complexes. The CORUM dataset is built from 3198 different genes, representing approximately 16% of the protein coding genes in humans. Each protein complex is described by a protein complex name, subunit composition, function as well as the literature reference that characterizes the respective protein complex. Recent developments include mapping of functional annotation to Gene Ontology terms as well as cross-references to Entrez Gene identifiers. In addition, a 'Phylogenetic Conservation' analysis tool was implemented that analyses the potential occurrence of orthologous protein complex subunits in mammals and other selected groups of organisms. This allows one to predict the occurrence of protein complexes in different phylogenetic groups. CORUM is freely accessible at (http://mips.helmholtz-muenchen.de/genre/proj/corum/index.html)."
},
{
"pmid": "18988627",
"title": "Human Protein Reference Database--2009 update.",
"abstract": "Human Protein Reference Database (HPRD--http://www.hprd.org/), initially described in 2003, is a database of curated proteomic information pertaining to human proteins. We have recently added a number of new features in HPRD. These include PhosphoMotif Finder, which allows users to find the presence of over 320 experimentally verified phosphorylation motifs in proteins of interest. Another new feature is a protein distributed annotation system--Human Proteinpedia (http://www.humanproteinpedia.org/)--through which laboratories can submit their data, which is mapped onto protein entries in HPRD. Over 75 laboratories involved in proteomics research have already participated in this effort by submitting data for over 15,000 human proteins. The submitted data includes mass spectrometry and protein microarray-derived data, among other data types. Finally, HPRD is also linked to a compendium of human signaling pathways developed by our group, NetPath (http://www.netpath.org/), which currently contains annotations for several cancer and immune signaling pathways. Since the last update, more than 5500 new protein sequences have been added, making HPRD a comprehensive resource for studying the human proteome."
},
{
"pmid": "20482850",
"title": "A human functional protein interaction network and its application to cancer data analysis.",
"abstract": "BACKGROUND\nOne challenge facing biologists is to tease out useful information from massive data sets for further analysis. A pathway-based analysis may shed light by projecting candidate genes onto protein functional relationship networks. We are building such a pathway-based analysis system.\n\n\nRESULTS\nWe have constructed a protein functional interaction network by extending curated pathways with non-curated sources of information, including protein-protein interactions, gene coexpression, protein domain interaction, Gene Ontology (GO) annotations and text-mined protein interactions, which cover close to 50% of the human proteome. By applying this network to two glioblastoma multiforme (GBM) data sets and projecting cancer candidate genes onto the network, we found that the majority of GBM candidate genes form a cluster and are closer than expected by chance, and the majority of GBM samples have sequence-altered genes in two network modules, one mainly comprising genes whose products are localized in the cytoplasm and plasma membrane, and another comprising gene products in the nucleus. Both modules are highly enriched in known oncogenes, tumor suppressors and genes involved in signal transduction. Similar network patterns were also found in breast, colorectal and pancreatic cancers.\n\n\nCONCLUSIONS\nWe have built a highly reliable functional interaction network upon expert-curated pathways and applied this network to the analysis of two genome-wide GBM and several other cancer data sets. The network patterns revealed from our results suggest common mechanisms in the cancer biology. Our system should provide a foundation for a network or pathway-based analysis platform for cancer and other diseases."
},
{
"pmid": "26656494",
"title": "The Reactome pathway Knowledgebase.",
"abstract": "The Reactome Knowledgebase (www.reactome.org) provides molecular details of signal transduction, transport, DNA replication, metabolism and other cellular processes as an ordered network of molecular transformations-an extended version of a classic metabolic map, in a single consistent data model. Reactome functions both as an archive of biological processes and as a tool for discovering unexpected functional relationships in data such as gene expression pattern surveys or somatic mutation catalogues from tumour cells. Over the last two years we redeveloped major components of the Reactome web interface to improve usability, responsiveness and data visualization. A new pathway diagram viewer provides a faster, clearer interface and smooth zooming from the entire reaction network to the details of individual reactions. Tool performance for analysis of user datasets has been substantially improved, now generating detailed results for genome-wide expression datasets within seconds. The analysis module can now be accessed through a RESTFul interface, facilitating its inclusion in third party applications. A new overview module allows the visualization of analysis results on a genome-wide Reactome pathway hierarchy using a single screen page. The search interface now provides auto-completion as well as a faceted search to narrow result lists efficiently."
},
{
"pmid": "25913176",
"title": "Methods for protein complex prediction and their contributions towards understanding the organisation, function and dynamics of complexes.",
"abstract": "Complexes of physically interacting proteins constitute fundamental functional units responsible for driving biological processes within cells. A faithful reconstruction of the entire set of complexes is therefore essential to understand the functional organisation of cells. In this review, we discuss the key contributions of computational methods developed till date (approximately between 2003 and 2015) for identifying complexes from the network of interacting proteins (PPI network). We evaluate in depth the performance of these methods on PPI datasets from yeast, and highlight their limitations and challenges, in particular at detecting sparse and small or sub-complexes and discerning overlapping complexes. We describe methods for integrating diverse information including expression profiles and 3D structures of proteins with PPI networks to understand the dynamics of complex formation, for instance, of time-based assembly of complex subunits and formation of fuzzy complexes from intrinsically disordered proteins. Finally, we discuss methods for identifying dysfunctional complexes in human diseases, an application that is proving invaluable to understand disease mechanisms and to discover novel therapeutic targets. We hope this review aptly commemorates a decade of research on computational prediction of complexes and constitutes a valuable reference for further advancements in this exciting area."
},
{
"pmid": "29023445",
"title": "Identifying direct contacts between protein complex subunits from their conditional dependence in proteomics datasets.",
"abstract": "Determining the three dimensional arrangement of proteins in a complex is highly beneficial for uncovering mechanistic function and interpreting genetic variation in coding genes comprising protein complexes. There are several methods for determining co-complex interactions between proteins, among them co-fractionation / mass spectrometry (CF-MS), but it remains difficult to identify directly contacting subunits within a multi-protein complex. Correlation analysis of CF-MS profiles shows promise in detecting protein complexes as a whole but is limited in its ability to infer direct physical contacts among proteins in sub-complexes. To identify direct protein-protein contacts within human protein complexes we learn a sparse conditional dependency graph from approximately 3,000 CF-MS experiments on human cell lines. We show substantial performance gains in estimating direct interactions compared to correlation analysis on a benchmark of large protein complexes with solved three-dimensional structures. We demonstrate the method's value in determining the three dimensional arrangement of proteins by making predictions for complexes without known structure (the exocyst and tRNA multi-synthetase complex) and by establishing evidence for the structural position of a recently discovered component of the core human EKC/KEOPS complex, GON7/C14ORF142, providing a more complete 3D model of the complex. Direct contact prediction provides easily calculable additional structural information for large-scale protein complex mapping studies and should be broadly applicable across organisms as more CF-MS datasets become available."
},
{
"pmid": "19505940",
"title": "Identifying the topology of protein complexes from affinity purification assays.",
"abstract": "MOTIVATION\nRecent advances in high-throughput technologies have made it possible to investigate not only individual protein interactions, but also the association of these proteins in complexes. So far the focus has been on the prediction of complexes as sets of proteins from the experimental results. The modular substructure and the physical interactions within the protein complexes have been mostly ignored.\n\n\nRESULTS\nWe present an approach for identifying the direct physical interactions and the subcomponent structure of protein complexes predicted from affinity purification assays. Our algorithm calculates the union of all maximum spanning trees from scoring networks for each protein complex to extract relevant interactions. In a subsequent step this network is extended to interactions which are not accounted for by alternative indirect paths. We show that the interactions identified with this approach are more accurate in predicting experimentally derived physical interactions than baseline approaches. Based on these networks, the subcomponent structure of the complexes can be resolved more satisfactorily and subcomplexes can be identified. The usefulness of our method is illustrated on the RNA polymerases for which the modular substructure can be successfully reconstructed.\n\n\nAVAILABILITY\nA Java implementation of the prediction methods and supplementary material are available at http://www.bio.ifi.lmu.de/Complexes/Substructures/."
},
{
"pmid": "18829707",
"title": "Physical protein-protein interactions predicted from microarrays.",
"abstract": "MOTIVATION\nMicroarray expression data reveal functionally associated proteins. However, most proteins that are associated are not actually in direct physical contact. Predicting physical interactions directly from microarrays is both a challenging and important task that we addressed by developing a novel machine learning method optimized for this task.\n\n\nRESULTS\nWe validated our support vector machine-based method on several independent datasets. At the same levels of accuracy, our method recovered more experimentally observed physical interactions than a conventional correlation-based approach. Pairs predicted by our method to very likely interact were close in the overall network of interaction, suggesting our method as an aid for functional annotation. We applied the method to predict interactions in yeast (Saccharomyces cerevisiae). A Gene Ontology function annotation analysis and literature search revealed several probable and novel predictions worthy of future experimental validation. We therefore hope our new method will improve the annotation of interactions as one component of multi-source integrated systems.\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "25428363",
"title": "The BioGRID interaction database: 2015 update.",
"abstract": "The Biological General Repository for Interaction Datasets (BioGRID: http://thebiogrid.org) is an open access database that houses genetic and protein interactions curated from the primary biomedical literature for all major model organism species and humans. As of September 2014, the BioGRID contains 749,912 interactions as drawn from 43,149 publications that represent 30 model organisms. This interaction count represents a 50% increase compared to our previous 2013 BioGRID update. BioGRID data are freely distributed through partner model organism databases and meta-databases and are directly downloadable in a variety of formats. In addition to general curation of the published literature for the major model species, BioGRID undertakes themed curation projects in areas of particular relevance for biomedical sciences, such as the ubiquitin-proteasome system and various human disease-associated interaction networks. BioGRID curation is coordinated through an Interaction Management System (IMS) that facilitates the compilation interaction records through structured evidence codes, phenotype ontologies, and gene annotation. The BioGRID architecture has been improved in order to support a broader range of interaction and post-translational modification types, to allow the representation of more complex multi-gene/protein interactions, to account for cellular phenotypes through structured ontologies, to expedite curation through semi-automated text-mining approaches, and to enhance curation quality control."
},
{
"pmid": "24234451",
"title": "The MIntAct project--IntAct as a common curation platform for 11 molecular interaction databases.",
"abstract": "IntAct (freely available at http://www.ebi.ac.uk/intact) is an open-source, open data molecular interaction database populated by data either curated from the literature or from direct data depositions. IntAct has developed a sophisticated web-based curation tool, capable of supporting both IMEx- and MIMIx-level curation. This tool is now utilized by multiple additional curation teams, all of whom annotate data directly into the IntAct database. Members of the IntAct team supply appropriate levels of training, perform quality control on entries and take responsibility for long-term data maintenance. Recently, the MINT and IntAct databases decided to merge their separate efforts to make optimal use of limited developer resources and maximize the curation output. All data manually curated by the MINT curators have been moved into the IntAct database at EMBL-EBI and are merged with the existing IntAct dataset. Both IntAct and MINT are active contributors to the IMEx consortium (http://www.imexconsortium.org)."
},
{
"pmid": "26476454",
"title": "KEGG as a reference resource for gene and protein annotation.",
"abstract": "KEGG (http://www.kegg.jp/ or http://www.genome.jp/kegg/) is an integrated database resource for biological interpretation of genome sequences and other high-throughput data. Molecular functions of genes and proteins are associated with ortholog groups and stored in the KEGG Orthology (KO) database. The KEGG pathway maps, BRITE hierarchies and KEGG modules are developed as networks of KO nodes, representing high-level functions of the cell and the organism. Currently, more than 4000 complete genomes are annotated with KOs in the KEGG GENES database, which can be used as a reference data set for KO assignment and subsequent reconstruction of KEGG pathways and other molecular networks. As an annotation resource, the following improvements have been made. First, each KO record is re-examined and associated with protein sequence data used in experiments of functional characterization. Second, the GENES database now includes viruses, plasmids, and the addendum category for functionally characterized proteins that are not represented in complete genomes. Third, new automatic annotation servers, BlastKOALA and GhostKOALA, are made available utilizing the non-redundant pangenome data set generated from the GENES database. As a resource for translational bioinformatics, various data sets are created for antimicrobial resistance and drug interaction networks."
},
{
"pmid": "22057159",
"title": "Gene Ontology-driven inference of protein-protein interactions using inducers.",
"abstract": "MOTIVATION\nProtein-protein interactions (PPIs) are pivotal for many biological processes and similarity in Gene Ontology (GO) annotation has been found to be one of the strongest indicators for PPI. Most GO-driven algorithms for PPI inference combine machine learning and semantic similarity techniques. We introduce the concept of inducers as a method to integrate both approaches more effectively, leading to superior prediction accuracies.\n\n\nRESULTS\nAn inducer (ULCA) in combination with a Random Forest classifier compares favorably to several sequence-based methods, semantic similarity measures and multi-kernel approaches. On a newly created set of high-quality interaction data, the proposed method achieves high cross-species prediction accuracies (Area under the ROC curve ≤ 0.88), rendering it a valuable companion to sequence-based methods.\n\n\nAVAILABILITY\nSoftware and datasets are available at http://bioinformatics.org.au/go2ppi/\n\n\nCONTACT\[email protected]."
},
{
"pmid": "9254694",
"title": "Gapped BLAST and PSI-BLAST: a new generation of protein database search programs.",
"abstract": "The BLAST programs are widely used tools for searching protein and DNA databases for sequence similarities. For protein comparisons, a variety of definitional, algorithmic and statistical refinements described here permits the execution time of the BLAST programs to be decreased substantially while enhancing their sensitivity to weak similarities. A new criterion for triggering the extension of word hits, combined with a new heuristic for generating gapped alignments, yields a gapped BLAST program that runs at approximately three times the speed of the original. In addition, a method is introduced for automatically combining statistically significant alignments produced by BLAST into a position-specific score matrix, and searching the database using this matrix. The resulting Position-Specific Iterated BLAST (PSI-BLAST) program runs at approximately the same speed per iteration as gapped BLAST, but in many cases is much more sensitive to weak but biologically relevant sequence similarities. PSI-BLAST is used to uncover several new and interesting members of the BRCT superfamily."
},
{
"pmid": "12520024",
"title": "The SWISS-PROT protein knowledgebase and its supplement TrEMBL in 2003.",
"abstract": "The SWISS-PROT protein knowledgebase (http://www.expasy.org/sprot/ and http://www.ebi.ac.uk/swissprot/) connects amino acid sequences with the current knowledge in the Life Sciences. Each protein entry provides an interdisciplinary overview of relevant information by bringing together experimental results, computed features and sometimes even contradictory conclusions. Detailed expertise that goes beyond the scope of SWISS-PROT is made available via direct links to specialised databases. SWISS-PROT provides annotated entries for all species, but concentrates on the annotation of entries from human (the HPI project) and other model organisms to ensure the presence of high quality annotation for representative members of all protein families. Part of the annotation can be transferred to other family members, as is already done for microbes by the High-quality Automated and Manual Annotation of microbial Proteomes (HAMAP) project. Protein families and groups of proteins are regularly reviewed to keep up with current scientific findings. Complementarily, TrEMBL strives to comprise all protein sequences that are not yet represented in SWISS-PROT, by incorporating a perpetually increasing level of mostly automated annotation. Researchers are welcome to contribute their knowledge to the scientific community by submitting relevant findings to SWISS-PROT at [email protected]."
},
{
"pmid": "18957448",
"title": "The GOA database in 2009--an integrated Gene Ontology Annotation resource.",
"abstract": "The Gene Ontology Annotation (GOA) project at the EBI (http://www.ebi.ac.uk/goa) provides high-quality electronic and manual associations (annotations) of Gene Ontology (GO) terms to UniProt Knowledgebase (UniProtKB) entries. Annotations created by the project are collated with annotations from external databases to provide an extensive, publicly available GO annotation resource. Currently covering over 160 000 taxa, with greater than 32 million annotations, GOA remains the largest and most comprehensive open-source contributor to the GO Consortium (GOC) project. Over the last five years, the group has augmented the number and coverage of their electronic pipelines and a number of new manual annotation projects and collaborations now further enhance this resource. A range of files facilitate the download of annotations for particular species, and GO term information and associated annotations can also be viewed and downloaded from the newly developed GOA QuickGO tool (http://www.ebi.ac.uk/QuickGO), which allows users to precisely tailor their annotation set."
},
{
"pmid": "27117309",
"title": "Protein-protein interaction inference based on semantic similarity of Gene Ontology terms.",
"abstract": "Identifying protein-protein interactions is important in molecular biology. Experimental methods to this issue have their limitations, and computational approaches have attracted more and more attentions from the biological community. The semantic similarity derived from the Gene Ontology (GO) annotation has been regarded as one of the most powerful indicators for protein interaction. However, conventional methods based on GO similarity fail to take advantage of the specificity of GO terms in the ontology graph. We proposed a GO-based method to predict protein-protein interaction by integrating different kinds of similarity measures derived from the intrinsic structure of GO graph. We extended five existing methods to derive the semantic similarity measures from the descending part of two GO terms in the GO graph, then adopted a feature integration strategy to combines both the ascending and the descending similarity scores derived from the three sub-ontologies to construct various kinds of features to characterize each protein pair. Support vector machines (SVM) were employed as discriminate classifiers, and five-fold cross validation experiments were conducted on both human and yeast protein-protein interaction datasets to evaluate the performance of different kinds of integrated features, the experimental results suggest the best performance of the feature that combines information from both the ascending and the descending parts of the three ontologies. Our method is appealing for effective prediction of protein-protein interaction."
},
{
"pmid": "27858158",
"title": "Structure of centromere chromatin: from nucleosome to chromosomal architecture.",
"abstract": "The centromere is essential for the segregation of chromosomes, as it serves as attachment site for microtubules to mediate chromosome segregation during mitosis and meiosis. In most organisms, the centromere is restricted to one chromosomal region that appears as primary constriction on the condensed chromosome and is partitioned into two chromatin domains: The centromere core is characterized by the centromere-specific histone H3 variant CENP-A (also called cenH3) and is required for specifying the centromere and for building the kinetochore complex during mitosis. This core region is generally flanked by pericentric heterochromatin, characterized by nucleosomes containing H3 methylated on lysine 9 (H3K9me) that are bound by heterochromatin proteins. During mitosis, these two domains together form a three-dimensional structure that exposes CENP-A-containing chromatin to the surface for interaction with the kinetochore and microtubules. At the same time, this structure supports the tension generated during the segregation of sister chromatids to opposite poles. In this review, we discuss recent insight into the characteristics of the centromere, from the specialized chromatin structures at the centromere core and the pericentromere to the three-dimensional organization of these regions that make up the functional centromere."
}
] |
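As flagged in the MMC-versus-MCL comparison above, the Jaccard-based matching of predicted clusters against reference complexes can be sketched as follows. The toy complexes and threshold values here are illustrative only and are not the CORUM or HPRD data.

```python
# Minimal sketch of the Jaccard-based matching used to compare predicted
# clusters against reference complexes (toy sets; threshold xi as in the text).
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def match_rates(predicted, reference, xi=0.5):
    # recall: fraction of reference complexes matched by some predicted cluster
    recall = sum(any(jaccard(r, p) >= xi for p in predicted) for r in reference) / len(reference)
    # precision: fraction of predicted clusters matching some reference complex
    precision = sum(any(jaccard(p, r) >= xi for r in reference) for p in predicted) / len(predicted)
    return precision, recall

reference = [{"A", "B", "C"}, {"D", "E"}, {"F", "G", "H", "I"}]
predicted = [{"A", "B", "C"}, {"F", "G", "H"}, {"X", "Y"}]
print(match_rates(predicted, reference, xi=1.0))   # exact matches (xi = 1)
print(match_rates(predicted, reference, xi=0.5))   # partial matches (xi = 0.5)
```

With xi = 1 only exact cluster-to-complex matches count, which corresponds to the "exactly predicted" figures quoted above; lowering xi to 0.5 credits partial overlaps in the same way as the second set of precision and recall figures.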
BMC Medical Informatics and Decision Making | 31856818 | PMC6921386 | 10.1186/s12911-019-0967-9 | EEG-based image classification via a region-level stacked bi-directional deep learning framework | Background: As a physiological signal, EEG data cannot be subjectively changed or hidden. Compared with other physiological signals, EEG signals are directly related to human cortical activities and have excellent temporal resolution. With the rapid development of machine learning and artificial intelligence, the analysis and computation of EEG signals have made great progress, leading to a significant boost in performance for content understanding and pattern recognition of brain activities in both neuroscience and computer vision. While this enormous advance has attracted a wide range of interest among the relevant research communities, EEG-based classification of brain activities evoked by images still demands further improvement with respect to accuracy, generalization, and interpretation, and some characteristics of human brains remain relatively unexplored. Methods: We propose a region-level stacked bi-directional deep learning framework for EEG-based image classification. Inspired by the hemispheric lateralization of human brains, we propose to extract additional information at the regional level to strengthen and emphasize the differences between the two hemispheres. Stacked bi-directional long short-term memories are used to capture the dynamic correlations hidden from both the past and the future to the current state in EEG sequences. Results: Extensive experiments are carried out and the results demonstrate the effectiveness of the proposed framework. Compared with the existing state of the art, our framework achieves outstanding performance in EEG-based classification of brain activities evoked by images. In addition, we find that signals in the Gamma band are not only useful for achieving good performance in EEG-based image classification, but also play a significant role in capturing relationships between neural activations and specific emotional states. Conclusions: Our proposed framework provides an improved solution to the problem that, given an image used to stimulate brain activity, we should be able to identify which class the stimulus image comes from by analyzing the EEG signals. The region-level information is extracted to preserve and emphasize the hemispheric lateralization of neural functions or cognitive processes in human brains. Further, stacked bi-directional LSTMs are used to capture the dynamic correlations hidden in EEG data. Extensive experiments on a standard EEG-based image classification dataset validate that our framework outperforms the existing state of the art under various contexts and experimental setups. | Related Work: In general, EEG data analysis and processing involves two main steps: feature extraction, followed by pattern recognition or machine learning-based methods that complete the signal analysis [40, 41]. Before the popularity of deep learning, the primary approaches for feature extraction were time-frequency features derived by signal analysis methods, such as power spectral density [42], bandpower [43], independent components [44], and differential entropy [45]. Widely researched pattern recognition and machine learning methods include artificial neural networks [46, 47], naive Bayes [48], and support vector machines (SVM) [49, 50].
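The classical two-step pipeline summarized above, hand-crafted spectral features followed by a conventional classifier such as an SVM, can be sketched roughly as below. The sampling rate, frequency bands, channel count and random trials are illustrative assumptions rather than the settings of any cited study.

```python
# Minimal sketch of the classical two-step EEG pipeline: band-power features
# (Welch PSD) followed by an SVM classifier. All data and settings are toy values.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

FS = 250                                    # sampling rate (Hz), illustrative
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(trial):
    # trial: (n_channels, n_samples) -> concatenated mean band power per channel
    freqs, psd = welch(trial, fs=FS, nperseg=FS)
    feats = []
    for lo, hi in BANDS.values():
        idx = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, idx].mean(axis=1))
    return np.concatenate(feats)

rng = np.random.default_rng(0)
trials = rng.normal(size=(200, 32, 500))    # 200 trials, 32 channels, 2 s each
labels = rng.integers(0, 2, size=200)
X = np.vstack([band_powers(t) for t in trials])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("5-fold accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```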
With the extensive application and rapid development of deep learning, an ever-increasing number of brain science and neuroscience research teams are exploiting its strength in designing algorithms for intelligent understanding and analysis of brain activities via EEGs, leading to end-to-end models that integrate feature extraction with classification or clustering. Jiao et al. [23] proposed a multi-channel deep convolutional network to classify mental loads. Wang et al. [24] used an LSTM network to classify motor imagery tasks, and used a one-dimensional aggregate approximation method to extract the network’s effective features. Cole et al. [25] used a CNN-based predictive modelling approach for predicting brain age, and their analysis showed that the brain-predicted age is highly reliable. Gao et al. [26] proposed a spatiotemporal deep convolutional model, which significantly improved the accuracy of detecting driver fatigue by emphasizing the importance of spatial information and the time dependence of EEGs. Yuan et al. [27] proposed an end-to-end multi-view deep learning framework to automatically detect epileptic seizures in EEG signals. Li et al. [28] incorporated transfer learning into the construction of convolutional neural networks and successfully applied the model to the clinical diagnosis of mild depression. Dong et al. [20] used a rectified linear unit (ReLU) activation function and a mixed neural network with LSTM units on time-frequency-domain features to classify sleep stages. Lawhern et al. [29] proposed a compact convolutional network as an EEG-specific model (EEGNet) and applied it to four different brain-machine interface classification tasks. Zhang et al. [30] proposed a cascaded and parallel convolutional recurrent neural network model to accurately identify intended human motion commands by effectively learning the spatio-temporal representation of the raw EEG signal. Tan et al. [31] converted EEG data into EEG-based video and optical flow information, classified them with CNNs and RNNs, and established an effective BCI-based rehabilitation support system. Multimedia data, which contain a large amount of content information and rich visual characteristics, are considered very suitable stimulus material and are widely used in the acquisition and analysis of EEG signals [9, 18, 51]. Researchers have tried to identify and classify the content information of multimedia data viewed by users through the analysis of EEG signals [15, 52, 53]. Spampinato et al. [18] used an LSTM network to learn an EEG data representation based on image stimuli and constructed a mapping from natural image features to this EEG representation; they then used the new representation of EEG signals for the classification of natural images. Compared with traditional methods, these deep learning-based approaches have achieved outstanding classification results. Recent studies have shown that it is also possible to reconstruct multimedia content itself by mining EEG data. Kavasidis et al. [9] proposed a method for reconstructing visual stimulus content from EEGs. Using a variational autoencoder (VAE) and generative adversarial networks (GANs), they found that EEG data contain patterns related to visual content, and that these patterns can be used to generate images that are semantically consistent with the input visual stimuli.
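For illustration, a minimal sketch of an LSTM-style EEG sequence classifier of the kind referred to above is given below, written with stacked bi-directional layers similar in spirit to the present framework. The layer sizes, channel count, sequence length and number of classes are assumptions made for the sketch, not the architecture of the cited works or of this paper.

```python
# Minimal sketch (PyTorch): a stacked bi-directional LSTM that maps an EEG
# sequence (time steps x channels) to class scores. All sizes are illustrative.
import torch
import torch.nn as nn

class StackedBiLSTMClassifier(nn.Module):
    def __init__(self, n_channels=128, hidden=64, n_layers=2, n_classes=40):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            num_layers=n_layers, batch_first=True,
                            bidirectional=True, dropout=0.3)
        self.head = nn.Linear(2 * hidden, n_classes)   # 2x for the two directions

    def forward(self, x):                  # x: (batch, time, channels)
        out, _ = self.lstm(x)              # (batch, time, 2*hidden)
        return self.head(out[:, -1, :])    # last time step as a simple summary

model = StackedBiLSTMClassifier()
eeg = torch.randn(8, 440, 128)             # batch of 8 trials, 440 samples, 128 channels
logits = model(eeg)                        # (8, 40)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 40, (8,)))
loss.backward()
print(logits.shape, float(loss))
```

In a region-level variant, separate feature streams for the two hemispheres would be concatenated or fused before or inside such a recurrent stack; that fusion step is omitted here for brevity.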
While these methods have demonstrated the capability of deep learning frameworks for EEG-based image classification, they typically take the raw EEG data, or time-frequency features extracted by signal analysis algorithms, as input, and some characteristics of human brains, such as hemispheric lateralization, have not been seriously considered. Moreover, the classification accuracy achieved to date by Spampinato et al. [18] was 82.9%, leaving significant space for further research and improvement. | [
"21176975",
"30850003",
"20567055",
"18829969",
"11567610",
"20302949",
"28765056",
"30640634",
"30778842",
"26426534",
"9445333",
"14607146",
"26502455",
"19015119",
"9377276",
"19171515",
"10576487",
"7146906",
"23033323"
] | [
{
"pmid": "21176975",
"title": "Learning to move machines with the mind.",
"abstract": "Brain-computer interfaces (BCIs) extract signals from neural activity to control remote devices ranging from computer cursors to limb-like robots. They show great potential to help patients with severe motor deficits perform everyday tasks without the constant assistance of caregivers. Understanding the neural mechanisms by which subjects use BCI systems could lead to improved designs and provide unique insights into normal motor control and skill acquisition. However, reports vary considerably about how much training is required to use a BCI system, the degree to which performance improves with practice and the underlying neural mechanisms. This review examines these diverse findings, their potential relationship with motor learning during overt arm movements, and other outstanding questions concerning the volitional control of BCI systems."
},
{
"pmid": "30850003",
"title": "Sleep and Circadian Rhythm in Critical Illness.",
"abstract": "This article is one of ten reviews selected from the Annual Update in Intensive Care and Emergency Medicine 2019. Other selected articles can be found online at https://www.biomedcentral.com/collections/annualupdate2019 . Further information about the Annual Update in Intensive Care and Emergency Medicine is available from http://www.springer.com/series/8901 ."
},
{
"pmid": "20567055",
"title": "Convolutional neural networks for P300 detection with application to brain-computer interfaces.",
"abstract": "A Brain-Computer Interface (BCI) is a specific type of human-computer interface that enables the direct communication between human and computers by analyzing brain measurements. Oddball paradigms are used in BCI to generate event-related potentials (ERPs), like the P300 wave, on targets selected by the user. A P300 speller is based on this principle, where the detection of P300 waves allows the user to write characters. The P300 speller is composed of two classification problems. The first classification is to detect the presence of a P300 in the electroencephalogram (EEG). The second one corresponds to the combination of different P300 responses for determining the right character to spell. A new method for the detection of P300 waves is presented. This model is based on a convolutional neural network (CNN). The topology of the network is adapted to the detection of P300 waves in the time domain. Seven classifiers based on the CNN are proposed: four single classifiers with different features set and three multiclassifiers. These models are tested and compared on the Data set II of the third BCI competition. The best result is obtained with a multiclassifier solution with a recognition rate of 95.5 percent, without channel selection before the classification. The proposed approach provides also a new way for analyzing brain activities due to the receptive field of the CNN models."
},
{
"pmid": "18829969",
"title": "Perceived shape similarity among unfamiliar objects and the organization of the human object vision pathway.",
"abstract": "Humans rely heavily on shape similarity among objects for object categorization and identification. Studies using functional magnetic resonance imaging (fMRI) have shown that a large region in human occipitotemporal cortex processes the shape of meaningful as well as unfamiliar objects. Here, we investigate whether the functional organization of this region as measured with fMRI is related to perceived shape similarity. We found that unfamiliar object classes that are rated as having a similar shape were associated with a very similar response pattern distributed across object-selective cortex, whereas object classes that were rated as being very different in shape were associated with a more different response pattern. Human observers, as well as object-selective cortex, were very sensitive to differences in shape features of the objects such as straight versus curved versus \"spiky\" edges, more so than to differences in overall shape envelope. Response patterns in retinotopic areas V1, V2, and V4 were not found to be related to perceived shape. The functional organization in area V3 was partially related to perceived shape but without a stronger sensitivity for shape features relative to overall shape envelope. Thus, for unfamiliar objects, the organization of human object-selective cortex is strongly related to perceived shape, and this shape-based organization emerges gradually throughout the object vision pathway."
},
{
"pmid": "11567610",
"title": "The neural basis of perceptual learning.",
"abstract": "Perceptual learning is a lifelong process. We begin by encoding information about the basic structure of the natural world and continue to assimilate information about specific patterns with which we become familiar. The specificity of the learning suggests that all areas of the cerebral cortex are plastic and can represent various aspects of learned information. The neural substrate of perceptual learning relates to the nature of the neural code itself, including changes in cortical maps, in the temporal characteristics of neuronal responses, and in modulation of contextual influences. Top-down control of these representations suggests that learning involves an interaction between multiple cortical areas."
},
{
"pmid": "20302949",
"title": "Predicting variations of perceptual performance across individuals from neural activity using pattern classifiers.",
"abstract": "Within the past decade computational approaches adopted from the field of machine learning have provided neuroscientists with powerful new tools for analyzing neural data. For instance, previous studies have applied pattern classification algorithms to electroencephalography data to predict the category of presented visual stimuli, human observer decision choices and task difficulty. Here, we quantitatively compare the ability of pattern classifiers and three ERP metrics (peak amplitude, mean amplitude, and onset latency of the face-selective N170) to predict variations across individuals' behavioral performance in a difficult perceptual task identifying images of faces and cars embedded in noise. We investigate three different pattern classifiers (Classwise Principal Component Analysis, CPCA; Linear Discriminant Analysis, LDA; and Support Vector Machine, SVM), five training methods differing in the selection of training data sets and three analyses procedures for the ERP measures. We show that all three pattern classifier algorithms surpass traditional ERP measurements in their ability to predict individual differences in performance. Although the differences across pattern classifiers were not large, the CPCA method with training data sets restricted to EEG activity for trials in which observers expressed high confidence about their decisions performed the highest at predicting perceptual performance of observers. We also show that the neural activity predicting the performance across individuals was distributed through time starting at 120ms, and unlike the face-selective ERP response, sustained for more than 400ms after stimulus presentation, indicating that both early and late components contain information correlated with observers' behavioral performance. Together, our results further demonstrate the potential of pattern classifiers compared to more traditional ERP techniques as an analysis tool for modeling spatiotemporal dynamics of the human brain and relating neural activity to behavior."
},
{
"pmid": "28765056",
"title": "Predicting brain age with deep learning from raw imaging data results in a reliable and heritable biomarker.",
"abstract": "Machine learning analysis of neuroimaging data can accurately predict chronological age in healthy people. Deviations from healthy brain ageing have been associated with cognitive impairment and disease. Here we sought to further establish the credentials of 'brain-predicted age' as a biomarker of individual differences in the brain ageing process, using a predictive modelling approach based on deep learning, and specifically convolutional neural networks (CNN), and applied to both pre-processed and raw T1-weighted MRI data. Firstly, we aimed to demonstrate the accuracy of CNN brain-predicted age using a large dataset of healthy adults (N = 2001). Next, we sought to establish the heritability of brain-predicted age using a sample of monozygotic and dizygotic female twins (N = 62). Thirdly, we examined the test-retest and multi-centre reliability of brain-predicted age using two samples (within-scanner N = 20; between-scanner N = 11). CNN brain-predicted ages were generated and compared to a Gaussian Process Regression (GPR) approach, on all datasets. Input data were grey matter (GM) or white matter (WM) volumetric maps generated by Statistical Parametric Mapping (SPM) or raw data. CNN accurately predicted chronological age using GM (correlation between brain-predicted age and chronological age r = 0.96, mean absolute error [MAE] = 4.16 years) and raw (r = 0.94, MAE = 4.65 years) data. This was comparable to GPR brain-predicted age using GM data (r = 0.95, MAE = 4.66 years). Brain-predicted age was a heritable phenotype for all models and input data (h2 ≥ 0.5). Brain-predicted age showed high test-retest reliability (intraclass correlation coefficient [ICC] = 0.90-0.99). Multi-centre reliability was more variable within high ICCs for GM (0.83-0.96) and poor-moderate levels for WM and raw data (0.51-0.77). Brain-predicted age represents an accurate, highly reliable and genetically-influenced phenotype, that has potential to be used as a biomarker of brain ageing. Moreover, age predictions can be accurately generated on raw T1-MRI data, substantially reducing computation time for novel data, bringing the process closer to giving real-time information on brain health in clinical settings."
},
{
"pmid": "30640634",
"title": "EEG-Based Spatio-Temporal Convolutional Neural Network for Driver Fatigue Evaluation.",
"abstract": "Driver fatigue evaluation is of great importance for traffic safety and many intricate factors would exacerbate the difficulty. In this paper, based on the spatial-temporal structure of multichannel electroencephalogram (EEG) signals, we develop a novel EEG-based spatial-temporal convolutional neural network (ESTCNN) to detect driver fatigue. First, we introduce the core block to extract temporal dependencies from EEG signals. Then, we employ dense layers to fuse spatial features and realize classification. The developed network could automatically learn valid features from EEG signals, which outperforms the classical two-step machine learning algorithms. Importantly, we carry out fatigue driving experiments to collect EEG signals from eight subjects being alert and fatigue states. Using 2800 samples under within-subject splitting, we compare the effectiveness of ESTCNN with eight competitive methods. The results indicate that ESTCNN fulfills a better classification accuracy of 97.37% than these compared methods. Furthermore, the spatial-temporal structure of this framework advantages in computational efficiency and reference time, which allows further implementations in the brain-computer interface online systems."
},
{
"pmid": "30778842",
"title": "EEG-based mild depression recognition using convolutional neural network.",
"abstract": "Electroencephalography (EEG)-based studies focus on depression recognition using data mining methods, while those on mild depression are yet in infancy, especially in effective monitoring and quantitative measure aspects. Aiming at mild depression recognition, this study proposed a computer-aided detection (CAD) system using convolutional neural network (ConvNet). However, the architecture of ConvNet derived by trial and error and the CAD system used in clinical practice should be built on the basis of the local database; we therefore applied transfer learning when constructing ConvNet architecture. We also focused on the role of different aspects of EEG, i.e., spectral, spatial, and temporal information, in the recognition of mild depression and found that the spectral information of EEG played a major role and the temporal information of EEG provided a statistically significant improvement to accuracy. The proposed system provided the accuracy of 85.62% for recognition of mild depression and normal controls with 24-fold cross-validation (the training and test sets are divided based on the subjects). Thus, the system can be clinically used for the objective, accurate, and rapid diagnosis of mild depression. Graphical abstract The EEG power of theta, alpha, and beta bands is calculated separately under trial-wise and frame-wise strategies and is organized into three input forms of deep neural networks: feature vector, images without electrode location (spatial information), and images with electrode location. The role of EEG's spectral and spatial information in mild depression recognition is investigated through ConvNet, and the role of EEG's temporal information is investigated using different architectures to aggregate temporal features from multiple frames. The ConvNet and models for aggregating temporal features are transferred from the state-of-the-art model in mental load classification."
},
{
"pmid": "26426534",
"title": "Hemispheric lateralization in reasoning.",
"abstract": "A growing body of evidence suggests that reasoning in humans relies on a number of related processes whose neural loci are largely lateralized to one hemisphere or the other. A recent review of this evidence concluded that the patterns of lateralization observed are organized according to two complementary tendencies. The left hemisphere attempts to reduce uncertainty by drawing inferences or creating explanations, even at the cost of ignoring conflicting evidence or generating implausible explanations. Conversely, the right hemisphere aims to reduce conflict by rejecting or refining explanations that are no longer tenable in the face of new evidence. In healthy adults, the hemispheres work together to achieve a balance between certainty and consistency, and a wealth of neuropsychological research supports the notion that upsetting this balance results in various failures in reasoning, including delusions. However, support for this model from the neuroimaging literature is mixed. Here, we examine the evidence for this framework from multiple research domains, including an activation likelihood estimation analysis of functional magnetic resonance imaging studies of reasoning. Our results suggest a need to either revise this model as it applies to healthy adults or to develop better tools for assessing lateralization in these individuals."
},
{
"pmid": "9445333",
"title": "Noninvasive determination of language lateralization by functional transcranial Doppler sonography: a comparison with the Wada test.",
"abstract": "BACKGROUND AND PURPOSE\nFunctional transcranial Doppler ultrasonography (fTCD) can assess event-related changes in cerebral blood flow velocities and, by comparison between sides, can provide a measure of hemispheric perfusional lateralization. It is easily applicable, insensitive to movement artifacts, and can be used in patients with less than perfect cooperation. In the present study we investigated the validity of fTCD in determining the hemispheric dominance for language by direct comparison of fTCD with intracarotid amobarbital anesthesia (Wada test).\n\n\nMETHODS\nfTCD and the Wada test were performed in 19 patients evaluated for epilepsy surgery. By the Wada test, 13 patients were classified as left-hemisphere dominant and 6 as right-hemisphere dominant for language. fTCD was based on the continuous bilateral measurements of blood flow velocities in the middle cerebral arteries and event-related averaging during a cued word generation task previously shown to activate lateralized language areas in normal adults.\n\n\nRESULTS\nIn 4 patients fTCD assessment was not possible because of lack of an acoustic temporal bone window. In the remaining 15 candidates, determination of language dominance was concordant with the Wada test in every case. Moreover, the correlation of the lateralization measures from both procedures was highly significant (r=.92, P<.0001).\n\n\nCONCLUSIONS\nThis strong correlation validates fTCD as a noninvasive and practical tool for the determination of language lateralization that can be applied for clinical and investigative purposes."
},
{
"pmid": "14607146",
"title": "Hemispheric lateralization of the EEG during wakefulness and REM sleep in young healthy adults.",
"abstract": "EEG recordings confirm hemispheric lateralization of brain activity during cognitive tasks. The aim of the present study was to investigate spontaneous EEG lateralization under two conditions, waking and REM sleep. Bilateral monopolar EEG was recorded in eight participants using a 12-electrode montage, before the night (5 min eyes closed) and during REM sleep. Spectral analysis (0.75-19.75 Hz) revealed left prefrontal lateralization on total spectrum amplitude power and right occipital lateralization in Delta activity during waking. In contrast, during REM sleep, right frontal lateralization in Theta and Beta activities and right lateralization in occipital Delta activity was observed. These results suggest that spontaneous EEG activities generated during waking and REM sleep are supported in part by a common thalamo-cortical neural network (right occipital Delta dominance) while additional, possibly neuro-cognitive factors modulate waking left prefrontal dominance and REM sleep right frontal dominance."
},
{
"pmid": "26502455",
"title": "A Global Gait Asymmetry Index.",
"abstract": "High levels of gait asymmetry are associated with many pathologies. Our long-term goal is to improve gait symmetry through real-time biofeedback of a symmetry index. Symmetry is often reported as a single metric or a collective signature of multiple discrete measures. While this is useful for assessment, incorporating multiple feedback metrics presents too much information for most subjects to use as visual feedback for gait retraining. The aim of this article was to develop a global gait asymmetry (GGA) score that could be used as a biofeedback metric for gait retraining and to test the effectiveness of the GGA for classifying artificially-induced asymmetry. Eighteen participants (11 males; age 26.9 y [SD = 7.7]; height 1.8 m [SD = 0.1]; body mass 72.7 kg [SD = 8.9]) walked on a treadmill in 3 symmetry conditions, induced by wearing custom-made sandals: a symmetric condition (identical sandals) and 2 asymmetric conditions (different sandals). The GGA score was calculated, based on several joint angles, and compared between conditions. Significant differences were found among all conditions (P < .001), meaning that the GGA score is sensitive to different levels of asymmetry, and may be useful for rehabilitation and assessment."
},
{
"pmid": "19015119",
"title": "Rapid influence of emotional scenes on encoding of facial expressions: an ERP study.",
"abstract": "In daily life, we perceive a person's facial reaction as part of the natural environment surrounding it. Because most studies have investigated how facial expressions are recognized by using isolated faces, it is unclear what role the context plays. Although it has been observed that the N170 for facial expressions is modulated by the emotional context, it was not clear whether individuals use context information on this stage of processing to discriminate between facial expressions. The aim of the present study was to investigate how the early stages of face processing are affected by emotional scenes when explicit categorizations of fearful and happy facial expressions are made. Emotion effects were found for the N170, with larger amplitudes for faces in fearful scenes as compared to faces in happy and neutral scenes. Critically, N170 amplitudes were significantly increased for fearful faces in fearful scenes as compared to fearful faces in happy scenes and expressed in left-occipito-temporal scalp topography differences. Our results show that the information provided by the facial expression is combined with the scene context during the early stages of face processing."
},
{
"pmid": "9377276",
"title": "Long short-term memory.",
"abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."
},
{
"pmid": "19171515",
"title": "Recognition of abstract objects via neural oscillators: interaction among topological organization, associative memory and gamma band synchronization.",
"abstract": "Synchronization of neural activity in the gamma band is assumed to play a significant role not only in perceptual processing, but also in higher cognitive functions. Here, we propose a neural network of Wilson-Cowan oscillators to simulate recognition of abstract objects, each represented as a collection of four features. Features are ordered in topological maps of oscillators connected via excitatory lateral synapses, to implement a similarity principle. Experience on previous objects is stored in long-range synapses connecting the different topological maps, and trained via timing dependent Hebbian learning (previous knowledge principle). Finally, a downstream decision network detects the presence of a reliable object representation, when all features are oscillating in synchrony. Simulations performed giving various simultaneous objects to the network (from 1 to 4), with some missing and/or modified properties suggest that the network can reconstruct objects, and segment them from the other simultaneously present objects, even in case of deteriorated information, noise, and moderate correlation among the inputs (one common feature). The balance between sensitivity and specificity depends on the strength of the Hebbian learning. Achieving a correct reconstruction in all cases, however, requires ad hoc selection of the oscillation frequency. The model represents an attempt to investigate the interactions among topological maps, autoassociative memory, and gamma-band synchronization, for recognition of abstract objects."
},
{
"pmid": "10576487",
"title": "Processing of affective pictures modulates right-hemispheric gamma band EEG activity.",
"abstract": "The present study was designed to test differential hemispheric activation induced by emotional stimuli in the gamma band range (30-90 Hz). Subjects viewed slides with differing emotional content (from the International Affective Picture System). A significant valence by hemisphere interaction emerged in the gamma band from 30-50 Hz. Other bands, including alpha and beta, did not show such an interaction. Previous hypotheses suggested that the left hemisphere is more involved in positive affective processing as compared to the right hemisphere, while the latter dominates during negative emotions. Contrary to this expectation, the 30-50 Hz band showed relatively more power for negative valence over the left temporal region as compared to the right and a laterality shift towards the right hemisphere for positive valence. In addition, emotional processing enhanced gamma band power at right frontal electrodes regardless of the particular valence as compared to processing neutral pictures. The extended distribution of specific activity in the gamma band may be the signature of cell assemblies with members in limbic, temporal and frontal neocortical structures that differ in spatial distribution depending on the particular type of emotional processing."
},
{
"pmid": "7146906",
"title": "Asymmetrical brain activity discriminates between positive and negative affective stimuli in human infants.",
"abstract": "Ten-month-old infants viewed videotape segments of an actress spontaneously generating a happy or sad facial expression. Brain activity was recorded from the left and right frontal and parietal scalp regions. In two studies, infants showed greater activation of the left frontal than of the right frontal area in response to the happy segments. Parietal asymmetry failed to discriminate between the conditions. Differential lateralization of the hemispheres for affective processes seems to be established by 10 months of age."
},
{
"pmid": "23033323",
"title": "Toward an EEG-based recognition of music liking using time-frequency analysis.",
"abstract": "Affective phenomena, as reflected through brain activity, could constitute an effective index for the detection of music preference. In this vein, this paper focuses on the discrimination between subjects' electroencephalogram (EEG) responses to self-assessed liked or disliked music, acquired during an experimental procedure, by evaluating different feature extraction approaches and classifiers to this end. Feature extraction is based on time-frequency (TF) analysis by implementing three TF techniques, i.e., spectrogram, Zhao-Atlas-Marks distribution and Hilbert-Huang spectrum (HHS). Feature estimation also accounts for physiological parameters that relate to EEG frequency bands, reference states, time intervals, and hemispheric asymmetries. Classification is performed by employing four classifiers, i.e., support vector machines, k-nearest neighbors (k -NN), quadratic and Mahalanobis distance-based discriminant analyses. According to the experimental results across nine subjects, best classification accuracy {86.52 (±0.76)%} was achieved using k-NN and HHS-based feature vectors ( FVs) representing a bilateral average activity, referred to a resting period, in β (13-30 Hz) and γ (30-49 Hz) bands. Activity in these bands may point to a connection between music preference and emotional arousal phenomena. Furthermore, HHS-based FVs were found to be robust against noise corruption. The outcomes of this study provide early evidence and pave the way for the development of a generalized brain computer interface for music preference recognition."
}
] |
BMC Medical Informatics and Decision Making | 31856806 | PMC6921390 | 10.1186/s12911-019-0961-2 | Incorporating medical code descriptions for diagnosis prediction in healthcare | BackgroundDiagnosis aims to predict the future health status of patients according to their historical electronic health records (EHR), which is an important yet challenging task in healthcare informatics. Existing diagnosis prediction approaches mainly employ recurrent neural networks (RNN) with attention mechanisms to make predictions. However, these approaches ignore the importance of code descriptions, i.e., the medical definitions of diagnosis codes. We believe that taking diagnosis code descriptions into account can help the state-of-the-art models not only to learn meaning code representations, but also to improve the predictive performance, especially when the EHR data are insufficient.MethodsWe propose a simple, but general diagnosis prediction framework, which includes two basic components: diagnosis code embedding and predictive model. To learn the interpretable code embeddings, we apply convolutional neural networks (CNN) to model medical descriptions of diagnosis codes extracted from online medical websites. The learned medical embedding matrix is used to embed the input visits into vector representations, which are fed into the predictive models. Any existing diagnosis prediction approach (referred to as the base model) can be cast into the proposed framework as the predictive model (called the enhanced model).ResultsWe conduct experiments on two real medical datasets: the MIMIC-III dataset and the Heart Failure claim dataset. Experimental results show that the enhanced diagnosis prediction approaches significantly improve the prediction performance. Moreover, we validate the effectiveness of the proposed framework with insufficient EHR data. Finally, we visualize the learned medical code embeddings to show the interpretability of the proposed framework.ConclusionsGiven the historical visit records of a patient, the proposed framework is able to predict the next visit information by incorporating medical code descriptions. | Related WorkIn this section, we briefly survey the work related to diagnosis prediction task. We first provide a general introduction about mining healthcare related data with deep learning techniques, and then survey the work of diagnosis prediction.Deep Learning for EHRSeveral machine learning approaches are proposed to mine medical knowledge from EHR data [1, 6–10]. Among them, deep learning-based models have achieved better performance compared with traditional machine learning approaches [11–13]. To detect the characteristic patterns of physiology in clinical time series data, stacked denoising autoencoders (SDA) are used in [14]. Convolutional neural networks (CNN) are applied to predict unplanned readmission [15], sleep stages [16], diseases [17, 18] and risk [19–21] with EHR data. To capture the temporal characteristics of healthcare related data, recurrent neural networks (RNN) are widely used for modeling disease progression [22, 23], mining time series healthcare data with missing values [24, 25], and diagnosis classification [26] and prediction [2–4, 27].Diagnosis PredictionDiagnosis prediction is one of the core research tasks in EHR data mining, which aims to predict the future visit information according to the historical visit records. 
Med2Vec [28] is the first unsupervised method to learn interpretable embeddings of medical codes, but it ignores long-term dependencies of medical codes across visits. RETAIN [4] is the first interpretable model to mathematically calculate the contribution of each medical code to the current prediction, employing a reverse-time attention mechanism in an RNN for binary prediction tasks. Dipole [2] is the first work to adopt bidirectional recurrent neural networks (BRNN) and different attention mechanisms to improve prediction accuracy. GRAM [3] is the first work to apply a graph-based attention mechanism to a given medical ontology to learn robust medical code embeddings even when training data are scarce, with an RNN used to model patient visits. KAME [29], built upon GRAM, uses high-level knowledge to improve predictive performance. However, different from all the aforementioned diagnosis prediction models, the proposed diagnosis prediction framework incorporates the descriptions of diagnosis codes to learn embeddings, which greatly improves prediction accuracy and provides interpretable prediction results compared with the state-of-the-art approaches.
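As a concrete illustration of the two-component framework described above (a CNN over diagnosis-code descriptions that produces a code embedding matrix, and a recurrent predictive model that consumes embedded visits), the following is a minimal, hedged Python/PyTorch sketch. The class names, layer sizes, pooling choices, and the decision to keep the embedding matrix fixed as a buffer are illustrative assumptions, not the authors' implementation; any of the base models cited above could replace the simple GRU predictor.

```python
import torch
import torch.nn as nn

class DescriptionCNN(nn.Module):
    """Embeds each diagnosis code from the word tokens of its textual
    description using a 1-D convolution followed by max-pooling over time."""
    def __init__(self, vocab_size, word_dim=100, code_dim=128):
        super().__init__()
        self.words = nn.Embedding(vocab_size, word_dim, padding_idx=0)
        self.conv = nn.Conv1d(word_dim, code_dim, kernel_size=3, padding=1)

    def forward(self, desc_tokens):                    # (n_codes, max_desc_len)
        w = self.words(desc_tokens).transpose(1, 2)    # (n_codes, word_dim, len)
        h = torch.relu(self.conv(w))                   # (n_codes, code_dim, len)
        return h.max(dim=2).values                     # one vector per code

class NextVisitPredictor(nn.Module):
    """Represents each visit as the sum of its code embeddings and feeds the
    visit sequence to a GRU that predicts the codes of the following visit."""
    def __init__(self, code_matrix, hidden=128):
        super().__init__()
        self.register_buffer("code_matrix", code_matrix)   # (n_codes, code_dim)
        self.rnn = nn.GRU(code_matrix.size(1), hidden, batch_first=True)
        self.out = nn.Linear(hidden, code_matrix.size(0))

    def forward(self, visits):                # (batch, n_visits, n_codes) multi-hot
        x = visits @ self.code_matrix         # visit-level representations
        h, _ = self.rnn(x)
        return torch.sigmoid(self.out(h))     # per-visit probabilities of next codes
```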
"29994534",
"9377276",
"27219127"
] | [
{
"pmid": "29994534",
"title": "Deep Patient Similarity Learning for Personalized Healthcare.",
"abstract": "Predicting patients' risk of developing certain diseases is an important research topic in healthcare. Accurately identifying and ranking the similarity among patients based on their historical records is a key step in personalized healthcare. The electric health records (EHRs), which are irregularly sampled and have varied patient visit lengths, cannot be directly used to measure patient similarity due to the lack of an appropriate representation. Moreover, there needs an effective approach to measure patient similarity on EHRs. In this paper, we propose two novel deep similarity learning frameworks which simultaneously learn patient representations and measure pairwise similarity. We use a convolutional neural network (CNN) to capture local important information in EHRs and then feed the learned representation into triplet loss or softmax cross entropy loss. After training, we can obtain pairwise distances and similarity scores. Utilizing the similarity information, we then perform disease predictions and patient clustering. Experimental results show that CNN can better represent the longitudinal EHR sequences, and our proposed frameworks outperform state-of-the-art distance metric learning methods."
},
{
"pmid": "9377276",
"title": "Long short-term memory.",
"abstract": "Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We briefly review Hochreiter's (1991) analysis of this problem, then address it by introducing a novel, efficient, gradient-based method called long short-term memory (LSTM). Truncating the gradient where this does not do harm, LSTM can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units. Multiplicative gate units learn to open and close access to the constant error flow. LSTM is local in space and time; its computational complexity per time step and weight is O(1). Our experiments with artificial data involve local, distributed, real-valued, and noisy pattern representations. In comparisons with real-time recurrent learning, back propagation through time, recurrent cascade correlation, Elman nets, and neural sequence chunking, LSTM leads to many more successful runs, and learns much faster. LSTM also solves complex, artificial long-time-lag tasks that have never been solved by previous recurrent network algorithms."
},
{
"pmid": "27219127",
"title": "MIMIC-III, a freely accessible critical care database.",
"abstract": "MIMIC-III ('Medical Information Mart for Intensive Care') is a large, single-center database comprising information relating to patients admitted to critical care units at a large tertiary care hospital. Data includes vital signs, medications, laboratory measurements, observations and notes charted by care providers, fluid balance, procedure codes, diagnostic codes, imaging reports, hospital length of stay, survival data, and more. The database supports applications including academic and industrial research, quality improvement initiatives, and higher education coursework."
}
] |
BMC Medical Informatics and Decision Making | 31856819 | PMC6921442 | 10.1186/s12911-019-0965-y | Evaluating global and local sequence alignment methods for comparing patient medical records | BackgroundSequence alignment is a way of arranging sequences (e.g., DNA, RNA, protein, natural language, financial data, or medical events) to identify the relatedness between two or more sequences and regions of similarity. For Electronic Health Records (EHR) data, sequence alignment helps to identify patients of similar disease trajectory for more relevant and precise prognosis, diagnosis and treatment of patients.MethodsWe tested two cutting-edge global sequence alignment methods, namely dynamic time warping (DTW) and Needleman-Wunsch algorithm (NWA), together with their local modifications, DTW for Local alignment (DTWL) and Smith-Waterman algorithm (SWA), for aligning patient medical records. We also used 4 sets of synthetic patient medical records generated from a large real-world EHR database as gold standard data, to objectively evaluate these sequence alignment algorithms.ResultsFor global sequence alignments, 47 out of 80 DTW alignments and 11 out of 80 NWA alignments had superior similarity scores than reference alignments while the rest 33 DTW alignments and 69 NWA alignments had the same similarity scores as reference alignments. Forty-six out of 80 DTW alignments had better similarity scores than NWA alignments with the rest 34 cases having the equal similarity scores from both algorithms. For local sequence alignments, 70 out of 80 DTWL alignments and 68 out of 80 SWA alignments had larger coverage and higher similarity scores than reference alignments while the rest DTWL alignments and SWA alignments received the same coverage and similarity scores as reference alignments. Six out of 80 DTWL alignments showed larger coverage and higher similarity scores than SWA alignments. Thirty DTWL alignments had the equal coverage but better similarity scores than SWA. DTWL and SWA received the equal coverage and similarity scores for the rest 44 cases.ConclusionsDTW, NWA, DTWL and SWA outperformed the reference alignments. DTW (or DTWL) seems to align better than NWA (or SWA) by inserting new daily events and identifying more similarities between patient medical records. The evaluation results could provide valuable information on the strengths and weakness of these sequence alignment methods for future development of sequence alignment methods and patient similarity-based studies. | Related workDynamic Time Warping (DTW) is one of the leading matching algorithms for globally aligning two temporal sequences of different speeds and measuring similarity [8, 18]. Specifically DTW determines the optimal alignment between two given temporal sequences based on the following restrictions and rules:
(1) every index in one sequence must match one or more indices in the other sequence (the 1-to-n or n-to-1 index matching denotes the warping in the time dimension); (2) the first indices in the two sequences must match; (3) the last indices in the two sequences must match; and (4) the mapping of the indices in the two sequences must be monotonically increasing. Given two temporal event sequences of two patients X ([X1, X2, …, Xi, …, Xn]) and Y ([Y1, Y2, …, Yj, …, Ym]), DTW calculates an accumulated score matrix A of size (n + 1) × (m + 1) by updating the matrix element Ai,j according to the following equation,
$$A_{i,j}=\begin{cases}0 & i=0,\ j=0\\ -\infty & i=0,\ j>0\\ -\infty & i>0,\ j=0\\ \max\big(s(X_i,Y_j)+A_{i-1,j-1},\ s(X_i,Y_j)+A_{i-1,j},\ s(X_i,Y_j)+A_{i,j-1}\big) & i>0,\ j>0\end{cases}\tag{1}$$
where s(Xi, Yj) denotes the distance between the two elements Xi and Yj in the sequences X and Y. In our experiment, we define s(Xi, Yj) according to the scoring system shown in Fig. 1(B). DTW then tracks back from the matrix element A(n + 1), (m + 1) to find the optimal alignment path by maximizing the accumulated score in the accumulated score matrix.
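The following is a minimal Python sketch of the DTW recurrence in Eq. (1) and its trace-back. The pairwise score function s and the toy +1/-1 scoring in the usage line are placeholder assumptions, not the paper's Fig. 1(B) scoring system.

```python
import numpy as np

def dtw_align(x, y, s):
    """Global DTW alignment of two event sequences x and y under the
    recurrence in Eq. (1); s(a, b) is a user-supplied pairwise score."""
    n, m = len(x), len(y)
    A = np.full((n + 1, m + 1), -np.inf)
    A[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Diagonal, vertical and horizontal moves all add the score of
            # the current event pair, allowing 1-to-n / n-to-1 warping.
            A[i, j] = s(x[i - 1], y[j - 1]) + max(
                A[i - 1, j - 1], A[i - 1, j], A[i, j - 1])
    # Trace back from A[n, m] to recover the optimal warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmax([A[i - 1, j - 1], A[i - 1, j], A[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return A[n, m], path[::-1]

# Toy usage with a +1/-1 scoring function (an assumption, not Fig. 1(B)):
score, path = dtw_align(list("ABBC"), list("ABC"),
                        lambda a, b: 1.0 if a == b else -1.0)
```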
Needleman-Wunsch Algorithm (NWA) was first developed by Saul B. Needleman and Christian D. Wunsch in 1970 [10]. It was one of the first applications of dynamic programming to align and compare protein and nucleotide sequences. As a global alignment method, NWA introduces a gap, rather than warping and filling in an adjacent element, when aligning sequences. Therefore, every index in one sequence matches either another index or a gap in the other sequence, and the monotonic increase of the mapping indices is maintained. Mathematically, given two temporal sequences of medical events X ([X1, X2, …, Xi, …, Xn]) and Y ([Y1, Y2, …, Yj, …, Ym]), NWA calculates an accumulated score matrix A of size (n + 1) × (m + 1) by updating the matrix element Ai,j according to the following equation,
$$A_{i,j}=\begin{cases}0 & i=0,\ j=0\\ j\cdot gp & i=0,\ j>0\\ i\cdot gp & i>0,\ j=0\\ \max\big(A_{i-1,j-1}+s(X_i,Y_j),\ A_{i-1,j}+gp,\ A_{i,j-1}+gp\big) & i>0,\ j>0\end{cases}\tag{2}$$
where gp stands for the gap penalty and s(Xi, Yj) denotes the similarity between the two elements Xi and Yj in the sequences X and Y, calculated using the scoring system shown in Fig. 1(B). NWA also identifies an optimal alignment path, relative to a given scoring system including the gap penalty, by tracking back from the matrix element A(n + 1), (m + 1) and maximizing the accumulated scores along the path.
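Below is a minimal Python sketch of the NWA recurrence in Eq. (2) and its gap-aware trace-back. The similarity function s and the default gap penalty are illustrative assumptions rather than the values used in the paper.

```python
import numpy as np

def needleman_wunsch(x, y, s, gp=-1.0):
    """Global alignment with gaps following the NWA recurrence in Eq. (2);
    s(a, b) is the pairwise similarity and gp the (negative) gap penalty."""
    n, m = len(x), len(y)
    A = np.zeros((n + 1, m + 1))
    A[0, :] = np.arange(m + 1) * gp   # leading gaps in x
    A[:, 0] = np.arange(n + 1) * gp   # leading gaps in y
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            A[i, j] = max(A[i - 1, j - 1] + s(x[i - 1], y[j - 1]),  # (mis)match
                          A[i - 1, j] + gp,                          # gap in y
                          A[i, j - 1] + gp)                          # gap in x
    # Trace back from A[n, m]; '-' marks a gap in the aligned sequences.
    ax, ay, i, j = [], [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and A[i, j] == A[i - 1, j - 1] + s(x[i - 1], y[j - 1]):
            ax.append(x[i - 1]); ay.append(y[j - 1]); i, j = i - 1, j - 1
        elif i > 0 and A[i, j] == A[i - 1, j] + gp:
            ax.append(x[i - 1]); ay.append('-'); i -= 1
        else:
            ax.append('-'); ay.append(y[j - 1]); j -= 1
    return A[n, m], ax[::-1], ay[::-1]
```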
Smith-Waterman Algorithm (SWA), developed by Temple F. Smith and Michael S. Waterman in 1981 [12], is a variation of NWA for local sequence alignment. SWA has been commonly used for aligning biological sequences, such as DNA, RNA or protein sequences [13, 14]. Given two temporal sequences of medical events X ([X1, X2, …, Xi, …, Xn]) and Y ([Y1, Y2, …, Yj, …, Ym]), SWA calculates an accumulated score matrix A of size (n + 1) × (m + 1) by updating the matrix element Ai,j according to the following equation,
$$A_{i,j}=\begin{cases}0 & i=0\ \text{or}\ j=0\\ \max\big(A_{i-1,j-1}+s(X_i,Y_j),\ A_{i-1,j}+gp,\ A_{i,j-1}+gp,\ 0\big) & i>0,\ j>0\end{cases}\tag{3}$$
where gp stands for the gap penalty and s(Xi, Yj) denotes the similarity between the two elements Xi and Yj in the sequences X and Y, calculated using the scoring system shown in Fig. 1(B). The main difference from NWA is that any matrix element with a negative accumulated score is set to zero, which masks certain mismatched alignments and renders locally matched alignments visible. Subsequently, starting at the element with the highest accumulated score, the algorithm identifies the local alignment path with the highest similarity by tracking back and choosing the path with the maximal accumulated score until a matrix element with an accumulated score of zero is encountered. The algorithm is also guaranteed to find the optimal local alignment with respect to the predefined scoring system.
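A minimal Python sketch of the SWA recurrence in Eq. (3) and its local trace-back is given below; as before, the similarity function s and the default gap penalty are placeholder assumptions rather than the paper's Fig. 1(B) scoring system.

```python
import numpy as np

def smith_waterman(x, y, s, gp=-1.0):
    """Local alignment following the SWA recurrence in Eq. (3): negative
    accumulated scores are clipped to zero, and the trace-back starts at
    the highest-scoring cell and stops at the first zero."""
    n, m = len(x), len(y)
    A = np.zeros((n + 1, m + 1))
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            A[i, j] = max(A[i - 1, j - 1] + s(x[i - 1], y[j - 1]),
                          A[i - 1, j] + gp,
                          A[i, j - 1] + gp,
                          0.0)
    i, j = np.unravel_index(np.argmax(A), A.shape)   # highest-scoring cell
    best, ax, ay = A[i, j], [], []
    while i > 0 and j > 0 and A[i, j] > 0:
        if A[i, j] == A[i - 1, j - 1] + s(x[i - 1], y[j - 1]):
            ax.append(x[i - 1]); ay.append(y[j - 1]); i, j = i - 1, j - 1
        elif A[i, j] == A[i - 1, j] + gp:
            ax.append(x[i - 1]); ay.append('-'); i -= 1
        else:
            ax.append('-'); ay.append(y[j - 1]); j -= 1
    return best, ax[::-1], ay[::-1]
```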
"25762458",
"25978419",
"28258046",
"29864490",
"5420325",
"7265238",
"29800571",
"25402007",
"17703042",
"29993581",
"29346555",
"23159830",
"21430193",
"28686612",
"26252133",
"29739741",
"31038462"
] | [
{
"pmid": "25762458",
"title": "An electronic medical record system with treatment recommendations based on patient similarity.",
"abstract": "As the core of health information technology (HIT), electronic medical record (EMR) systems have been changing to meet health care demands. To construct a new-generation EMR system framework with the capability of self-learning and real-time feedback, thus adding intelligence to the EMR system itself, this paper proposed a novel EMR system framework by constructing a direct pathway between the EMR workflow and EMR data. A prototype of this framework was implemented based on patient similarity learning. Patient diagnoses, demographic data, vital signs and structured lab test results were considered for similarity calculations. Real hospitalization data from 12,818 patients were substituted, and Precision @ Position measurements were used to validate self-learning performance. Our EMR system changed the way in which orders are placed by establishing recommendation order menu and shortcut applications. Two learning modes (EASY MODE and COMPLEX MODE) were provided, and the precision values @ position 5 of both modes were 0.7458 and 0.8792, respectively. The precision performance of COMPLEX MODE was better than that of EASY MODE (tested using a paired Wilcoxon-Mann-Whitney test, p < 0.001). Applying the proposed framework, the EMR data value was directly demonstrated in the clinical workflow, and intelligence was added to the EMR system, which could improve system usability, reliability and the physician's work efficiency. This self-learning mechanism is based on dynamic learning models and is not limited to a specific disease or clinical scenario, thus decreasing maintenance costs in real world applications and increasing its adaptability."
},
{
"pmid": "25978419",
"title": "Personalized mortality prediction driven by electronic medical data and a patient similarity metric.",
"abstract": "BACKGROUND\nClinical outcome prediction normally employs static, one-size-fits-all models that perform well for the average patient but are sub-optimal for individual patients with unique characteristics. In the era of digital healthcare, it is feasible to dynamically personalize decision support by identifying and analyzing similar past patients, in a way that is analogous to personalized product recommendation in e-commerce. Our objectives were: 1) to prove that analyzing only similar patients leads to better outcome prediction performance than analyzing all available patients, and 2) to characterize the trade-off between training data size and the degree of similarity between the training data and the index patient for whom prediction is to be made.\n\n\nMETHODS AND FINDINGS\nWe deployed a cosine-similarity-based patient similarity metric (PSM) to an intensive care unit (ICU) database to identify patients that are most similar to each patient and subsequently to custom-build 30-day mortality prediction models. Rich clinical and administrative data from the first day in the ICU from 17,152 adult ICU admissions were analyzed. The results confirmed that using data from only a small subset of most similar patients for training improves predictive performance in comparison with using data from all available patients. The results also showed that when too few similar patients are used for training, predictive performance degrades due to the effects of small sample sizes. Our PSM-based approach outperformed well-known ICU severity of illness scores. Although the improved prediction performance is achieved at the cost of increased computational burden, Big Data technologies can help realize personalized data-driven decision support at the point of care.\n\n\nCONCLUSIONS\nThe present study provides crucial empirical evidence for the promising potential of personalized data-driven decision support systems. With the increasing adoption of electronic medical record (EMR) systems, our novel medical data analytics contributes to meaningful use of EMR data."
},
{
"pmid": "28258046",
"title": "Patient Similarity in Prediction Models Based on Health Data: A Scoping Review.",
"abstract": "BACKGROUND\nPhysicians and health policy makers are required to make predictions during their decision making in various medical problems. Many advances have been made in predictive modeling toward outcome prediction, but these innovations target an average patient and are insufficiently adjustable for individual patients. One developing idea in this field is individualized predictive analytics based on patient similarity. The goal of this approach is to identify patients who are similar to an index patient and derive insights from the records of similar patients to provide personalized predictions..\n\n\nOBJECTIVE\nThe aim is to summarize and review published studies describing computer-based approaches for predicting patients' future health status based on health data and patient similarity, identify gaps, and provide a starting point for related future research.\n\n\nMETHODS\nThe method involved (1) conducting the review by performing automated searches in Scopus, PubMed, and ISI Web of Science, selecting relevant studies by first screening titles and abstracts then analyzing full-texts, and (2) documenting by extracting publication details and information on context, predictors, missing data, modeling algorithm, outcome, and evaluation methods into a matrix table, synthesizing data, and reporting results.\n\n\nRESULTS\nAfter duplicate removal, 1339 articles were screened in abstracts and titles and 67 were selected for full-text review. In total, 22 articles met the inclusion criteria. Within included articles, hospitals were the main source of data (n=10). Cardiovascular disease (n=7) and diabetes (n=4) were the dominant patient diseases. Most studies (n=18) used neighborhood-based approaches in devising prediction models. Two studies showed that patient similarity-based modeling outperformed population-based predictive methods.\n\n\nCONCLUSIONS\nInterest in patient similarity-based predictive modeling for diagnosis and prognosis has been growing. In addition to raw/coded health data, wavelet transform and term frequency-inverse document frequency methods were employed to extract predictors. Selecting predictors with potential to highlight special cases and defining new patient similarity metrics were among the gaps identified in the existing literature that provide starting points for future work. Patient status prediction models based on patient similarity and health data offer exciting potential for personalizing and ultimately improving health care, leading to better patient outcomes."
},
{
"pmid": "29864490",
"title": "Patient similarity for precision medicine: A systematic review.",
"abstract": "Evidence-based medicine is the most prevalent paradigm adopted by physicians. Clinical practice guidelines typically define a set of recommendations together with eligibility criteria that restrict their applicability to a specific group of patients. The ever-growing size and availability of health-related data is currently challenging the broad definitions of guideline-defined patient groups. Precision medicine leverages on genetic, phenotypic, or psychosocial characteristics to provide precise identification of patient subsets for treatment targeting. Defining a patient similarity measure is thus an essential step to allow stratification of patients into clinically-meaningful subgroups. The present review investigates the use of patient similarity as a tool to enable precision medicine. 279 articles were analyzed along four dimensions: data types considered, clinical domains of application, data analysis methods, and translational stage of findings. Cancer-related research employing molecular profiling and standard data analysis techniques such as clustering constitute the majority of the retrieved studies. Chronic and psychiatric diseases follow as the second most represented clinical domains. Interestingly, almost one quarter of the studies analyzed presented a novel methodology, with the most advanced employing data integration strategies and being portable to different clinical domains. Integration of such techniques into decision support systems constitutes and interesting trend for future research."
},
{
"pmid": "29800571",
"title": "Pairwise alignment for very long nucleic acid sequences.",
"abstract": "Sequence alignment is one of the fundamental problems in computational biology and has numerous applications. The Smith-Waterman algorithm generates optimal local alignment for pairwise alignment task and has become a standard algorithm in its field. However, the current version of the Smith-Waterman algorithm demands a significant amount of memory and is not suitable for alignment of very long sequences. On the hand, the recent DNA sequencing technologies have produced vast amounts of biological sequences. Some nucleic acid sequences are very long and cannot employ the Smith-Waterman algorithm. To this end, this study proposes the PAAVLS algorithm that follows the dynamic programming technique employed by the Smith-Waterman algorithm and largely reduces the demand of memory. The proposed PAAVLS algorithm can be employed for alignment of very long sequences, i.e., sequences contain more than 100,000,000 nucleotides, on a personal computer. Additionally, the running time of the proposed PAAVLS algorithm is comparable with the running time of the standard Smith-Waterman algorithm."
},
{
"pmid": "25402007",
"title": "Fast and sensitive protein alignment using DIAMOND.",
"abstract": "The alignment of sequencing reads against a protein reference database is a major computational bottleneck in metagenomics and data-intensive evolutionary projects. Although recent tools offer improved performance over the gold standard BLASTX, they exhibit only a modest speedup or low sensitivity. We introduce DIAMOND, an open-source algorithm based on double indexing that is 20,000 times faster than BLASTX on short reads and has a similar degree of sensitivity."
},
{
"pmid": "29993581",
"title": "MfeCNN: Mixture Feature Embedding Convolutional Neural Network for Data Mapping.",
"abstract": "Data mapping plays an important role in data integration and exchanges among institutions and organizations with different data standards. However, traditional rule-based approaches and machine learning methods fail to achieve satisfactory results for the data mapping problem. In this paper, we propose a novel and sophisticated deep learning framework for data mapping called mixture feature embedding convolutional neural network (MfeCNN). The MfeCNN model converts the data mapping task to a multiple classification problem. In the model, we incorporated multimodal learning and multiview embedding into a CNN for mixture feature tensor generation and classification prediction. Multimodal features were extracted from various linguistic spaces with a medical natural language processing package. Then, powerful feature embeddings were learned by using the CNN. As many as 10 classes could be simultaneously classified by a softmax prediction layer based on multiview embedding. MfeCNN achieved the best results on unbalanced data (average F1 score, 82.4%) among the traditional state-of-the-art machine learning models and CNN without mixture feature embedding. Our model also outperformed a very deep CNN with 29 layers, which took free texts as inputs. The combination of mixture feature embedding and a deep neural network can achieve high accuracy for data mapping and multiple classification."
},
{
"pmid": "23159830",
"title": "Data resource profile: the Rochester Epidemiology Project (REP) medical records-linkage system.",
"abstract": "The Rochester Epidemiology Project (REP) medical records-linkage system was established in 1966 to capture health care information for the entire population of Olmsted County, MN, USA. The REP includes a dynamic cohort of 502 820 unique individuals who resided in Olmsted County at some point between 1966 and 2010, and received health care for any reason at a health care provider within the system. The data available electronically (electronic REP indexes) include demographic characteristics, medical diagnostic codes, surgical procedure codes and death information (including causes of death). In addition, for each resident, the system keeps a complete list of all paper records, electronic records and scanned documents that are available in full text for in-depth review and abstraction. The REP serves as the research infrastructure for studies of virtually all diseases that come to medical attention, and has supported over 2000 peer-reviewed publications since 1966. The system covers residents of all ages and both sexes, regardless of socio-economic status, ethnicity or insurance status. For further information regarding the use of the REP for a specific study, please visit our website at www.rochesterproject.org or contact us at [email protected]. Our website also provides access to an introductory video in English and Spanish."
},
{
"pmid": "21430193",
"title": "Use of a medical records linkage system to enumerate a dynamic population over time: the Rochester epidemiology project.",
"abstract": "The Rochester Epidemiology Project (REP) is a unique research infrastructure in which the medical records of virtually all persons residing in Olmsted County, Minnesota, for over 40 years have been linked and archived. In the present article, the authors describe how the REP links medical records from multiple health care institutions to specific individuals and how residency is confirmed over time. Additionally, the authors provide evidence for the validity of the REP Census enumeration. Between 1966 and 2008, 1,145,856 medical records were linked to 486,564 individuals in the REP. The REP Census was found to be valid when compared with a list of residents obtained from random digit dialing, a list of residents of nursing homes and senior citizen complexes, a commercial list of residents, and a manual review of records. In addition, the REP Census counts were comparable to those of 4 decennial US censuses (e.g., it included 104.1% of 1970 and 102.7% of 2000 census counts). The duration for which each person was captured in the system varied greatly by age and calendar year; however, the duration was typically substantial. Comprehensive medical records linkage systems like the REP can be used to maintain a continuously updated census and to provide an optimal sampling framework for epidemiologic studies."
},
{
"pmid": "28686612",
"title": "Evaluating phecodes, clinical classification software, and ICD-9-CM codes for phenome-wide association studies in the electronic health record.",
"abstract": "OBJECTIVE\nTo compare three groupings of Electronic Health Record (EHR) billing codes for their ability to represent clinically meaningful phenotypes and to replicate known genetic associations. The three tested coding systems were the International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes, the Agency for Healthcare Research and Quality Clinical Classification Software for ICD-9-CM (CCS), and manually curated \"phecodes\" designed to facilitate phenome-wide association studies (PheWAS) in EHRs.\n\n\nMETHODS AND MATERIALS\nWe selected 100 disease phenotypes and compared the ability of each coding system to accurately represent them without performing additional groupings. The 100 phenotypes included 25 randomly-chosen clinical phenotypes pursued in prior genome-wide association studies (GWAS) and another 75 common disease phenotypes mentioned across free-text problem lists from 189,289 individuals. We then evaluated the performance of each coding system to replicate known associations for 440 SNP-phenotype pairs.\n\n\nRESULTS\nOut of the 100 tested clinical phenotypes, phecodes exactly matched 83, compared to 53 for ICD-9-CM and 32 for CCS. ICD-9-CM codes were typically too detailed (requiring custom groupings) while CCS codes were often not granular enough. Among 440 tested known SNP-phenotype associations, use of phecodes replicated 153 SNP-phenotype pairs compared to 143 for ICD-9-CM and 139 for CCS. Phecodes also generally produced stronger odds ratios and lower p-values for known associations than ICD-9-CM and CCS. Finally, evaluation of several SNPs via PheWAS identified novel potential signals, some seen in only using the phecode approach. Among them, rs7318369 in PEPD was associated with gastrointestinal hemorrhage.\n\n\nCONCLUSION\nOur results suggest that the phecode groupings better align with clinical diseases mentioned in clinical practice or for genomic studies. ICD-9-CM, CCS, and phecode groupings all worked for PheWAS-type studies, though the phecode groupings produced superior results."
},
{
"pmid": "29739741",
"title": "Public Opinions Toward Diseases: Infodemiological Study on News Media Data.",
"abstract": "BACKGROUND\nSociety always has limited resources to expend on health care, or anything else. What are the unmet medical needs? How do we allocate limited resources to maximize the health and welfare of the people? These challenging questions might be re-examined systematically within an infodemiological frame on a much larger scale, leveraging the latest advancement in information technology and data science.\n\n\nOBJECTIVE\nWe expanded our previous work by investigating news media data to reveal the coverage of different diseases and medical conditions, together with their sentiments and topics in news articles over three decades. We were motivated to do so since news media plays a significant role in politics and affects the public policy making.\n\n\nMETHODS\nWe analyzed over 3.5 million archive news articles from Reuters media during the periods of 1996/1997, 2008 and 2016, using summary statistics, sentiment analysis, and topic modeling. Summary statistics illustrated the coverage of various diseases and medical conditions during the last 3 decades. Sentiment analysis and topic modeling helped us automatically detect the sentiments of news articles (ie, positive versus negative) and topics (ie, a series of keywords) associated with each disease over time.\n\n\nRESULTS\nThe percentages of news articles mentioning diseases and medical conditions were 0.44%, 0.57% and 0.81% in the three time periods, suggesting that news media or the public has gradually increased its interests in medicine since 1996. Certain diseases such as other malignant neoplasm (34%), other infectious diseases (20%), and influenza (11%) represented the most covered diseases. Two hundred and twenty-six diseases and medical conditions (97.8%) were found to have neutral or negative sentiments in the news articles. Using topic modeling, we identified meaningful topics on these diseases and medical conditions. For instance, the smoking theme appeared in the news articles on other malignant neoplasm only during 1996/1997. The topic phrases HIV and Zika virus were linked to other infectious diseases during 1996/1997 and 2016, respectively.\n\n\nCONCLUSIONS\nThe multi-dimensional analysis of news media data allows the discovery of focus, sentiments and topics of news media in terms of diseases and medical conditions. These infodemiological discoveries could shed light on unmet medical needs and research priorities for future and provide guidance for the decision making in public policy."
},
{
"pmid": "31038462",
"title": "Technological Innovations in Disease Management: Text Mining US Patent Data From 1995 to 2017.",
"abstract": "BACKGROUND\nPatents are important intellectual property protecting technological innovations that inspire efficient research and development in biomedicine. The number of awarded patents serves as an important indicator of economic growth and technological innovation. Researchers have mined patents to characterize the focuses and trends of technological innovations in many fields.\n\n\nOBJECTIVE\nTo expand patent mining to biomedicine and facilitate future resource allocation in biomedical research for the United States, we analyzed US patent documents to determine the focuses and trends of protected technological innovations across the entire disease landscape.\n\n\nMETHODS\nWe analyzed more than 5 million US patent documents between 1995 and 2017, using summary statistics and dynamic topic modeling. More specifically, we investigated the disease coverage and latent topics in patent documents over time. We also incorporated the patent data into the calculation of our recently developed Research Opportunity Index (ROI) and Public Health Index (PHI), to recalibrate the resource allocation in biomedical research.\n\n\nRESULTS\nOur analysis showed that protected technological innovations have been primarily focused on socioeconomically critical diseases such as \"other cancers\" (malignant neoplasm of head, face, neck, abdomen, pelvis, or limb; disseminated malignant neoplasm; Merkel cell carcinoma; and malignant neoplasm, malignant carcinoid tumors, neuroendocrine tumor, and carcinoma in situ of an unspecified site), diabetes mellitus, and obesity. The United States has significantly improved resource allocation to biomedical research and development over the past 17 years, as illustrated by the decreasing PHI. Diseases with positive ROI, such as ankle and foot fracture, indicate potential research opportunities for the future. Development of novel chemical or biological drugs and electrical devices for diagnosis and disease management is the dominating topic in patented inventions.\n\n\nCONCLUSIONS\nThis multifaceted analysis of patent documents provides a deep understanding of the focuses and trends of technological innovations in disease management in patents. Our findings offer insights into future research and innovation opportunities and provide actionable information to facilitate policy makers, payers, and investors to make better evidence-based decisions regarding resource allocation in biomedicine."
}
] |
JMIR Serious Games | 31804185 | PMC6923759 | 10.2196/13861 | Treating Children With Speech Sound Disorders: Development of a Tangible Artefact Prototype | BackgroundA prototype of a tangible user interface (TUI) for a fishing game, which is intended to be used by children with speech sound disorders (SSD), speech and language therapists (SLTs), and kindergarten teachers and assistants (KTAs) and parents alike, has been developed and tested.ObjectiveThe aim of this study was to answer the following question: How can TUIs be used as a tool to help in interventions for children with SSD?MethodsTo obtain feedback and to ensure that the prototype was being developed according to the needs of the identified target users, an exploratory test was prepared and carried out. During this test using an ethnographic approach, an observation grid, a semistructured questionnaire, and interviews were used to gather data. A total of 4 different types of stakeholders (sample size of 10) tested the prototype: 2 SLTs, 2 KTAs, and 6 children.ResultsThe analysis of quantitative and qualitative data revealed that the prototype addresses the existing needs of SLTs and KTAs, and it revealed that 5 out of 6 (83%) children enjoyed the activity. Results also revealed a high replay value, with all children saying they would play more.ConclusionsSerious games and tangible interaction for learning and problem solving serve both teachers and children, as children enjoy playing, and, through a playful approach, learning is facilitated. A clear pattern was observed: Children enjoyed playing, and numerous valid indicators showed the transposition of the traditional game into the TUI artefact was successful. The game is varied and rich enough to be attractive and fun. There is a clear need and interest in similar objects from SLTs and educators. However, the process should be even more iterative, with a multidisciplinary team, and all end users should be able to participate as co-designers. | Related WorkSome examples of good practices or cases of success can be found in the literature [35-39], but not one is an exact fit in terms of technological requirements, target population, or intervention area of the current project. A total of 3 projects were considered relevant for the conceptualization of FGTI: first, the table-to-tablet (T2T) intervention materials, designed to be a reliable and valid solution [40] to be used by Portuguese SLTs when treating children with SSD. It has a physical and a digital version, and SLTs can use them interchangeably, but one does not communicate with the other. Second, the LinguaBytes materials, from the Netherlands [41], comprises a full set of exercises and varied activities that are mediated by tangible artefacts. The aim of LinguaBytes was to be a tangible language learning system for toddlers with some form of motor disability. Third, Jabberstamp [42], developed by a team at the MIT Media Lab (Tangible Media Group), is a tool that allows children to add sound to their drawings, collages, or paintings, enabling them to communicate more effectively before developing or mastering writing skills. | [
"10912250",
"28710926",
"15541632",
"30312634",
"22995337",
"7394282",
"26685287",
"27421244",
"31644381"
] | [
{
"pmid": "10912250",
"title": "Prevalence and natural history of primary speech and language delay: findings from a systematic review of the literature.",
"abstract": "The prevalence and the natural history of primary speech and language delays were two of four domains covered in a systematic review of the literature related to screening for speech and language delay carried out for the NHS in the UK. The structure and process of the full literature review is introduced and criteria for inclusion in the two domains are specified. The resulting data set gave 16 prevalence estimates generated from 21 publications and 12 natural history studies generated from 18 publications. Results are summarized for six subdivisions of primary speech and language delays: (1) speech and/or language, (2) language only, (3) speech only, (4) expression with comprehension, (5) expression only and (6) comprehension only. Combination of the data suggests that both concurrent and predictive case definition can be problematic. Prediction improves if language is taken independently of speech and if expressive and receptive language are taken together. The results are discussed in terms of the need to develop a model of prevalence based on risk of subsequent difficulties."
},
{
"pmid": "28710926",
"title": "Speech and language therapy service delivery: overcoming limited provision for children.",
"abstract": "OBJECTIVES\nTo test an alternative Speech and Language Therapy (SLT) service delivery model based on partnerships between a University and local schools and charities, and to report on the impact and feasibility of intervention based on long-term outcome measures and three case studies with individual analysis of Reliable Change.\n\n\nSTUDY DESIGN\nThe following six-step model was tested: 1-establishing partnerships; 2-flagging children; 3-pre-treatment SLT assessment; 4-reporting and discussion with parents and teachers; 5-treatment; 6-post-treatment assessment. Case studies are presented.\n\n\nMETHODS\nA partnership was established with one kindergarten in a pre-test and a total of 25 kindergartens during the second phase of the process. A group of 139 children were then flagged and assessed. The following long-term outcomes (18 months post-therapy) were investigated: phonetic-phonological standardised test percentiles and raw scores; receptive and expressive language percentiles and raw scores according to a standardised language test; percentage of syllables stuttered; duration of stuttering moments; academic achievement in norm-tests' core areas (mathematics, Portuguese language and social studies). Case studies and a 95% credible interval analysis to assess Reliable Change are presented.\n\n\nRESULTS\nSeventy five (54%) children needed SLT support. Fifty (67%) of those children returned to the clinic for long-term assessments and the analysis of all outcome measures showed significant improvements in their performance, 18 months post-therapy. Case Studies Reliable Change analysis revealed a statistically significant improvement, which also clearly shows the feasibility and the positive impact of the intervention.\n\n\nCONCLUSIONS\nThis specialised and differentiated care network constitutes an alternative delivery system of SLT services that addresses the lack of support currently experienced by children and their families. The long-term outcome measures and the 95% credible interval analysis are reliable methods to determine the impact of interventions."
},
{
"pmid": "15541632",
"title": "Visual-auditory integration during speech imitation in autism.",
"abstract": "Children with autistic spectrum disorder (ASD) may have poor audio-visual integration, possibly reflecting dysfunctional 'mirror neuron' systems which have been hypothesised to be at the core of the condition. In the present study, a computer program, utilizing speech synthesizer software and a 'virtual' head (Baldi), delivered speech stimuli for identification in auditory, visual or bimodal conditions. Children with ASD were poorer than controls at recognizing stimuli in the unimodal conditions, but once performance on this measure was controlled for, no group difference was found in the bimodal condition. A group of participants with ASD were also trained to develop their speech-reading ability. Training improved visual accuracy and this also improved the children's ability to utilize visual information in their processing of speech. Overall results were compared to predictions from mathematical models based on integration and non-integration, and were most consistent with the integration model. We conclude that, whilst they are less accurate in recognizing stimuli in the unimodal condition, children with ASD show normal integration of visual and auditory speech stimuli. Given that training in recognition of visual speech was effective, children with ASD may benefit from multi-modal approaches in imitative therapy and language training."
},
{
"pmid": "30312634",
"title": "The Dynamic Duo: Combining noninvasive brain stimulation with cognitive interventions.",
"abstract": "Pharmacotherapy, psychotherapy, and non-invasive brain stimulation (NIBS)1 each show efficacy in the treatment of psychiatric disorders; however, more efficacious interventions are needed as reflected by an overall unmet need in mental health care. While each modality has typically been studied and developed as a monotherapy, in practice they are typically used in combination. Research has begun to emerge studying the potential synergistic actions of multi-modal, combination therapies. For example, NIBS combined with rehabilitation strategies have demonstrated some success for speech and motor rehabilitation in stroke patients. In this review we present evidence suggesting that combining NIBS with targeted, cognitive interventions offers a potentially powerful new approach to treating neuropsychiatric disorders. Here we focus on NIBS studies using transcranial direct current stimulation (tDCS)2 and transcranial magnetic stimulation (TMS)3 given that these modalities are relatively safe, noninvasive, and can be performed simultaneously with neurocognitive interventions. We review the concept of \"state dependent\" effects of NIBS and highlight how simultaneous or sequential cognitive interventions could help optimize NIBS therapy by providing further control of ongoing neural activity in targeted neural networks. This review spans a range of neuropsychiatric disorders including major depressive disorder, schizophrenia, generalized anxiety, and autism. For each disorder, we emphasize neuroanatomical circuitry that could be engaged with combination therapy and critically discuss the literature that has begun to emerge. Finally, we present possible underlying mechanisms and propose future research strategies that may further refine the potential of combination therapies."
},
{
"pmid": "22995337",
"title": "\"When he's around his brothers … he's not so quiet\": the private and public worlds of school-aged children with speech sound disorder.",
"abstract": "UNLABELLED\nChildren interact with people in context: including home, school, and in the community. Understanding children's relationships within context is important for supporting children's development. Using child-friendly methodologies, the purpose of this research was to understand the lives of children with speech sound disorder (SSD) in context. Thirty-four interviews were undertaken with six school-aged children identified with SSD, and their siblings, friends, parents, grandparents, and teachers. Interview transcripts, questionnaires, and children's drawings were analyzed to reveal that these children experienced the world in context dependent ways (private vs. public worlds). Family and close friends typically provided a safe, supportive environment where children could be themselves and participate in typical childhoods. In contrast, when out of these familiar contexts, the children often were frustrated, embarrassed, and withdrawn, their relationships changed, and they were unable to get their message across in public contexts. Speech-language pathology assessment and intervention could be enhanced by interweaving the valuable insights of children, siblings, friends, parents, teachers, and other adults within children's worlds to more effectively support these children in context.\n\n\nLEARNING OUTCOMES\n1. Recognize that children with SSD experience the world in different ways, depending on whether they are in private or public contexts. 2. Describe the changes in the roles of family and friends when children with SSD are in public contexts. 3. Discover the position of the child as central in Bronfenbrenner’s bioecological model. 4. Identify principles of child-friendly research. 5. Recognize the importance of considering the child in context during speech-language pathology assessment and intervention."
},
{
"pmid": "26685287",
"title": "Engaging Elderly People in Telemedicine Through Gamification.",
"abstract": "BACKGROUND\nTelemedicine can alleviate the increasing demand for elderly care caused by the rapidly aging population. However, user adherence to technology in telemedicine interventions is low and decreases over time. Therefore, there is a need for methods to increase adherence, specifically of the elderly user. A strategy that has recently emerged to address this problem is gamification. It is the application of game elements to nongame fields to motivate and increase user activity and retention.\n\n\nOBJECTIVE\nThis research aims to (1) provide an overview of existing theoretical frameworks for gamification and explore methods that specifically target the elderly user and (2) explore user classification theories for tailoring game content to the elderly user. This knowledge will provide a foundation for creating a new framework for applying gamification in telemedicine applications to effectively engage the elderly user by increasing and maintaining adherence.\n\n\nMETHODS\nWe performed a broad Internet search using scientific and nonscientific search engines and included information that described either of the following subjects: the conceptualization of gamification, methods to engage elderly users through gamification, or user classification theories for tailored game content.\n\n\nRESULTS\nOur search showed two main approaches concerning frameworks for gamification: from business practices, which mostly aim for more revenue, emerge an applied approach, while academia frameworks are developed incorporating theories on motivation while often aiming for lasting engagement. The search provided limited information regarding the application of gamification to engage elderly users, and a significant gap in knowledge on the effectiveness of a gamified application in practice. Several approaches for classifying users in general were found, based on archetypes and reasons to play, and we present them along with their corresponding taxonomies. The overview we created indicates great connectivity between these taxonomies.\n\n\nCONCLUSIONS\nGamification frameworks have been developed from different backgrounds-business and academia-but rarely target the elderly user. The effectiveness of user classifications for tailored game content in this context is not yet known. As a next step, we propose the development of a framework based on the hypothesized existence of a relation between preference for game content and personality."
},
{
"pmid": "27421244",
"title": "Gamification of Cognitive Assessment and Cognitive Training: A Systematic Review of Applications and Efficacy.",
"abstract": "BACKGROUND\nCognitive tasks are typically viewed as effortful, frustrating, and repetitive, which often leads to participant disengagement. This, in turn, may negatively impact data quality and/or reduce intervention effects. However, gamification may provide a possible solution. If game design features can be incorporated into cognitive tasks without undermining their scientific value, then data quality, intervention effects, and participant engagement may be improved.\n\n\nOBJECTIVES\nThis systematic review aims to explore and evaluate the ways in which gamification has already been used for cognitive training and assessment purposes. We hope to answer 3 questions: (1) Why have researchers opted to use gamification? (2) What domains has gamification been applied in? (3) How successful has gamification been in cognitive research thus far?\n\n\nMETHODS\nWe systematically searched several Web-based databases, searching the titles, abstracts, and keywords of database entries using the search strategy (gamif* OR game OR games) AND (cognit* OR engag* OR behavi* OR health* OR attention OR motiv*). Searches included papers published in English between January 2007 and October 2015.\n\n\nRESULTS\nOur review identified 33 relevant studies, covering 31 gamified cognitive tasks used across a range of disorders and cognitive domains. We identified 7 reasons for researchers opting to gamify their cognitive training and testing. We found that working memory and general executive functions were common targets for both gamified assessment and training. Gamified tests were typically validated successfully, although mixed-domain measurement was a problem. Gamified training appears to be highly engaging and does boost participant motivation, but mixed effects of gamification on task performance were reported.\n\n\nCONCLUSIONS\nHeterogeneous study designs and typically small sample sizes highlight the need for further research in both gamified training and testing. Nevertheless, careful application of gamification can provide a way to develop engaging and yet scientifically valid cognitive assessments, and it is likely worthwhile to continue to develop gamified cognitive tasks in the future."
},
{
"pmid": "31644381",
"title": "Comparing Traditional and Tablet-Based Intervention for Children With Speech Sound Disorders: A Randomized Controlled Trial.",
"abstract": "Purpose This article reports on the effectiveness of a novel tablet-based approach to phonological intervention and compares it to a traditional tabletop approach, targeting children with phonologically based speech sound disorders (SSD). Method Twenty-two Portuguese children with phonologically based SSD were randomly assigned to 1 of 2 interventions, tabletop or tablet (11 children in each group), and received intervention based on the same activities, with the only difference being the delivery. All children were treated by the same speech-language pathologist over 2 blocks of 6 weekly sessions, for 12 sessions of intervention. Participants were assessed at 3 time points: baseline; pre-intervention, after a 3-month waiting period; and post-intervention. Outcome measures included percentage of consonants correct, percentage of vowels correct, and percentage of phonemes correct. A generalization of target sounds was also explored. Results Both tabletop and tablet-based interventions were effective in improving percentage of consonants correct and percentage of phonemes correct scores, with an intervention effect only evident for percentage of vowels correct in the tablet group. Change scores across both interventions were significantly greater after the intervention, compared to baseline, indicating that the change was due to the intervention. High levels of generalization (60% and above for the majority of participants) were obtained across both tabletop and tablet groups. Conclusions The software proved to be as effective as a traditional tabletop approach in treating children with phonologically based SSD. These findings provide new evidence regarding the use of digital materials in improving speech in children with SSD. Supplemental Material https://doi.org/10.23641/asha.9989816."
}
] |
Health Information Science and Systems | 31915523 | PMC6928168 | 10.1007/s13755-019-0091-3 | Convolutional neural networks based efficient approach for classification of lung diseases | Treatment of lung diseases, which are the third most common cause of death in the world, is of great importance in the medical field. Many studies using lung sounds recorded with stethoscope have been conducted in the literature in order to diagnose the lung diseases with artificial intelligence-compatible devices and to assist the experts in their diagnosis. In this paper, ICBHI 2017 database which includes different sample frequencies, noise and background sounds was used for the classification of lung sounds. The lung sound signals were initially converted to spectrogram images by using time–frequency method. The short time Fourier transform (STFT) method was considered as time–frequency transformation. Two deep learning based approaches were used for lung sound classification. In the first approach, a pre-trained deep convolutional neural networks (CNN) model was used for feature extraction and a support vector machine (SVM) classifier was used in classification of the lung sounds. In the second approach, the pre-trained deep CNN model was fine-tuned (transfer learning) via spectrogram images for lung sound classification. The accuracies of the proposed methods were tested by using the ten-fold cross validation. The accuracies for the first and second proposed methods were 65.5% and 63.09%, respectively. The obtained accuracies were then compared with some of the existing results and it was seen that obtained scores were better than the other results. | Related worksIn [2], a data set consisting of crackle and non-crackle classes and a total of 6000 audio files were used for lung sound classification. Two feature extraction methods which use time–frequency (TF) and time-scale (TS) analysis were preferred for recognition of respiratory crackles. In the classification stage, k-Nearest Neighbors (k-NN), Support Vector Machine (SVM) and multi-layer sensor methods were used and the best accuracy was obtained with SVM classifier where the obtained accuracy score was 97.5%. In [3], two datasets namely continuous adventitious sound (CAS) and tracheal breath sound (TBS) were considered. TBS and CAS datasets were further divided into two sections: inspiratory and expiratory. TBS and CAS dataset have the following class labels; wheezing, stridor, rhonchi and mixture lung sounds. Distinction function, instantaneous kurtosis, and SampEn were used for feature extraction. The reported accuracy scores were in the range of 97.7% and 98.8% that were obtained with SVM classifier using the Radial Basis Function (RBF) kernel. In [4], MFCC was used for feature extraction of normal and wheeze sound files. Then, the method was trained and tested with the Gaussian Mixture Model (GMM), and the reported best accuracy was 94.2%. In [5], genetic algorithm and Fisher’s discriminant ratio were used to reduce dimension, and Higher Order Statistics (HOS) were used to extract features from respiratory sounds which consist of normal, coarse crackle, fine crackle, monophonic and polyphonic wheezes. The obtained accuracy score was 94.6%. In [6], the authors used the ICBHI 2017 challenge database which has normal, wheezes, crackles and wheezes plus crackles class labels. The ICBHI 2017 is a challenging database, since there are noises, background sounds and different sampling frequencies (4 kHz, 10 kHz, 44.1 kHz). 
In [7], spectral features and a Decision Tree were chosen for feature extraction and classification, respectively. In [8], MFCC features were used for feature extraction, and a method that uses Gaussian mixture model (GMM) and hidden Markov model (HMM) classifiers together was developed for classification. In [5], the authors chose the short time Fourier transform (STFT) and STFT + Wavelet to extract features, and principal component analysis (PCA) to reduce the processing load, while testing the algorithm performance with the SVM classifier.
In this paper, the aim was to boost the classification performance on the ICBHI 2017 database, which is quite challenging. In this context, spectrogram images were used to obtain a time–frequency representation of the lung sounds. These spectrogram images were then used as input for deep feature extraction and transfer learning. SVM and softmax classifiers were used with the deep feature and transfer learning approaches, respectively. The performance of the proposed methods was evaluated using accuracy, sensitivity, and specificity scores, and the results were compared with some of the existing results. The proposed schemes improved the classification performance of lung sound discrimination (see the pipeline sketch after this record). | [
"27286184",
"19631934",
"24680639",
"30279988",
"29147563"
] | [
{
"pmid": "27286184",
"title": "Lung sound classification using cepstral-based statistical features.",
"abstract": "Lung sounds convey useful information related to pulmonary pathology. In this paper, short-term spectral characteristics of lung sounds are studied to characterize the lung sounds for the identification of associated diseases. Motivated by the success of cepstral features in speech signal classification, we evaluate five different cepstral features to recognize three types of lung sounds: normal, wheeze and crackle. Subsequently for fast and efficient classification, we propose a new feature set computed from the statistical properties of cepstral coefficients. Experiments are conducted on a dataset of 30 subjects using the artificial neural network (ANN) as a classifier. Results show that the statistical features extracted from mel-frequency cepstral coefficients (MFCCs) of lung sounds outperform commonly used wavelet-based features as well as standard cepstral coefficients including MFCCs. Further, we experimentally optimize different control parameters of the proposed feature extraction algorithm. Finally, we evaluate the features for noisy lung sound recognition. We have found that our newly investigated features are more robust than existing features and show better recognition accuracy even in low signal-to-noise ratios (SNRs)."
},
{
"pmid": "19631934",
"title": "Pattern recognition methods applied to respiratory sounds classification into normal and wheeze classes.",
"abstract": "In this paper, we present the pattern recognition methods proposed to classify respiratory sounds into normal and wheeze classes. We evaluate and compare the feature extraction techniques based on Fourier transform, linear predictive coding, wavelet transform and Mel-frequency cepstral coefficients (MFCC) in combination with the classification methods based on vector quantization, Gaussian mixture models (GMM) and artificial neural networks, using receiver operating characteristic curves. We propose the use of an optimized threshold to discriminate the wheezing class from the normal one. Also, post-processing filter is employed to considerably improve the classification accuracy. Experimental results show that our approach based on MFCC coefficients combined to GMM is well adapted to classify respiratory sounds in normal and wheeze classes. McNemar's test demonstrated significant difference between results obtained by the presented classifiers (p<0.05)."
},
{
"pmid": "24680639",
"title": "Assessment of time-frequency representation techniques for thoracic sounds analysis.",
"abstract": "A step forward in the knowledge about the underlying physiological phenomena of thoracic sounds requires a reliable estimate of their time-frequency behavior that overcomes the disadvantages of the conventional spectrogram. A more detailed time-frequency representation could lead to a better feature extraction for diseases classification and stratification purposes, among others. In this respect, the aim of this study was to look for an omnibus technique to obtain the time-frequency representation (TFR) of thoracic sounds by comparing generic goodness-of-fit criteria in different simulated thoracic sounds scenarios. The performance of ten TFRs for heart, normal tracheal and adventitious lung sounds was assessed using time-frequency patterns obtained by mathematical functions of the thoracic sounds. To find the best TFR performance measures, such as the 2D local (ρ(mean)) and global (ρ) central correlation, the normalized root-mean-square error (NRMSE), the cross-correlation coefficient (ρ(IF)) and the time-frequency resolution (res(TF)) were used. Simulation results pointed out that the Hilbert-Huang spectrum (HHS) had a superior performance as compared with other techniques and then, it can be considered as a reliable TFR for thoracic sounds. Furthermore, the goodness of HHS was assessed using noisy simulated signals. Additionally, HHS was applied to first and second heart sounds taken from a young healthy male subject, to tracheal sound from a middle-age healthy male subject, and to abnormal lung sounds acquired from a male patient with diffuse interstitial pneumonia. It is expected that the results of this research could be used to obtain a better signature of thoracic sounds for pattern recognition purpose, among other tasks."
},
{
"pmid": "30279988",
"title": "Transfer learning based histopathologic image classification for breast cancer detection.",
"abstract": "Breast cancer is one of the leading cancer type among women in worldwide. Many breast cancer patients die every year due to the late diagnosis and treatment. Thus, in recent years, early breast cancer detection systems based on patient's imagery are in demand. Deep learning attracts many researchers recently and many computer vision applications have come out in various environments. Convolutional neural network (CNN) which is known as deep learning architecture, has achieved impressive results in many applications. CNNs generally suffer from tuning a huge number of parameters which bring a great amount of complexity to the system. In addition, the initialization of the weights of the CNN is another handicap that needs to be handle carefully. In this paper, transfer learning and deep feature extraction methods are used which adapt a pre-trained CNN model to the problem at hand. AlexNet and Vgg16 models are considered in the presented work for feature extraction and AlexNet is used for further fine-tuning. The obtained features are then classified by support vector machines (SVM). Extensive experiments on a publicly available histopathologic breast cancer dataset are carried out and the accuracy scores are calculated for performance evaluation. The evaluation results show that the transfer learning produced better result than deep feature extraction and SVM classification."
},
{
"pmid": "29147563",
"title": "A novel microaneurysms detection approach based on convolutional neural networks with reinforcement sample learning algorithm.",
"abstract": "Microaneurysms (MAs) are known as early signs of diabetic-retinopathy which are called red lesions in color fundus images. Detection of MAs in fundus images needs highly skilled physicians or eye angiography. Eye angiography is an invasive and expensive procedure. Therefore, an automatic detection system to identify the MAs locations in fundus images is in demand. In this paper, we proposed a system to detect the MAs in colored fundus images. The proposed method composed of three stages. In the first stage, a series of pre-processing steps are used to make the input images more convenient for MAs detection. To this end, green channel decomposition, Gaussian filtering, median filtering, back ground determination, and subtraction operations are applied to input colored fundus images. After pre-processing, a candidate MAs extraction procedure is applied to detect potential regions. A five-stepped procedure is adopted to get the potential MA locations. Finally, deep convolutional neural network (DCNN) with reinforcement sample learning strategy is used to train the proposed system. The DCNN is trained with color image patches which are collected from ground-truth MA locations and non-MA locations. We conducted extensive experiments on ROC dataset to evaluate of our proposal. The results are encouraging."
}
] |
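The record above converts lung sound recordings into STFT spectrogram images and then either feeds them to a pre-trained CNN for deep feature extraction followed by an SVM, or fine-tunes the CNN with a softmax output. The sketch below is a minimal, hedged illustration of the first variant only; the file names, binary labels, window length, image size, and choice of VGG16 are assumptions for illustration and do not reproduce the paper's exact preprocessing or its four-class setup.

```python
# Hedged sketch: lung sound recording -> STFT spectrogram image ->
# pre-trained CNN deep features -> SVM with 10-fold cross-validation.
# File names, labels and hyperparameters are placeholders, not the
# paper's exact configuration.
import numpy as np
import tensorflow as tf
from scipy.io import wavfile
from scipy.signal import stft
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score, cross_val_predict
from sklearn.metrics import confusion_matrix

def spectrogram_image(path, nperseg=256, size=(224, 224)):
    """Compute an STFT magnitude spectrogram (in dB) and resize it to a
    3-channel image so it can be fed to an ImageNet-pretrained CNN."""
    fs, x = wavfile.read(path)
    x = x.astype(np.float32)
    if x.ndim > 1:                                   # collapse stereo channels
        x = x.mean(axis=1)
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)
    S = 20.0 * np.log10(np.abs(Z) + 1e-10)           # dB magnitude
    S = (S - S.min()) / (S.max() - S.min() + 1e-10)  # scale to [0, 1]
    img = tf.image.resize(S[..., None], size).numpy()
    return np.repeat(img, 3, axis=-1) * 255.0        # replicate as fake RGB

# Placeholder recordings and binary labels (0 = normal, 1 = adventitious);
# these stand in for the full annotated corpus, and 10-fold CV assumes
# enough recordings per class.
files = ["rec_001.wav", "rec_002.wav", "rec_003.wav"]
labels = np.array([0, 1, 0])

cnn = VGG16(weights="imagenet", include_top=False, pooling="avg")
images = np.stack([spectrogram_image(f) for f in files])
features = cnn.predict(preprocess_input(images))     # one 512-d vector per file

clf = SVC(kernel="linear")
acc = cross_val_score(clf, features, labels, cv=10, scoring="accuracy")
print("10-fold accuracy: %.3f +/- %.3f" % (acc.mean(), acc.std()))

# Sensitivity and specificity from the pooled cross-validated predictions.
pred = cross_val_predict(clf, features, labels, cv=10)
tn, fp, fn, tp = confusion_matrix(labels, pred).ravel()
print("sensitivity %.3f, specificity %.3f" % (tp / (tp + fn), tn / (tn + fp)))
```

For the transfer-learning variant described in the record, the same spectrogram images would instead be used to fine-tune the upper layers of the pre-trained network with a softmax output over the sound classes.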
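The next record (JMIR Medical Informatics) and several of the abstracts it cites describe supervised classifiers, including Naive Bayes, SVM, boosting, and stacking, for recognizing methodologically rigorous articles from titles and abstracts. The sketch below is a generic, hedged illustration of that family of models, not the E-QRM implementation reported in the record: it stacks a Naive Bayes and a linear SVM over TF-IDF features with a logistic regression meta-learner, and the toy citations and labels are placeholders.

```python
# Hedged sketch: stacking ensemble over TF-IDF features for classifying
# citations as methodologically rigorous (1) or not (0). Generic example,
# not the E-QRM model; the texts and labels below are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import StackingClassifier
from sklearn.pipeline import make_pipeline

texts = [  # title + abstract strings standing in for a labelled corpus
    "randomized controlled trial of drug X versus placebo in adults",
    "case report describing a single patient with condition Y",
    "double blind multicenter trial evaluating intervention Z",
    "narrative review of mechanisms underlying disease W",
]
labels = [1, 0, 1, 0]

ensemble = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=1),
    StackingClassifier(
        estimators=[("nb", MultinomialNB()), ("svm", LinearSVC())],
        final_estimator=LogisticRegression(max_iter=1000),
        cv=2,  # small only because the placeholder corpus is tiny
    ),
)
ensemble.fit(texts, labels)
print(ensemble.predict(["open label pilot study of drug X in children"]))
```

In practice, such text features are often combined with metadata such as publication type, which one of the cited abstracts reports as a particularly informative signal for evidence quality.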
JMIR Medical Informatics | 31815673 | PMC6928703 | 10.2196/13430 | Impact of Automatic Query Generation and Quality Recognition Using Deep Learning to Curate Evidence From Biomedical Literature: Empirical Study | BackgroundThe quality of health care is continuously improving and is expected to improve further because of the advancement of machine learning and knowledge-based techniques along with innovation and availability of wearable sensors. With these advancements, health care professionals are now becoming more interested and involved in seeking scientific research evidence from external sources for decision making relevant to medical diagnosis, treatments, and prognosis. Not much work has been done to develop methods for unobtrusive and seamless curation of data from the biomedical literature.ObjectiveThis study aimed to design a framework that can enable bringing quality publications intelligently to the users’ desk to assist medical practitioners in answering clinical questions and fulfilling their informational needs.MethodsThe proposed framework consists of methods for efficient biomedical literature curation, including the automatic construction of a well-built question, the recognition of evidence quality by proposing extended quality recognition model (E-QRM), and the ranking and summarization of the extracted evidence.ResultsUnlike previous works, the proposed framework systematically integrates the echelons of biomedical literature curation by including methods for searching queries, content quality assessments, and ranking and summarization. Using an ensemble approach, our high-impact classifier E-QRM obtained significantly improved accuracy than the existing quality recognition model (1723/1894, 90.97% vs 1462/1894, 77.21%).ConclusionsOur proposed methods and evaluation demonstrate the validity and rigorousness of the results, which can be used in different applications, including evidence-based medicine, precision medicine, and medical education. | Related Work on Finding High-Quality Articles in the LiteratureA decent set of approaches is available that had improvised the results of literature searching with respect to quality of studies. The PubMed Clinical Queries (CQ) [14] is one of the most prominent endeavors to retrieve scientifically sound studies from the biomedical literature. Afterward, supervised ML approaches were introduced mainly to improve the precision of the results in terms of quality checking for methodological rigorousness. Similarly, to find high-quality papers in MEDLINE, Wilczynski et al [15] developed CQ filters, which were later adapted by PubMed for use as CQ. The data collection used in the CQ filters is annotated across the following 4 dimensions: the format, the human health care, the purpose, and the scientific rigor. The experimental studies [16,17] introduced ML (supervised learning) classification models to differentiate between the methodologically rigorous and the nonrigorous articles. In an article about evidence quality prediction [18], the authors addressed the problem of automatic grading of evidence on a chosen discrete scale. The authors experimented many features, such as publication year, avenue, and type to evaluate the quality of the evidence. They found that the publication type is the most eminent feature to consider for evaluation of the evidence quality results. 
More recently, a deep learning approach based on a convolutional neural network (CNN) [19] was applied to this task in an attempt to further improve precision over the existing approaches, namely the PubMed CQ filters and McMaster's text-word search. | [
"25460529",
"27807747",
"25830358",
"26573247",
"12597509",
"18528510",
"22480327",
"30545485",
"29763706",
"23019242",
"15969765",
"15561789",
"18952929",
"25983133",
"29941415",
"17653438",
"27454608",
"28766402",
"27989816",
"20470429",
"23899909",
"17202161",
"25398906",
"26582918",
"28815124",
"29728351",
"27652374"
] | [
{
"pmid": "25460529",
"title": "Health care 2020: reengineering health care delivery to combat chronic disease.",
"abstract": "Chronic disease has become the great epidemic of our times, responsible for 75% of total health care costs and the majority of deaths in the US. Our current delivery model is poorly constructed to manage chronic disease, as evidenced by low adherence to quality indicators and poor control of treatable conditions. New technologies have emerged that can engage patients and offer additional modalities in the treatment of chronic disease. Modifying our delivery model to include team-based care in concert with patient-centered technologies offers great promise in managing the chronic disease epidemic."
},
{
"pmid": "27807747",
"title": "Text Mining for Precision Medicine: Bringing Structure to EHRs and Biomedical Literature to Understand Genes and Health.",
"abstract": "The key question of precision medicine is whether it is possible to find clinically actionable granularity in diagnosing disease and classifying patient risk. The advent of next-generation sequencing and the widespread adoption of electronic health records (EHRs) have provided clinicians and researchers a wealth of data and made possible the precise characterization of individual patient genotypes and phenotypes. Unstructured text-found in biomedical publications and clinical notes-is an important component of genotype and phenotype knowledge. Publications in the biomedical literature provide essential information for interpreting genetic data. Likewise, clinical notes contain the richest source of phenotype information in EHRs. Text mining can render these texts computationally accessible and support information extraction and hypothesis generation. This chapter reviews the mechanics of text mining in precision medicine and discusses several specific use cases, including database curation for personalized cancer medicine, patient outcome prediction from EHR-derived cohorts, and pharmacogenomic research. Taken as a whole, these use cases demonstrate how text mining enables effective utilization of existing knowledge sources and thus promotes increased value for patients and healthcare systems. Text mining is an indispensable tool for translating genotype-phenotype data into effective clinical care that will undoubtedly play an important role in the eventual realization of precision medicine."
},
{
"pmid": "25830358",
"title": "Mapping publication trends and identifying hot spots of research on Internet health information seeking behavior: a quantitative and co-word biclustering analysis.",
"abstract": "BACKGROUND\nThe Internet has become an established source of health information for people seeking health information. In recent years, research on the health information seeking behavior of Internet users has become an increasingly important scholarly focus. However, there have been no long-term bibliometric studies to date on Internet health information seeking behavior.\n\n\nOBJECTIVE\nThe purpose of this study was to map publication trends and explore research hot spots of Internet health information seeking behavior.\n\n\nMETHODS\nA bibliometric analysis based on PubMed was conducted to investigate the publication trends of research on Internet health information seeking behavior. For the included publications, the annual publication number, the distribution of countries, authors, languages, journals, and annual distribution of highly frequent major MeSH (Medical Subject Headings) terms were determined. Furthermore, co-word biclustering analysis of highly frequent major MeSH terms was utilized to detect the hot spots in this field.\n\n\nRESULTS\nA total of 533 publications were included. The research output was gradually increasing. There were five authors who published four or more articles individually. A total of 271 included publications (50.8%) were written by authors from the United States, and 516 of the 533 articles (96.8%) were published in English. The eight most active journals published 34.1% (182/533) of the publications on this topic. Ten research hot spots were found: (1) behavior of Internet health information seeking about HIV infection or sexually transmitted diseases, (2) Internet health information seeking behavior of students, (3) behavior of Internet health information seeking via mobile phone and its apps, (4) physicians' utilization of Internet medical resources, (5) utilization of social media by parents, (6) Internet health information seeking behavior of patients with cancer (mainly breast cancer), (7) trust in or satisfaction with Web-based health information by consumers, (8) interaction between Internet utilization and physician-patient communication or relationship, (9) preference and computer literacy of people using search engines or other Web-based systems, and (10) attitude of people (especially adolescents) when seeking health information via the Internet.\n\n\nCONCLUSIONS\nThe 10 major research hot spots could provide some hints for researchers when launching new projects. The output of research on Internet health information seeking behavior is gradually increasing. Compared to the United States, the relatively small number of publications indexed by PubMed from other developed and developing countries indicates to some extent that the field might be still underdeveloped in many countries. More studies on Internet health information seeking behavior could give some references for health information providers."
},
{
"pmid": "26573247",
"title": "Data-driven knowledge acquisition, validation, and transformation into HL7 Arden Syntax.",
"abstract": "OBJECTIVE\nThe objective of this study is to help a team of physicians and knowledge engineers acquire clinical knowledge from existing practices datasets for treatment of head and neck cancer, to validate the knowledge against published guidelines, to create refined rules, and to incorporate these rules into clinical workflow for clinical decision support.\n\n\nMETHODS AND MATERIALS\nA team of physicians (clinical domain experts) and knowledge engineers adapt an approach for modeling existing treatment practices into final executable clinical models. For initial work, the oral cavity is selected as the candidate target area for the creation of rules covering a treatment plan for cancer. The final executable model is presented in HL7 Arden Syntax, which helps the clinical knowledge be shared among organizations. We use a data-driven knowledge acquisition approach based on analysis of real patient datasets to generate a predictive model (PM). The PM is converted into a refined-clinical knowledge model (R-CKM), which follows a rigorous validation process. The validation process uses a clinical knowledge model (CKM), which provides the basis for defining underlying validation criteria. The R-CKM is converted into a set of medical logic modules (MLMs) and is evaluated using real patient data from a hospital information system.\n\n\nRESULTS\nWe selected the oral cavity as the intended site for derivation of all related clinical rules for possible associated treatment plans. A team of physicians analyzed the National Comprehensive Cancer Network (NCCN) guidelines for the oral cavity and created a common CKM. Among the decision tree algorithms, chi-squared automatic interaction detection (CHAID) was applied to a refined dataset of 1229 patients to generate the PM. The PM was tested on a disjoint dataset of 739 patients, which gives 59.0% accuracy. Using a rigorous validation process, the R-CKM was created from the PM as the final model, after conforming to the CKM. The R-CKM was converted into four candidate MLMs, and was used to evaluate real data from 739 patients, yielding efficient performance with 53.0% accuracy.\n\n\nCONCLUSION\nData-driven knowledge acquisition and validation against published guidelines were used to help a team of physicians and knowledge engineers create executable clinical knowledge. The advantages of the R-CKM are twofold: it reflects real practices and conforms to standard guidelines, while providing optimal accuracy comparable to that of a PM. The proposed approach yields better insight into the steps of knowledge acquisition and enhances collaboration efforts of the team of physicians and knowledge engineers."
},
{
"pmid": "12597509",
"title": "Evidence-based practice revisited.",
"abstract": "The evidence-based practice (EBP) movement has gathered considerable momentum both locally and abroad since first promoted a decade ago. This paper presents an updated narrative overview of EBP from the clinical and public health perspectives. First, the origins and definition of EBP and how clinicians should go about incorporating it into routine practice are described. Reasons for adopting the EBP philosophy are outlined and the way to learn the process described. The latter can be summarised as the five A's of EBP--assess, ask, acquire, appraise and apply. Limitations of the approach and misperceptions about its weaknesses are also discussed. Potential solutions are offered and areas for future work in the discipline of EBP are highlighted, with particular reference to Hong Kong's situation and that of elsewhere in Asia."
},
{
"pmid": "18528510",
"title": "An integrated approach to computer-based decision support at the point of care.",
"abstract": "Information needs that arise when clinicians use clinical information systems often go unresolved, forcing clinicians to defer decisions or make them with incomplete knowledge. My research characterizes these needs in order to build information systems that can help clinicians get timely answers to their questions. My colleagues and I have developed \"Infobuttons\", which are links between clinical information systems and on-line knowledge resources, and have developed an \"Infobutton Manager\" (IM) that attempts to determine the information need based on the context of what the user is doing. The IM presents users with a set of questions, each of which is a link to an online information resource that will answer the question. The Infobutton Manager has been successfully deployed in five systems at four institutions and provides users with over 1,000 accesses to on-line health information each month, with a positive impact on patient care."
},
{
"pmid": "22480327",
"title": "CDAPubMed: a browser extension to retrieve EHR-based biomedical literature.",
"abstract": "BACKGROUND\nOver the last few decades, the ever-increasing output of scientific publications has led to new challenges to keep up to date with the literature. In the biomedical area, this growth has introduced new requirements for professionals, e.g., physicians, who have to locate the exact papers that they need for their clinical and research work amongst a huge number of publications. Against this backdrop, novel information retrieval methods are even more necessary. While web search engines are widespread in many areas, facilitating access to all kinds of information, additional tools are required to automatically link information retrieved from these engines to specific biomedical applications. In the case of clinical environments, this also means considering aspects such as patient data security and confidentiality or structured contents, e.g., electronic health records (EHRs). In this scenario, we have developed a new tool to facilitate query building to retrieve scientific literature related to EHRs.\n\n\nRESULTS\nWe have developed CDAPubMed, an open-source web browser extension to integrate EHR features in biomedical literature retrieval approaches. Clinical users can use CDAPubMed to: (i) load patient clinical documents, i.e., EHRs based on the Health Level 7-Clinical Document Architecture Standard (HL7-CDA), (ii) identify relevant terms for scientific literature search in these documents, i.e., Medical Subject Headings (MeSH), automatically driven by the CDAPubMed configuration, which advanced users can optimize to adapt to each specific situation, and (iii) generate and launch literature search queries to a major search engine, i.e., PubMed, to retrieve citations related to the EHR under examination.\n\n\nCONCLUSIONS\nCDAPubMed is a platform-independent tool designed to facilitate literature searching using keywords contained in specific EHRs. CDAPubMed is visually integrated, as an extension of a widespread web browser, within the standard PubMed interface. It has been tested on a public dataset of HL7-CDA documents, returning significantly fewer citations since queries are focused on characteristics identified within the EHR. For instance, compared with more than 200,000 citations retrieved by breast neoplasm, fewer than ten citations were retrieved when ten patient features were added using CDAPubMed. This is an open source tool that can be freely used for non-profit purposes and integrated with other existing systems."
},
{
"pmid": "30545485",
"title": "ProvCaRe: Characterizing scientific reproducibility of biomedical research studies using semantic provenance metadata.",
"abstract": "OBJECTIVE\nReproducibility of research studies is key to advancing biomedical science by building on sound results and reducing inconsistencies between published results and study data. We propose that the available data from research studies combined with provenance metadata provide a framework for evaluating scientific reproducibility. We developed the ProvCaRe platform to model, extract, and query semantic provenance information from 435, 248 published articles.\n\n\nMETHODS\nThe ProvCaRe platform consists of: (1) the S3 model and a formal ontology; (2) a provenance-focused text processing workflow to generate provenance triples consisting of subject, predicate, and object using metadata extracted from articles; and (3) the ProvCaRe knowledge repository that supports \"provenance-aware\" hypothesis-driven search queries. A new provenance-based ranking algorithm is used to rank the articles in the search query results.\n\n\nRESULTS\nThe ProvCaRe knowledge repository contains 48.9 million provenance triples. Seven research hypotheses were used as search queries for evaluation and the resulting provenance triples were analyzed using five categories of provenance terms. The highest number of terms (34%) described provenance related to population cohort followed by 29% of terms describing statistical data analysis methods, and only 5% of the terms described the measurement instruments used in a study. In addition, the analysis showed that some articles included a higher number of provenance terms across multiple provenance categories suggesting a higher potential for reproducibility of these research studies.\n\n\nCONCLUSION\nThe ProvCaRe knowledge repository (https://provcare.\n\n\nCASE\nedu/) is one of the largest provenance resources for biomedical research studies that combines intuitive search functionality with a new provenance-based ranking feature to list articles related to a search query."
},
{
"pmid": "29763706",
"title": "Exploiting semantic patterns over biomedical knowledge graphs for predicting treatment and causative relations.",
"abstract": "BACKGROUND\nIdentifying new potential treatment options for medical conditions that cause human disease burden is a central task of biomedical research. Since all candidate drugs cannot be tested with animal and clinical trials, in vitro approaches are first attempted to identify promising candidates. Likewise, identifying different causal relations between biomedical entities is also critical to understand biomedical processes. Generally, natural language processing (NLP) and machine learning are used to predict specific relations between any given pair of entities using the distant supervision approach.\n\n\nOBJECTIVE\nTo build high accuracy supervised predictive models to predict previously unknown treatment and causative relations between biomedical entities based only on semantic graph pattern features extracted from biomedical knowledge graphs.\n\n\nMETHODS\nWe used 7000 treats and 2918 causes hand-curated relations from the UMLS Metathesaurus to train and test our models. Our graph pattern features are extracted from simple paths connecting biomedical entities in the SemMedDB graph (based on the well-known SemMedDB database made available by the U.S. National Library of Medicine). Using these graph patterns connecting biomedical entities as features of logistic regression and decision tree models, we computed mean performance measures (precision, recall, F-score) over 100 distinct 80-20% train-test splits of the datasets. For all experiments, we used a positive:negative class imbalance of 1:10 in the test set to model relatively more realistic scenarios.\n\n\nRESULTS\nOur models predict treats and causes relations with high F-scores of 99% and 90% respectively. Logistic regression model coefficients also help us identify highly discriminative patterns that have an intuitive interpretation. We are also able to predict some new plausible relations based on false positives that our models scored highly based on our collaborations with two physician co-authors. Finally, our decision tree models are able to retrieve over 50% of treatment relations from a recently created external dataset.\n\n\nCONCLUSIONS\nWe employed semantic graph patterns connecting pairs of candidate biomedical entities in a knowledge graph as features to predict treatment/causative relations between them. We provide what we believe is the first evidence in direct prediction of biomedical relations based on graph features. Our work complements lexical pattern based approaches in that the graph patterns can be used as additional features for weakly supervised relation prediction."
},
{
"pmid": "23019242",
"title": "MEDLINE clinical queries are robust when searching in recent publishing years.",
"abstract": "OBJECTIVE\nTo determine if the PubMed and Ovid MEDLINE clinical queries (which were developed in the publishing year 2000, for the purpose categories therapy, diagnosis, prognosis, etiology, and clinical prediction guides) perform as well when searching in current publishing years.\n\n\nMETHODS\nA gold standard database of recently published research literature was created using the McMaster health knowledge refinery (http://hiru.mcmaster.ca/hiru/HIRU_McMaster_HKR.aspx) and its continuously updated database, McMaster PLUS (http://hiru.mcmaster.ca/hiru/HIRU_McMaster_PLUS_projects.aspx). This database contains articles from over 120 clinical journals that are tagged for meeting or not meeting criteria for scientific merit and clinical relevance. The clinical queries sensitive ('broad') and specific ('narrow') search filters were tested in this gold standard database, and sensitivity and specificity were calculated and compared with those originally reported for the clinical queries.\n\n\nRESULTS\nIn all cases, the sensitivity of the highly sensitive search filters and the specificity of the highly specific search filters did not differ substantively when comparing results derived in 2000 with those derived in a more current database. In addition, in all cases, the specificities for the highly sensitive search filters and the sensitivities for the highly specific search filters remained above 50% when testing them in the current database.\n\n\nDISCUSSION\nThese results are reassuring for modern-day searchers. The clinical queries that were derived in the year 2000 perform equally well a decade later.\n\n\nCONCLUSION\nThe PubMed and Ovid MEDLINE clinical queries have been revalidated and remain a useful public resource for searching the world's medical literature for research that is most relevant to clinical care."
},
{
"pmid": "15969765",
"title": "An overview of the design and methods for retrieving high-quality studies for clinical care.",
"abstract": "BACKGROUND\nWith the information explosion, the retrieval of the best clinical evidence from large, general purpose, bibliographic databases such as MEDLINE can be difficult. Both researchers conducting systematic reviews and clinicians faced with a patient care question are confronted with the daunting task of searching for the best medical literature in electronic databases. Many have advocated the use of search filters or \"hedges\" to assist with the searching process. The purpose of this report is to describe the design and methods of a study that set out to develop optimal search strategies for retrieving sound clinical studies of health disorders in large electronics databases.\n\n\nOBJECTIVE\nTo describe the design and methods of a study that set out to develop optimal search strategies for retrieving sound clinical studies of health disorders in large electronic databases.\n\n\nDESIGN\nAn analytic survey comparing hand searches of 170 journals in the year 2000 with retrievals from MEDLINE, EMBASE, CINAHL, and PsycINFO for candidate search terms and combinations. The sensitivity, specificity, precision, and accuracy of unique search terms and combinations of search terms were calculated.\n\n\nCONCLUSION\nA study design modeled after a diagnostic testing procedure with a gold standard (the hand search of the literature) and a test (the search terms) is an effective way of developing, testing, and validating search strategies for use in large electronic databases."
},
{
"pmid": "15561789",
"title": "Text categorization models for high-quality article retrieval in internal medicine.",
"abstract": "OBJECTIVE Finding the best scientific evidence that applies to a patient problem is becoming exceedingly difficult due to the exponential growth of medical publications. The objective of this study was to apply machine learning techniques to automatically identify high-quality, content-specific articles for one time period in internal medicine and compare their performance with previous Boolean-based PubMed clinical query filters of Haynes et al. DESIGN The selection criteria of the ACP Journal Club for articles in internal medicine were the basis for identifying high-quality articles in the areas of etiology, prognosis, diagnosis, and treatment. Naive Bayes, a specialized AdaBoost algorithm, and linear and polynomial support vector machines were applied to identify these articles. MEASUREMENTS The machine learning models were compared in each category with each other and with the clinical query filters using area under the receiver operating characteristic curves, 11-point average recall precision, and a sensitivity/specificity match method. RESULTS In most categories, the data-induced models have better or comparable sensitivity, specificity, and precision than the clinical query filters. The polynomial support vector machine models perform the best among all learning methods in ranking the articles as evaluated by area under the receiver operating curve and 11-point average recall precision. CONCLUSION This research shows that, using machine learning methods, it is possible to automatically build models for retrieving high-quality, content-specific articles using inclusion or citation by the ACP Journal Club as a gold standard in a given time period in internal medicine that perform better than the 1994 PubMed clinical query filters."
},
{
"pmid": "18952929",
"title": "Towards automatic recognition of scientifically rigorous clinical research evidence.",
"abstract": "The growing numbers of topically relevant biomedical publications readily available due to advances in document retrieval methods pose a challenge to clinicians practicing evidence-based medicine. It is increasingly time consuming to acquire and critically appraise the available evidence. This problem could be addressed in part if methods were available to automatically recognize rigorous studies immediately applicable in a specific clinical situation. We approach the problem of recognizing studies containing useable clinical advice from retrieved topically relevant articles as a binary classification problem. The gold standard used in the development of PubMed clinical query filters forms the basis of our approach. We identify scientifically rigorous studies using supervised machine learning techniques (Naïve Bayes, support vector machine (SVM), and boosting) trained on high-level semantic features. We combine these methods using an ensemble learning method (stacking). The performance of learning methods is evaluated using precision, recall and F(1) score, in addition to area under the receiver operating characteristic (ROC) curve (AUC). Using a training set of 10,000 manually annotated MEDLINE citations, and a test set of an additional 2,000 citations, we achieve 73.7% precision and 61.5% recall in identifying rigorous, clinically relevant studies, with stacking over five feature-classifier combinations and 82.5% precision and 84.3% recall in recognizing rigorous studies with treatment focus using stacking over word + metadata feature vector. Our results demonstrate that a high quality gold standard and advanced classification methods can help clinicians acquire best evidence from the medical literature."
},
{
"pmid": "25983133",
"title": "Automatic evidence quality prediction to support evidence-based decision making.",
"abstract": "BACKGROUND\nEvidence-based medicine practice requires practitioners to obtain the best available medical evidence, and appraise the quality of the evidence when making clinical decisions. Primarily due to the plethora of electronically available data from the medical literature, the manual appraisal of the quality of evidence is a time-consuming process. We present a fully automatic approach for predicting the quality of medical evidence in order to aid practitioners at point-of-care.\n\n\nMETHODS\nOur approach extracts relevant information from medical article abstracts and utilises data from a specialised corpus to apply supervised machine learning for the prediction of the quality grades. Following an in-depth analysis of the usefulness of features (e.g., publication types of articles), they are extracted from the text via rule-based approaches and from the meta-data associated with the articles, and then applied in the supervised classification model. We propose the use of a highly scalable and portable approach using a sequence of high precision classifiers, and introduce a simple evaluation metric called average error distance (AED) that simplifies the comparison of systems. We also perform elaborate human evaluations to compare the performance of our system against human judgments.\n\n\nRESULTS\nWe test and evaluate our approaches on a publicly available, specialised, annotated corpus containing 1132 evidence-based recommendations. Our rule-based approach performs exceptionally well at the automatic extraction of publication types of articles, with F-scores of up to 0.99 for high-quality publication types. For evidence quality classification, our approach obtains an accuracy of 63.84% and an AED of 0.271. The human evaluations show that the performance of our system, in terms of AED and accuracy, is comparable to the performance of humans on the same data.\n\n\nCONCLUSIONS\nThe experiments suggest that our structured text classification framework achieves evaluation results comparable to those of human performance. Our overall classification approach and evaluation technique are also highly portable and can be used for various evidence grading scales."
},
{
"pmid": "29941415",
"title": "A Deep Learning Method to Automatically Identify Reports of Scientifically Rigorous Clinical Research from the Biomedical Literature: Comparative Analytic Study.",
"abstract": "BACKGROUND\nA major barrier to the practice of evidence-based medicine is efficiently finding scientifically sound studies on a given clinical topic.\n\n\nOBJECTIVE\nTo investigate a deep learning approach to retrieve scientifically sound treatment studies from the biomedical literature.\n\n\nMETHODS\nWe trained a Convolutional Neural Network using a noisy dataset of 403,216 PubMed citations with title and abstract as features. The deep learning model was compared with state-of-the-art search filters, such as PubMed's Clinical Query Broad treatment filter, McMaster's textword search strategy (no Medical Subject Heading, MeSH, terms), and Clinical Query Balanced treatment filter. A previously annotated dataset (Clinical Hedges) was used as the gold standard.\n\n\nRESULTS\nThe deep learning model obtained significantly lower recall than the Clinical Queries Broad treatment filter (96.9% vs 98.4%; P<.001); and equivalent recall to McMaster's textword search (96.9% vs 97.1%; P=.57) and Clinical Queries Balanced filter (96.9% vs 97.0%; P=.63). Deep learning obtained significantly higher precision than the Clinical Queries Broad filter (34.6% vs 22.4%; P<.001) and McMaster's textword search (34.6% vs 11.8%; P<.001), but was significantly lower than the Clinical Queries Balanced filter (34.6% vs 40.9%; P<.001).\n\n\nCONCLUSIONS\nDeep learning performed well compared to state-of-the-art search filters, especially when citations were not indexed. Unlike previous machine learning approaches, the proposed deep learning model does not require feature engineering, or time-sensitive or proprietary features, such as MeSH terms and bibliometrics. Deep learning is a promising approach to identifying reports of scientifically rigorous clinical research. Further work is needed to optimize the deep learning model and to assess generalizability to other areas, such as diagnosis, etiology, and prognosis."
},
{
"pmid": "17653438",
"title": "The PICO strategy for the research question construction and evidence search.",
"abstract": "Evidence based practice is the use of the best scientific evidence to support the clinical decision making. The identification of the best evidence requires the construction of an appropriate research question and review of the literature. This article describes the use of the PICO strategy for the construction of the research question and bibliographical search."
},
{
"pmid": "27454608",
"title": "The Mining Minds digital health and wellness framework.",
"abstract": "BACKGROUND\nThe provision of health and wellness care is undergoing an enormous transformation. A key element of this revolution consists in prioritizing prevention and proactivity based on the analysis of people's conducts and the empowerment of individuals in their self-management. Digital technologies are unquestionably destined to be the main engine of this change, with an increasing number of domain-specific applications and devices commercialized every year; however, there is an apparent lack of frameworks capable of orchestrating and intelligently leveraging, all the data, information and knowledge generated through these systems.\n\n\nMETHODS\nThis work presents Mining Minds, a novel framework that builds on the core ideas of the digital health and wellness paradigms to enable the provision of personalized support. Mining Minds embraces some of the most prominent digital technologies, ranging from Big Data and Cloud Computing to Wearables and Internet of Things, as well as modern concepts and methods, such as context-awareness, knowledge bases or analytics, to holistically and continuously investigate on people's lifestyles and provide a variety of smart coaching and support services.\n\n\nRESULTS\nThis paper comprehensively describes the efficient and rational combination and interoperation of these technologies and methods through Mining Minds, while meeting the essential requirements posed by a framework for personalized health and wellness support. Moreover, this work presents a realization of the key architectural components of Mining Minds, as well as various exemplary user applications and expert tools to illustrate some of the potential services supported by the proposed framework.\n\n\nCONCLUSIONS\nMining Minds constitutes an innovative holistic means to inspect human behavior and provide personalized health and wellness support. The principles behind this framework uncover new research ideas and may serve as a reference for similar initiatives."
},
{
"pmid": "28766402",
"title": "Context-aware grading of quality evidences for evidence-based decision-making.",
"abstract": "Processing huge repository of medical literature for extracting relevant and high-quality evidences demands efficient evidence support methods. We aim at developing methods to automate the process of finding quality evidences from a plethora of literature documents and grade them according to the context (local condition). We propose a two-level methodology for quality recognition and grading of evidences. First, quality is recognized using quality recognition model; second, context-aware grading of evidences is accomplished. Using 10-fold cross-validation, the proposed quality recognition model achieved an accuracy of 92.14 percent and improved the baseline system accuracy by about 24 percent. The proposed context-aware grading method graded 808 out of 1354 test evidences as highly beneficial for treatment purpose. This infers that around 60 percent evidences shall be given more importance as compared to the other 40 percent evidences. The inclusion of context in recommendation of evidence makes the process of evidence-based decision-making \"situation-aware.\""
},
{
"pmid": "27989816",
"title": "Extractive text summarization system to aid data extraction from full text in systematic review development.",
"abstract": "OBJECTIVES\nExtracting data from publication reports is a standard process in systematic review (SR) development. However, the data extraction process still relies too much on manual effort which is slow, costly, and subject to human error. In this study, we developed a text summarization system aimed at enhancing productivity and reducing errors in the traditional data extraction process.\n\n\nMETHODS\nWe developed a computer system that used machine learning and natural language processing approaches to automatically generate summaries of full-text scientific publications. The summaries at the sentence and fragment levels were evaluated in finding common clinical SR data elements such as sample size, group size, and PICO values. We compared the computer-generated summaries with human written summaries (title and abstract) in terms of the presence of necessary information for the data extraction as presented in the Cochrane review's study characteristics tables.\n\n\nRESULTS\nAt the sentence level, the computer-generated summaries covered more information than humans do for systematic reviews (recall 91.2% vs. 83.8%, p<0.001). They also had a better density of relevant sentences (precision 59% vs. 39%, p<0.001). At the fragment level, the ensemble approach combining rule-based, concept mapping, and dictionary-based methods performed better than individual methods alone, achieving an 84.7% F-measure.\n\n\nCONCLUSION\nComputer-generated summaries are potential alternative information sources for data extraction in systematic review development. Machine learning and natural language processing are promising approaches to the development of such an extractive summarization system."
},
{
"pmid": "20470429",
"title": "Combining classifiers for robust PICO element detection.",
"abstract": "BACKGROUND\nFormulating a clinical information need in terms of the four atomic parts which are Population/Problem, Intervention, Comparison and Outcome (known as PICO elements) facilitates searching for a precise answer within a large medical citation database. However, using PICO defined items in the information retrieval process requires a search engine to be able to detect and index PICO elements in the collection in order for the system to retrieve relevant documents.\n\n\nMETHODS\nIn this study, we tested multiple supervised classification algorithms and their combinations for detecting PICO elements within medical abstracts. Using the structural descriptors that are embedded in some medical abstracts, we have automatically gathered large training/testing data sets for each PICO element.\n\n\nRESULTS\nCombining multiple classifiers using a weighted linear combination of their prediction scores achieves promising results with an f-measure score of 86.3% for P, 67% for I and 56.6% for O.\n\n\nCONCLUSIONS\nOur experiments on the identification of PICO elements showed that the task is very challenging. Nevertheless, the performance achieved by our identification method is competitive with previously published results and shows that this task can be achieved with a high accuracy for the P element but lower ones for I and O elements."
},
{
"pmid": "23899909",
"title": "PICO element detection in medical text without metadata: are first sentences enough?",
"abstract": "Efficient identification of patient, intervention, comparison, and outcome (PICO) components in medical articles is helpful in evidence-based medicine. The purpose of this study is to clarify whether first sentences of these components are good enough to train naive Bayes classifiers for sentence-level PICO element detection. We extracted 19,854 structured abstracts of randomized controlled trials with any P/I/O label from PubMed for naive Bayes classifiers training. Performances of classifiers trained by first sentences of each section (CF) and those trained by all sentences (CA) were compared using all sentences by ten-fold cross-validation. The results measured by recall, precision, and F-measures show that there are no significant differences in performance between CF and CA for detection of O-element (F-measure=0.731±0.009 vs. 0.738±0.010, p=0.123). However, CA perform better for I-elements, in terms of recall (0.752±0.012 vs. 0.620±0.007, p<0.001) and F-measures (0.728±0.006 vs. 0.662±0.007, p<0.001). For P-elements, CF have higher precision (0.714±0.009 vs. 0.665±0.010, p<0.001), but lower recall (0.766±0.013 vs. 0.811±0.012, p<0.001). CF are not always better than CA in sentence-level PICO element detection. Their performance varies in detecting different elements."
},
{
"pmid": "17202161",
"title": "GenBank.",
"abstract": "GenBank (R) is a comprehensive database that contains publicly available nucleotide sequences for more than 240 000 named organisms, obtained primarily through submissions from individual laboratories and batch submissions from large-scale sequencing projects. Most submissions are made using the web-based BankIt or standalone Sequin programs and accession numbers are assigned by GenBank staff upon receipt. Daily data exchange with the EMBL Data Library in Europe and the DNA Data Bank of Japan ensures worldwide coverage. GenBank is accessible through NCBI's retrieval system, Entrez, which integrates data from the major DNA and protein sequence databases along with taxonomy, genome, mapping, protein structure and domain information, and the biomedical journal literature via PubMed. BLAST provides sequence similarity searches of GenBank and other sequence databases. Complete bimonthly releases and daily updates of the GenBank database are available by FTP. To access GenBank and its related retrieval and analysis services, begin at the NCBI Homepage (www.ncbi.nlm.nih.gov)."
},
{
"pmid": "25398906",
"title": "Database resources of the National Center for Biotechnology Information.",
"abstract": "The National Center for Biotechnology Information (NCBI) provides a large suite of online resources for biological information and data, including the GenBank(®) nucleic acid sequence database and the PubMed database of citations and abstracts for published life science journals. Additional NCBI resources focus on literature (Bookshelf, PubMed Central (PMC) and PubReader); medical genetics (ClinVar, dbMHC, the Genetic Testing Registry, HIV-1/Human Protein Interaction Database and MedGen); genes and genomics (BioProject, BioSample, dbSNP, dbVar, Epigenomics, Gene, Gene Expression Omnibus (GEO), Genome, HomoloGene, the Map Viewer, Nucleotide, PopSet, Probe, RefSeq, Sequence Read Archive, the Taxonomy Browser, Trace Archive and UniGene); and proteins and chemicals (Biosystems, COBALT, the Conserved Domain Database (CDD), the Conserved Domain Architecture Retrieval Tool (CDART), the Molecular Modeling Database (MMDB), Protein Clusters, Protein and the PubChem suite of small molecule databases). The Entrez system provides search and retrieval operations for many of these databases. Augmenting many of the Web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of these resources can be accessed through the NCBI home page at http://www.ncbi.nlm.nih.gov."
},
{
"pmid": "26582918",
"title": "ClinVar: public archive of interpretations of clinically relevant variants.",
"abstract": "ClinVar (https://www.ncbi.nlm.nih.gov/clinvar/) at the National Center for Biotechnology Information (NCBI) is a freely available archive for interpretations of clinical significance of variants for reported conditions. The database includes germline and somatic variants of any size, type or genomic location. Interpretations are submitted by clinical testing laboratories, research laboratories, locus-specific databases, OMIM®, GeneReviews™, UniProt, expert panels and practice guidelines. In NCBI's Variation submission portal, submitters upload batch submissions or use the Submission Wizard for single submissions. Each submitted interpretation is assigned an accession number prefixed with SCV. ClinVar staff review validation reports with data types such as HGVS (Human Genome Variation Society) expressions; however, clinical significance is reported directly from submitters. Interpretations are aggregated by variant-condition combination and assigned an accession number prefixed with RCV. Clinical significance is calculated for the aggregate record, indicating consensus or conflict in the submitted interpretations. ClinVar uses data standards, such as HGVS nomenclature for variants and MedGen identifiers for conditions. The data are available on the web as variant-specific views; the entire data set can be downloaded via ftp. Programmatic access for ClinVar records is available through NCBI's E-utilities. Future development includes providing a variant-centric XML archive and a web page for details of SCV submissions."
},
{
"pmid": "28815124",
"title": "PheKnow-Cloud: A Tool for Evaluating High-Throughput Phenotype Candidates using Online Medical Literature.",
"abstract": "As the adoption of Electronic Healthcare Records has grown, the need to transform manual processes that extract and characterize medical data into automatic and high-throughput processes has also grown. Recently, researchers have tackled the problem of automatically extracting candidate phenotypes from EHR data. Since these phenotypes are usually generated using unsupervised or semi-supervised methods, it is necessary to examine and validate the clinical relevance of the generated \"candidate\" phenotypes. We present PheKnow-Cloud, a framework that uses co-occurrence analysis on the publicly available, online repository ofjournal articles, PubMed, to build sets of evidence for user-supplied candidate phenotypes. PheKnow-Cloud works in an interactive manner to present the results of the candidate phenotype analysis. This tool seeks to help researchers and clinical professionals evaluate the automatically generated phenotypes so they may tune their processes and understand the candidate phenotypes."
},
{
"pmid": "29728351",
"title": "Phenotype Instance Verification and Evaluation Tool (PIVET): A Scaled Phenotype Evidence Generation Framework Using Web-Based Medical Literature.",
"abstract": "BACKGROUND\nResearchers are developing methods to automatically extract clinically relevant and useful patient characteristics from raw healthcare datasets. These characteristics, often capturing essential properties of patients with common medical conditions, are called computational phenotypes. Being generated by automated or semiautomated, data-driven methods, such potential phenotypes need to be validated as clinically meaningful (or not) before they are acceptable for use in decision making.\n\n\nOBJECTIVE\nThe objective of this study was to present Phenotype Instance Verification and Evaluation Tool (PIVET), a framework that uses co-occurrence analysis on an online corpus of publically available medical journal articles to build clinical relevance evidence sets for user-supplied phenotypes. PIVET adopts a conceptual framework similar to the pioneering prototype tool PheKnow-Cloud that was developed for the phenotype validation task. PIVET completely refactors each part of the PheKnow-Cloud pipeline to deliver vast improvements in speed without sacrificing the quality of the insights PheKnow-Cloud achieved.\n\n\nMETHODS\nPIVET leverages indexing in NoSQL databases to efficiently generate evidence sets. Specifically, PIVET uses a succinct representation of the phenotypes that corresponds to the index on the corpus database and an optimized co-occurrence algorithm inspired by the Aho-Corasick algorithm. We compare PIVET's phenotype representation with PheKnow-Cloud's by using PheKnow-Cloud's experimental setup. In PIVET's framework, we also introduce a statistical model trained on domain expert-verified phenotypes to automatically classify phenotypes as clinically relevant or not. Additionally, we show how the classification model can be used to examine user-supplied phenotypes in an online, rather than batch, manner.\n\n\nRESULTS\nPIVET maintains the discriminative power of PheKnow-Cloud in terms of identifying clinically relevant phenotypes for the same corpus with which PheKnow-Cloud was originally developed, but PIVET's analysis is an order of magnitude faster than that of PheKnow-Cloud. Not only is PIVET much faster, it can be scaled to a larger corpus and still retain speed. We evaluated multiple classification models on top of the PIVET framework and found ridge regression to perform best, realizing an average F1 score of 0.91 when predicting clinically relevant phenotypes.\n\n\nCONCLUSIONS\nOur study shows that PIVET improves on the most notable existing computational tool for phenotype validation in terms of speed and automation and is comparable in terms of accuracy."
},
{
"pmid": "27652374",
"title": "Practical considerations for implementing genomic information resources. Experiences from eMERGE and CSER.",
"abstract": "OBJECTIVES\nTo understand opinions and perceptions on the state of information resources specifically targeted to genomics, and approaches to delivery in clinical practice.\n\n\nMETHODS\nWe conducted a survey of genomic content use and its clinical delivery from representatives across eight institutions in the electronic Medical Records and Genomics (eMERGE) network and two institutions in the Clinical Sequencing Exploratory Research (CSER) consortium in 2014.\n\n\nRESULTS\nEleven responses representing distinct projects across ten sites showed heterogeneity in how content is being delivered, with provider-facing content primarily delivered via the electronic health record (EHR) (n=10), and paper/pamphlets as the leading mode for patient-facing content (n=9). There was general agreement (91%) that new content is needed for patients and providers specific to genomics, and that while aspects of this content could be shared across institutions there remain site-specific needs (73% in agreement).\n\n\nCONCLUSION\nThis work identifies a need for the improved access to and expansion of information resources to support genomic medicine, and opportunities for content developers and EHR vendors to partner with institutions to develop needed resources, and streamline their use - such as a central content site in multiple modalities while implementing approaches to allow for site-specific customization."
}
] |
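Several of the abstracts above describe supervised classifiers (Naive Bayes, SVMs, boosting, stacking) trained on citation titles and abstracts to recognize scientifically rigorous studies. As a rough illustration of that general approach (not the pipeline of any cited paper), a minimal bag-of-words classifier might be sketched as follows; the toy texts and labels are invented for the example.

```python
# Illustrative sketch of a citation classifier: TF-IDF features + linear SVM.
# The toy texts and labels are invented placeholders, not real MEDLINE data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "randomized controlled trial of drug X versus placebo for hypertension",
    "case report of an unusual presentation of influenza",
    "double blind randomized trial evaluating treatment Y outcomes",
    "narrative review of historical perspectives on clinical practice",
]
labels = [1, 0, 1, 0]  # 1 = rigorous treatment study, 0 = not (toy labels)

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),  # unigram + bigram bag of words
    LinearSVC(),
)
clf.fit(texts, labels)
print(clf.predict(["a randomized placebo controlled trial of therapy Z"]))
```

In the studies cited above, such a model would instead be trained on a large annotated gold standard (for example, the Clinical Hedges dataset mentioned in one abstract) and evaluated with precision, recall and ROC curves.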
Frontiers in Neurorobotics | 31920615 | PMC6930239 | 10.3389/fnbot.2019.00105 | Walking Human Detection Using Stereo Camera Based on Feature Classification Algorithm of Second Re-projection Error | This paper presents a feature classification method based on a vision sensor in dynamic environments. For the detected targets, a double-projection error based on ORB and SURF is proposed, which combines texture constraints and region constraints to achieve accurate feature classification in four different environments. For dynamic targets with different velocities, the proposed classification framework can effectively reduce the impact of large-area moving targets. The algorithm can classify static and dynamic feature objects and optimize the transformation relationship between frames using only visual sensors. The experimental results show that the proposed algorithm is superior to other algorithms in both static and dynamic environments. | Related Work: Moving object detection is one of the hotspots in machine vision research (Zhang et al., 2014, 2016; Choi and Maurer, 2016; Naiel et al., 2017). Fischler and Bolles (1981) proposed a paradigm for model fitting with applications to image analysis and automated cartography. According to the status of the camera, detection can be divided into static background detection and dynamic background detection. In static background detection the camera is always stationary, so moving target detection is comparatively easy; it has been widely used for scene monitoring of fixed environments such as factories, roads, and airports. Common background models include the adaptive background model based on kernel density estimation, the Gaussian background model, and the hidden Markov background model. In dynamic background detection the position of the camera changes, so the background and the objects in the image change at the same time. This increases the difficulty of moving target detection and is the focus of current research. There are three main categories of dynamic background detection of moving objects (Xu et al., 2011; Yin et al., 2016): optical flow, background comparison, and the interframe difference method. In the optical flow method, because the background and the detected target move at different speeds, they produce clearly different optical flow, so moving objects can be identified accordingly; however, the computation is heavy and the method suffers from the aperture problem. The background comparison method uses image registration to dynamically update the background model and compares the actual image with the updated background model to obtain the moving target. The interframe difference method registers the background of several successive images, so that target detection is transformed into moving object detection against a static background and the moving object is separated by the difference image of the front and rear frames. Background image registration methods include texture-based algorithms, the Fourier transform method, and the feature matching method (Naiel et al., 2017).
The feature matching method is simple and fast, but the matching error of existing matching methods is affected by the changing environment. Some previous research did not consider dynamic scenes, and detected dynamic or uncertain points were discarded as outliers (Williams et al., 2007; Paz et al., 2008; Liang and Min, 2013; Zhang et al., 2015; Zhou et al., 2015). However, when some of the dynamic objects are large, these methods suffer large errors or even failure. In dynamic environments, existing research mainly adopts the filter approach (Hahnel et al., 2003), which has been successfully applied to SLAM problems based on laser scanners and radar systems, but its application to visual SLAM has seldom been studied. Fang et al. (2019) detect and recognize targets through visual-tactile fusion. Gao and Zhang (2017) explain the basics of SLAM. In Chen and Cai (2011), a visual sensor and radar were used to detect dynamic objects; the uncertain factors are detected using an eight-neighborhood rolling window method based on map differencing, but long-time static targets are difficult to detect accurately. Zhang et al. (2018, 2019) and Afiq et al. (2019) applied dynamic target detection to crowd action and emotion recognition. In addition, the limitations of laser radar can lead to unstable judgments for some obstacles with special materials (such as glass doors), which can affect the accuracy of the map. Einhorn and Gross (2015) achieved estimation and tracking of dynamic objects by combining the normal distribution transformation with a grid map, but there are restrictions on its scope of application. In Sun et al. (2017), a novel movement removal method based on RGB-D data was proposed, which enhanced applicability in dynamic environments. The work of Lee and Myung (2014) and Wang et al. (2016) used a pose graph and an RGB-D sensor to detect low-speed targets and re-optimized the pose graph structure to obtain corrected localization and mapping results. These dynamic separation methods are still not well suited to detected targets of varying speed and volume, and their judgment is either not accurate enough or comes at a high economic cost. In Zou and Tan (2013), the cooperation of multiple cameras is used to detect dynamic points: the separation between static feature points and uncertain points is first made by a single camera, and multiple cameras are then used to further classify the uncertain points. This method reduces the impact of large-scale moving objects on the system and is suitable for highly dynamic objects. The method proposed in this paper is based on Zou and Tan (2013). In this experiment, a camera is used to solve the classification problem of the feature points. Compared with other sensors, the camera is passive, compact, and energy-saving, which is an important advantage for intelligent platforms with limited weight and energy capacity. The proposed algorithm can classify moving objects with different speeds in a variety of complex environments and complete the pose transformation optimization using only a visual sensor. | [
"25019633",
"22547430"
] | [
{
"pmid": "25019633",
"title": "Solution to the SLAM problem in low dynamic environments using a pose graph and an RGB-D sensor.",
"abstract": "In this study, we propose a solution to the simultaneous localization and mapping (SLAM) problem in low dynamic environments by using a pose graph and an RGB-D (red-green-blue depth) sensor. The low dynamic environments refer to situations in which the positions of objects change over long intervals. Therefore, in the low dynamic environments, robots have difficulty recognizing the repositioning of objects unlike in highly dynamic environments in which relatively fast-moving objects can be detected using a variety of moving object detection algorithms. The changes in the environments then cause groups of false loop closing when the same moved objects are observed for a while, which means that conventional SLAM algorithms produce incorrect results. To address this problem, we propose a novel SLAM method that handles low dynamic environments. The proposed method uses a pose graph structure and an RGB-D sensor. First, to prune the falsely grouped constraints efficiently, nodes of the graph, that represent robot poses, are grouped according to the grouping rules with noise covariances. Next, false constraints of the pose graph are pruned according to an error metric based on the grouped nodes. The pose graph structure is reoptimized after eliminating the false information, and the corrected localization and mapping results are obtained. The performance of the method was validated in real experiments using a mobile robot system."
},
{
"pmid": "22547430",
"title": "CoSLAM: collaborative visual SLAM in dynamic environments.",
"abstract": "This paper studies the problem of vision-based simultaneous localization and mapping (SLAM) in dynamic environments with multiple cameras. These cameras move independently and can be mounted on different platforms. All cameras work together to build a global map, including 3D positions of static background points and trajectories of moving foreground points. We introduce intercamera pose estimation and intercamera mapping to deal with dynamic objects in the localization and mapping process. To further enhance the system robustness, we maintain the position uncertainty of each map point. To facilitate intercamera operations, we cluster cameras into groups according to their view overlap, and manage the split and merge of camera groups in real time. Experimental results demonstrate that our system can work robustly in highly dynamic environments and produce more accurate results in static environments."
}
] |
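The related-work text of the stereo-camera paper above summarizes the interframe difference method: consecutive frames are registered, differenced and thresholded so that moving objects stand out against a quasi-static background. The sketch below illustrates only that generic idea using OpenCV and a placeholder video path; it is not the paper's second re-projection error algorithm.

```python
# Minimal sketch of interframe differencing for moving-object detection.
# "video.avi" is a placeholder path; thresholds are arbitrary example values.
import cv2

cap = cv2.VideoCapture("video.avi")
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not open or read the video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)                        # difference of consecutive frames
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)  # keep strong changes only
    mask = cv2.dilate(mask, None, iterations=2)                # close small holes in the mask
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    movers = [c for c in contours if cv2.contourArea(c) > 500]  # drop tiny noise blobs
    print(f"moving regions in this frame: {len(movers)}")
    prev_gray = gray

cap.release()
```

With a moving camera, as in the paper above, the frames would first have to be registered (for example by feature matching) before differencing, which is exactly where the matching errors discussed in that related-work text become a problem.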
International Journal of Information Security | 31929770 | PMC6936368 | 10.1007/s10207-017-0369-x | Stealing PINs via mobile sensors: actual risk versus user perception | In this paper, we present the actual risks of stealing user PINs by using mobile sensors versus the perceived risks by users. First, we propose PINlogger.js which is a JavaScript-based side channel attack revealing user PINs on an Android mobile phone. In this attack, once the user visits a website controlled by an attacker, the JavaScript code embedded in the web page starts listening to the motion and orientation sensor streams without needing any permission from the user. By analysing these streams, it infers the user’s PIN using an artificial neural network. Based on a test set of fifty 4-digit PINs, PINlogger.js is able to correctly identify PINs in the first attempt with a success rate of 74% which increases to 86 and 94% in the second and third attempts, respectively. The high success rates of stealing user PINs on mobile devices via JavaScript indicate a serious threat to user security. With the technical understanding of the information leakage caused by mobile phone sensors, we then study users’ perception of the risks associated with these sensors. We design user studies to measure the general familiarity with different sensors and their functionality, and to investigate how concerned users are about their PIN being discovered by an app that has access to all these sensors. Our studies show that there is significant disparity between the actual and perceived levels of threat with regard to the compromise of the user PIN. We confirm our results by interviewing our participants using two different approaches, within-subject and between-subject, and compare the results. We discuss how this observation, along with other factors, renders many academic and industry solutions ineffective in preventing such side channel attacks. | Comparison with related workObtaining sensitive information about users such as PINs based on mobile sensors has been actively explored by researchers in the field [21, 22]. In particular, there is a number of research which uses mobile sensors through a malicious app running in the background to extract PINs entered on the soft keyboard of the mobile device. For example, GyroPhone, by Michalevsky et al. [10], shows that gyroscope data are sufficient to identify the speaker and even parse speech to some extent. Other examples include Accessory [13] by Owusu et al. and Tapprints by Miluzzo et al. [11]. They infer passwords on full alphabetical soft keyboards based on accelerometer measurements. Touchlogger [8] is another example by Cai and Chen [20] which shows the possibility of distinguishing user’s input on a mobile numpad by using accelerometer and gyroscope. The same authors demonstrate a similar attack in [9] on both numerical and full keyboards. The only work which relies on in-browser access to sensors to attack a numpad is our previous work, TouchSignatures [4]. All of these works, however, aim for the individual digits or characters of a keyboard, rather than the entire PIN or password.Another category of works directly targets user PINs. For example, PIN skimmer by Simon and Anderson [14] is an attack on a user’s numpad and PINs using the camera and microphone on the smartphone. Spreitzer suggests another PIN Skimming attack [15] and steals a user’s PIN based on the measurements from the smartphone’s ambient light sensor. Narain et al. 
introduce another attack [12] on smartphone numerical and alphabetical keyboards and the user’s PINs and credit card numbers by using the smartphone microphone. TapLogger by Xu et al. [16] is another attack on the smartphone numpad which outputs the pressed digits and PINs based on accelerometer and orientation sensor data. Similarly, Aviv et al. introduce an accelerometer-based side channel attack on the user’s PINs and patterns in [7]. We choose to compare PINlogger.js with the works in this category since they have the same goal of revealing the user’s PINs. Table 3 presents the results of our comparison.As shown in Table 3, PINlogger.js is the only attack on PINs which acquires the sensor data via JavaScript code. In-browser JavaScript-based attacks impose even more security threats to users since unlike in-app attacks, they do not require any app installation and user permission to work. Moreover, the attacker does not need to develop different apps for different platforms such as Android, iOS, and Windows. Once the attacker develops the JavaScript code, it can be deployed to attack all mobile devices regardless of the platform. Moreover, Touchlogger.js is the only works which present the results of the attack for multiple-users modes. By contrast, the results from other works are mainly based on training the classifiers for individual users. In other words, they assume the attacker is able to collect input training data from the victim user before launching the PIN attack. We do not have such an assumption as the training data are obtained from all users in the experiment. In terms of accuracy, with the exception of [12], PINlogger.js generally outperforms other works with an identification rate of 74% in the first attempt. This is a significant success rate (despite that the sampling rate in-browser is much lower than that available in-app) and confirms that the described attack imposes a serious threat to the users’ security and privacy. | [] | [] |
BMC Medical Informatics and Decision Making | 31888608 | PMC6937661 | 10.1186/s12911-019-1012-8 | Forecasting one-day-forward wellness conditions for community-dwelling elderly with single lead short electrocardiogram signals | Background: The accelerated growth of the elderly population is creating a heavy burden on the healthcare systems of many developed countries and regions. Electrocardiogram (ECG) analysis has been recognized as an effective approach to cardiovascular disease diagnosis and is widely utilized for monitoring personalized health conditions. Method: In this study, we present a novel approach to forecasting one-day-forward wellness conditions for community-dwelling elderly by analyzing single lead short ECG signals acquired from a station-based monitoring device. More specifically, the exponentially weighted moving-average (EWMA) method is first employed to eliminate high-frequency noise from the original signals. Then, the Fisher-Yates normalization approach is used to adjust the self-evaluated wellness score distribution, since the scores among different individuals are skewed. Finally, both deep learning-based and traditional machine learning-based methods are utilized for building wellness forecasting models. Results: The experimental results show that the deep learning-based methods achieve the best forecasting performance, with a forecasting accuracy of 93.21% and an F value of 91.98%. The deep learning-based methods, with the merit of requiring no hand-crafted feature engineering, outperform the competitive traditional machine learning-based methods. Conclusion: The developed approach is effective for wellness forecasting for community-dwelling elderly and provides a cost-effective way to inform healthcare providers about the health conditions of the elderly in advance so that timely interventions can be taken to reduce the risk of malignant events. | Related work: In this section, we review forecasting methods for temporal data, particularly those with applications to the healthcare domain. These forecasting methods can be divided into two main categories: (i) traditional machine learning-based methods and (ii) deep learning-based methods. For traditional machine learning-based forecasting, two representative approaches are the support vector machine (SVM) and the artificial neural network (ANN). Wu et al. [24] employed an SVM to predict heart failure more than six months in advance from vast electronic health records (EHR); the highest area under the curve (AUC) for the SVM was around 0.75. Santillana et al. [25] utilized an SVM to forecast estimates of influenza activity in America. Yu et al. [3] used an SVM to predict one-day-forward wellness conditions for the elderly and achieved a forecasting accuracy of around 60%. Meanwhile, the ANN has also been widely applied in the healthcare domain. Suryadevara et al. [26] used an ANN to forecast the behavior and wellness of the elderly and deployed it in a healthcare prototype system. Srinivas et al. [27] employed an ANN to predict heart diseases such as chest pain, stroke and heart attack. However, the prediction performance of these traditional machine learning-based methods falls short of the precise forecasting demands of the elderly.
So, researchers have shifted their attention to cutting-edge deep learning-based forecasting methods. In recent years, deep learning-based methods such as recurrent neural networks (RNN) have achieved great success in natural language processing, speech recognition, and machine translation [28–31]. Researchers have also attempted to solve problems in the healthcare domain using these cutting-edge approaches [32–34]. Ma et al. [32] proposed an end-to-end simple recurrent neural network to model the temporality and high dimensionality of sequential EHR data to predict patients' future health information; the experimental results based on two real-world EHR datasets showed that their model improved the prediction accuracy significantly. Choi et al. [33] explored whether recurrent neural networks improve the initial diagnosis of heart failure compared to traditional machine learning-based approaches, and their experimental results proved that recurrent neural networks can leverage temporal relations and improve the prediction of incident heart failure. Choi et al. [34] also proposed an interpretable forecasting model based on a recurrent neural network; this deep model was tested on a large EHR dataset and demonstrated superior prediction performance. Therefore, two popular deep learning-based approaches, the long short-term memory network (LSTM) and the bidirectional long short-term memory network (BiLSTM), are utilized to forecast one-day-forward wellness conditions for the elderly in this study. Meanwhile, two traditional machine learning-based methods, SVM and ANN, are also employed for model selection. | [
"28546643",
"15067670",
"20875965",
"27796840",
"28003238"
] | [
{
"pmid": "28546643",
"title": "Decomposition of regional convergence in population aging across Europe.",
"abstract": "In the face of rapidly aging population, decreasing regional inequalities in population composition is one of the regional cohesion goals of the European Union. To our knowledge, no explicit quantification of the changes in regional population aging differentiation exist. We investigate how regional differences in population aging developed over the last decade and how they are likely to evolve in the coming three decades, and we examine how demographic components of population growth contribute to the process. We use the beta-convergence approach to test whether regions are moving towards a common level of population aging. The change in population composition is decomposed into the separate effects of changes in the size of the non-working-age population and of the working-age population. The latter changes are further decomposed into the effects of cohort turnover, migration at working ages, and mortality at working ages. European Nomenclature of Territorial Units for Statistics (NUTS)-2 regions experienced notable convergence in population aging during the period 2003-2012 and are expected to experience further convergence in the coming three decades. Convergence in aging mainly depends on changes in the population structure of East-European regions. Cohort turnover plays the major role in promoting convergence. Differences in mortality at working ages, though quite moderate themselves, have a significant cumulative effect. The projections show that when it is assumed that net migration flows at working ages are converging across European regions, this will not contribute to convergence of population aging. The beta-convergence approach proves useful to examine regional variations in population aging across Europe."
},
{
"pmid": "15067670",
"title": "Time to include time to death? The future of health care expenditure predictions.",
"abstract": "Government projections of future health care expenditures--a great concern given the aging baby-boom generation--are based on econometric regressions that control explicitly for age but do not control for end-of-life expenditures. Because expenditures increase dramatically on average at the end of life, predictions of future cost distributions based on regressions that omit time to death as an explanatory variable will be biased upward (or, more explicitly, the coefficients on age will be biased upward) if technology or other social factors continue to prolong life. Although health care expenditure predictions for a current sample will not be biased, predictions for future cohorts with greater longevity will be biased upwards, and the magnitude of the bias will increase as the expected longevity increases. We explore the empirical implications of incorporating time to death in longitudinal models of health expenditures for the purpose of predicting future expenditures. Predictions from a simple model that excludes time to death and uses current life tables are 9% higher than from an expanded model controlling for time to death. The bias increases to 15% when using projected life tables for 2020. The predicted differences between the models are sufficient to justify reassessment of the value of inclusion of time to death in models for predicting health care expenditures."
},
{
"pmid": "20875965",
"title": "From mobile phones to personal wellness dashboards.",
"abstract": "The paradigm of wellness mobiles will enable health-care professionals to have access to comprehensive real-time patient data at the point of care and anywhere there is cellular network coverage. More importantly, users can continuously and frequently track their health on the go and receive real-time user assistance when needed to alter their lifestyles. Recently, there has been a growing interest in developing proactive wellness products and health-related smartphone applications. However, developing quantifiable measures of wellness for continuous tracking and designing compliant-monitoring systems is quite challenging. This article motivates future research in this emerging field by presenting a ringside view of the recent developments and trends favoring this technology and the challenges facing the next generation of telemedicine."
},
{
"pmid": "27796840",
"title": "An IoT-cloud Based Wearable ECG Monitoring System for Smart Healthcare.",
"abstract": "Public healthcare has been paid an increasing attention given the exponential growth human population and medical expenses. It is well known that an effective health monitoring system can detect abnormalities of health conditions in time and make diagnoses according to the gleaned data. As a vital approach to diagnose heart diseases, ECG monitoring is widely studied and applied. However, nearly all existing portable ECG monitoring systems cannot work without a mobile application, which is responsible for data collection and display. In this paper, we propose a new method for ECG monitoring based on Internet-of-Things (IoT) techniques. ECG data are gathered using a wearable monitoring node and are transmitted directly to the IoT cloud using Wi-Fi. Both the HTTP and MQTT protocols are employed in the IoT cloud in order to provide visual and timely ECG data to users. Nearly all smart terminals with a web browser can acquire ECG data conveniently, which has greatly alleviated the cross-platform issue. Experiments are carried out on healthy volunteers in order to verify the reliability of the entire system. Experimental results reveal that the proposed system is reliable in collecting and displaying real-time ECG data, which can aid in the primary diagnosis of certain heart diseases."
},
{
"pmid": "28003238",
"title": "Calculating acute:chronic workload ratios using exponentially weighted moving averages provides a more sensitive indicator of injury likelihood than rolling averages.",
"abstract": "OBJECTIVE\nTo determine if any differences exist between the rolling averages and exponentially weighted moving averages (EWMA) models of acute:chronic workload ratio (ACWR) calculation and subsequent injury risk.\n\n\nMETHODS\nA cohort of 59 elite Australian football players from 1 club participated in this 2-year study. Global positioning system (GPS) technology was used to quantify external workloads of players, and non-contact 'time-loss' injuries were recorded. The ACWR were calculated for a range of variables using 2 models: (1) rolling averages, and (2) EWMA. Logistic regression models were used to assess both the likelihood of sustaining an injury and the difference in injury likelihood between models.\n\n\nRESULTS\nThere were significant differences in the ACWR values between models for moderate (ACWR 1.0-1.49; p=0.021), high (ACWR 1.50-1.99; p=0.012) and very high (ACWR >2.0; p=0.001) ACWR ranges. Although both models demonstrated significant (p<0.05) associations between a very high ACWR (ie, >2.0) and an increase in injury risk for total distance ((relative risk, RR)=6.52-21.28) and high-speed distance (RR=5.87-13.43), the EWMA model was more sensitive for detecting this increased risk. The variance (R2) in injury explained by each ACWR model was significantly (p<0.05) greater using the EWMA model.\n\n\nCONCLUSIONS\nThese findings demonstrate that large spikes in workload are associated with an increased injury risk using both models, although the EWMA model is more sensitive to detect increases in injury risk with higher ACWR."
}
] |
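The wellness-forecasting row above describes feeding short single-lead ECG-derived sequences to LSTM and BiLSTM models to predict a one-day-forward wellness label. The following is only a generic sketch of such an LSTM sequence classifier in PyTorch on synthetic data; the window length, hidden size and labels are placeholders, not the paper's architecture or results.

```python
# Generic LSTM sequence classifier sketch: a fixed-length 1-D signal window in,
# a binary wellness label out. All shapes and data below are synthetic placeholders.
import torch
import torch.nn as nn

class LSTMWellness(nn.Module):
    def __init__(self, input_size=1, hidden_size=64, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.fc = nn.Linear(hidden_size, num_classes)

    def forward(self, x):              # x: (batch, time, 1)
        _, (h_n, _) = self.lstm(x)     # h_n: (num_layers, batch, hidden)
        return self.fc(h_n[-1])        # class logits: (batch, num_classes)

model = LSTMWellness()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(8, 300, 1)             # 8 synthetic signal windows of 300 samples
y = torch.randint(0, 2, (8,))          # synthetic binary wellness labels
for _ in range(3):                      # a few illustrative training steps
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
print(loss.item())
```

A bidirectional variant would set bidirectional=True in nn.LSTM and double the input size of the final linear layer.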
BMC Medical Informatics and Decision Making | 31906931 | PMC6945414 | 10.1186/s12911-019-1014-6 | A new concordant partial AUC and partial c statistic for imbalanced data in the evaluation of machine learning algorithms | Background: In classification and diagnostic testing, the receiver-operator characteristic (ROC) plot and the area under the ROC curve (AUC) describe how an adjustable threshold causes changes in two types of error: false positives and false negatives. Only part of the ROC curve and AUC are informative however when they are used with imbalanced data. Hence, alternatives to the AUC have been proposed, such as the partial AUC and the area under the precision-recall curve. However, these alternatives cannot be as fully interpreted as the AUC, in part because they ignore some information about actual negatives. Methods: We derive and propose a new concordant partial AUC and a new partial c statistic for ROC data—as foundational measures and methods to help understand and explain parts of the ROC plot and AUC. Our partial measures are continuous and discrete versions of the same measure, are derived from the AUC and c statistic respectively, are validated as equal to each other, and validated as equal in summation to whole measures where expected. Our partial measures are tested for validity on a classic ROC example from Fawcett, a variation thereof, and two real-life benchmark data sets in breast cancer: the Wisconsin and Ljubljana data sets. Interpretation of an example is then provided. Results: Results show the expected equalities between our new partial measures and the existing whole measures. The example interpretation illustrates the need for our newly derived partial measures. Conclusions: The concordant partial area under the ROC curve was proposed and, unlike previous partial measure alternatives, it maintains the characteristics of the AUC. The first partial c statistic for ROC plots was also proposed as an unbiased interpretation for part of an ROC curve. The expected equalities among and between our newly derived partial measures and their existing full measure counterparts are confirmed. These measures may be used with any data set but this paper focuses on imbalanced data with low prevalence. Future work: Future work with our proposed measures may demonstrate their value for imbalanced data with high prevalence; compare them to other measures not based on areas; and combine them with other ROC measures and techniques. | Related work: Related work on several alternatives to the partial AUC is found in the literature [18, 20–22]; however, none of them, including the partial AUC, have the same three mathematical relationships (formulas) that the AUC has. The AUC is equal to concordance, average TPR and average TNR—where each aspect facilitates understanding and explanation. To the best of our knowledge, we derive the first partial measure which maintains all three relationships of the AUC—the "concordant partial area under the curve" (see the section by that name). Jiang et al. [18] define a partial area index (PAI) for a range of TPR above a threshold. They compare a computer aided diagnostic (CAD) system versus radiologists in the identification of benign and malignant cancers using mammograms. The authors select a sensitivity threshold of TPR ≥ 0.9, based on the assumption that identifying malignant cancer is more important than causing unnecessary biopsies for benign conditions.
Using their partial area index, the authors find that the computer's ROC curve is significantly higher (p = 0.03) than the radiologists' ROC curve, whereas with the AUC the difference was not significant (p = 0.21). Wu et al. [22] propose a learned partial area index that learns the clinically relevant range from the subjective ratings of physicians performing a task. For the task of identifying and segmenting tumors in radiological images, the authors perform an experiment with 29 images comparing an automated probabilistic segmentation algorithm with radiologists' ratings. The results highlight that in the radiologic diagnosis of cancer, FPR is more important than TPR. The authors conclude that ranges of FPR and TPR can be defined based on clinical indication and use. Related work on a partial concordance (c) statistic in the literature [23–26] does not correspond to partial areas in an ROC. To the best of our knowledge, we derive the first partial c statistic for partial curves in ROC data. Using a similar term may cause some initial confusion among readers, but our context is sufficiently different and it is appropriate to reuse the term partial c statistic as it corresponds to the term partial AUC in our context. We develop the idea for a concordance matrix and find that Hilden [27] depicted the same idea. Placements or placement values [28, 29] are a related concept, sometimes in table/matrix form [30], but they are not ordered in the same way and they lack a key insight: the geometric equivalence between empirical ROC curves and concordance, as we later show (Fig. 4). Placements have been used to explain the (vertical) partial AUC [28], but not a combined horizontal and vertical perspective for partial measures, as in our proposed partial c statistic and proposed concordant partial AUC.
Fig. 4: The concordance matrix and ROC plot. (a) The proposed concordance matrix visualizes how the c statistic is computed—as the proportion of correctly ranked pairs (green) out of all pairs. (b) The empirical ROC plot (above) equals the border in the concordance matrix (left), visualizing the known equivalence between the c statistic and the AUC. The only work with some similarity to the combined perspective of our proposed measures comes from jackknife pseudovalues [30, 31]—but its numeric perspective is not as readily translated into the ROC interpretations we seek. | [
"30207593",
"15900606",
"24898551",
"22716998",
"17569110",
"2668680",
"2814075",
"23122567",
"19068445",
"28707503",
"14601762",
"9040870",
"3203132",
"7069920",
"21030068",
"21484848",
"27920368",
"2251264",
"25881487"
] | [
{
"pmid": "30207593",
"title": "Global cancer statistics 2018: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries.",
"abstract": "This article provides a status report on the global burden of cancer worldwide using the GLOBOCAN 2018 estimates of cancer incidence and mortality produced by the International Agency for Research on Cancer, with a focus on geographic variability across 20 world regions. There will be an estimated 18.1 million new cancer cases (17.0 million excluding nonmelanoma skin cancer) and 9.6 million cancer deaths (9.5 million excluding nonmelanoma skin cancer) in 2018. In both sexes combined, lung cancer is the most commonly diagnosed cancer (11.6% of the total cases) and the leading cause of cancer death (18.4% of the total cancer deaths), closely followed by female breast cancer (11.6%), prostate cancer (7.1%), and colorectal cancer (6.1%) for incidence and colorectal cancer (9.2%), stomach cancer (8.2%), and liver cancer (8.2%) for mortality. Lung cancer is the most frequent cancer and the leading cause of cancer death among males, followed by prostate and colorectal cancer (for incidence) and liver and stomach cancer (for mortality). Among females, breast cancer is the most commonly diagnosed cancer and the leading cause of cancer death, followed by colorectal and lung cancer (for incidence), and vice versa (for mortality); cervical cancer ranks fourth for both incidence and mortality. The most frequently diagnosed cancer and the leading cause of cancer death, however, substantially vary across countries and within each country depending on the degree of economic development and associated social and life style factors. It is noteworthy that high-quality cancer registry data, the basis for planning and implementing evidence-based cancer control programs, are not available in most low- and middle-income countries. The Global Initiative for Cancer Registry Development is an international partnership that supports better estimation, as well as the collection and use of local data, to prioritize and evaluate national cancer control efforts. CA: A Cancer Journal for Clinicians 2018;0:1-31. © 2018 American Cancer Society."
},
{
"pmid": "15900606",
"title": "The partial area under the summary ROC curve.",
"abstract": "The area under the curve (AUC) is commonly used as a summary measure of the receiver operating characteristic (ROC) curve. It indicates the overall performance of a diagnostic test in terms of its accuracy at various diagnostic thresholds used to discriminate cases and non-cases of disease. The AUC measure is also used in meta-analyses, where each component study provides an estimate of the test sensitivity and specificity. These estimates are then combined to calculate a summary ROC (SROC) curve which describes the relationship between-test sensitivity and specificity across studies. The partial AUC has been proposed as an alternative measure to the full AUC. When using the partial AUC, one considers only those regions of the ROC space where data have been observed, or which correspond to clinically relevant values of test sensitivity or specificity. In this paper, we extend the idea of using the partial AUC to SROC curves in meta-analysis. Theoretical and numerical results describe the variation in the partial AUC and its standard error as a function of the degree of inter-study heterogeneity and of the extent of truncation applied to the ROC space. A scaled partial area measure is also proposed to restore the property that the summary measure should range from 0 to 1. The results suggest several disadvantages of the partial AUC measures. In contrast to earlier findings with the full AUC, the partial AUC is rather sensitive to heterogeneity. Comparisons between tests are more difficult, especially if an empirical truncation process is used. Finally, the partial area lacks a useful symmetry property enjoyed by the full AUC. Although the partial AUC may sometimes have clinical appeal, on balance the use of the full AUC is preferred."
},
{
"pmid": "24898551",
"title": "Towards better clinical prediction models: seven steps for development and an ABCD for validation.",
"abstract": "Clinical prediction models provide risk estimates for the presence of disease (diagnosis) or an event in the future course of disease (prognosis) for individual patients. Although publications that present and evaluate such models are becoming more frequent, the methodology is often suboptimal. We propose that seven steps should be considered in developing prediction models: (i) consideration of the research question and initial data inspection; (ii) coding of predictors; (iii) model specification; (iv) model estimation; (v) evaluation of model performance; (vi) internal validation; and (vii) model presentation. The validity of a prediction model is ideally assessed in fully independent data, where we propose four key measures to evaluate model performance: calibration-in-the-large, or the model intercept (A); calibration slope (B); discrimination, with a concordance statistic (C); and clinical usefulness, with decision-curve analysis (D). As an application, we develop and validate prediction models for 30-day mortality in patients with an acute myocardial infarction. This illustrates the usefulness of the proposed framework to strengthen the methodological rigour and quality for prediction models in cardiovascular research."
},
{
"pmid": "22716998",
"title": "Interpreting the concordance statistic of a logistic regression model: relation to the variance and odds ratio of a continuous explanatory variable.",
"abstract": "BACKGROUND\nWhen outcomes are binary, the c-statistic (equivalent to the area under the Receiver Operating Characteristic curve) is a standard measure of the predictive accuracy of a logistic regression model.\n\n\nMETHODS\nAn analytical expression was derived under the assumption that a continuous explanatory variable follows a normal distribution in those with and without the condition. We then conducted an extensive set of Monte Carlo simulations to examine whether the expressions derived under the assumption of binormality allowed for accurate prediction of the empirical c-statistic when the explanatory variable followed a normal distribution in the combined sample of those with and without the condition. We also examine the accuracy of the predicted c-statistic when the explanatory variable followed a gamma, log-normal or uniform distribution in combined sample of those with and without the condition.\n\n\nRESULTS\nUnder the assumption of binormality with equality of variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the product of the standard deviation of the normal components (reflecting more heterogeneity) and the log-odds ratio (reflecting larger effects). Under the assumption of binormality with unequal variances, the c-statistic follows a standard normal cumulative distribution function with dependence on the standardized difference of the explanatory variable in those with and without the condition. In our Monte Carlo simulations, we found that these expressions allowed for reasonably accurate prediction of the empirical c-statistic when the distribution of the explanatory variable was normal, gamma, log-normal, and uniform in the entire sample of those with and without the condition.\n\n\nCONCLUSIONS\nThe discriminative ability of a continuous explanatory variable cannot be judged by its odds ratio alone, but always needs to be considered in relation to the heterogeneity of the population."
},
{
"pmid": "17569110",
"title": "Evaluating the added predictive ability of a new marker: from area under the ROC curve to reclassification and beyond.",
"abstract": "Identification of key factors associated with the risk of developing cardiovascular disease and quantification of this risk using multivariable prediction algorithms are among the major advances made in preventive cardiology and cardiovascular epidemiology in the 20th century. The ongoing discovery of new risk markers by scientists presents opportunities and challenges for statisticians and clinicians to evaluate these biomarkers and to develop new risk formulations that incorporate them. One of the key questions is how best to assess and quantify the improvement in risk prediction offered by these new models. Demonstration of a statistically significant association of a new biomarker with cardiovascular risk is not enough. Some researchers have advanced that the improvement in the area under the receiver-operating-characteristic curve (AUC) should be the main criterion, whereas others argue that better measures of performance of prediction models are needed. In this paper, we address this question by introducing two new measures, one based on integrated sensitivity and specificity and the other on reclassification tables. These new measures offer incremental information over the AUC. We discuss the properties of these new measures and contrast them with the AUC. We also develop simple asymptotic tests of significance. We illustrate the use of these measures with an example from the Framingham Heart Study. We propose that scientists consider these types of measures in addition to the AUC when assessing the performance of newer biomarkers."
},
{
"pmid": "2668680",
"title": "Analyzing a portion of the ROC curve.",
"abstract": "The area under the ROC curve is a common index summarizing the information contained in the curve. When comparing two ROC curves, though, problems arise when interest does not lie in the entire range of false-positive rates (and hence the entire area). Numerical integration is suggested for evaluating the area under a portion of the ROC curve. Variance estimates are derived. The method is applicable for either continuous or rating scale binormal data, from independent or dependent samples. An example is presented which looks at rating scale data of computed tomographic scans of the head with and without concomitant use of clinical history. The areas under the two ROC curves over an a priori range of false-positive rates are examined, as well as the areas under the two curves at a specific point."
},
{
"pmid": "2814075",
"title": "On the statistical analysis of ROC curves.",
"abstract": "We introduce a new accuracy index for receiver operating characteristic (ROC) curves, namely the partial area under the binormal ROC graph over any specified region of interest. We propose a simple but general procedure, based on a conventional analysis of variance, for comparing accuracy indices derived from two or more different modalities. The proposed method is related to and compared with existing methodology, and is illustrated by results from an experiment on optimization of density and contrast yielded by multiform photographic images used for scintigraphy."
},
{
"pmid": "23122567",
"title": "Evaluation of the accuracy of medical tests in a region around the optimal point.",
"abstract": "RATIONALE AND OBJECTIVES\nThe accuracy of medical tests is often assessed using the area under the entire receiver-operating characteristic (ROC) curve. However, this includes values that might be of no clinical importance. Evaluation of a portion of the curve, or a single point, requires identifying a range of clinical interest, which may not be obvious. The author suggests evaluating the accuracy of medical tests in the vicinity of the optimal point.\n\n\nMATERIALS AND METHODS\nAssuming binormality, the author estimated the optimal threshold as the value that maximizes the generalized Youden index. The confidence interval around the optimal point defined a region of clinical interest; the accuracy of the medical test was assessed using the partial area index (PAI) and standardized partial area (sPA). Bootstrapping was used to estimate variances and construct confidence intervals. Coverage probabilities for the PAI and sPA were assessed, as was the size of the test to compare measures. An example using biomechanical measures from radiographic images of the pelvis and lumbar spine to detect disk hernia and spondylolisthesis is presented.\n\n\nRESULTS\nCoverage probabilities of confidence intervals for the partial area measures were good. The size of the test to compare partial area measures was appropriate. Values of PAI and sPA varied with the cost/prevalence ratio. In the example, the biomechanical measures were not found to have significantly different accuracy around the optimal point.\n\n\nCONCLUSIONS\nThe PAI and sPA associated with the optimal point were found to be reasonable and useful measures of accuracy."
},
{
"pmid": "19068445",
"title": "SVMs modeling for highly imbalanced classification.",
"abstract": "Traditional classification algorithms can be limited in their performance on highly unbalanced data sets. A popular stream of work for countering the problem of class imbalance has been the application of a sundry of sampling strategies. In this paper, we focus on designing modifications to support vector machines (SVMs) to appropriately tackle the problem of class imbalance. We incorporate different \"rebalance\" heuristics in SVM modeling, including cost-sensitive learning, and over- and undersampling. These SVM-based strategies are compared with various state-of-the-art approaches on a variety of data sets by using various metrics, including G-mean, area under the receiver operating characteristic curve, F-measure, and area under the precision/recall curve. We show that we are able to surpass or match the previously known best algorithms on each data set. In particular, of the four SVM variations considered in this paper, the novel granular SVMs-repetitive undersampling algorithm (GSVM-RU) is the best in terms of both effectiveness and efficiency. GSVM-RU is effective, as it can minimize the negative effect of information loss while maximizing the positive effect of data cleaning in the undersampling process. GSVM-RU is efficient by extracting much less support vectors and, hence, greatly speeding up SVM prediction."
},
{
"pmid": "28707503",
"title": "Two-way partial AUC and its properties.",
"abstract": "Simultaneous control on true positive rate and false positive rate is of significant importance in the performance evaluation of diagnostic tests. Most of the established literature utilizes partial area under receiver operating characteristic (ROC) curve with restrictions only on false positive rate (FPR), called FPR pAUC, as a performance measure. However, its indirect control on true positive rate (TPR) is conceptually and practically misleading. In this paper, a novel and intuitive performance measure, named as two-way pAUC, is proposed, which directly quantifies partial area under ROC curve with explicit restrictions on both TPR and FPR. To estimate two-way pAUC, we devise a nonparametric estimator. Based on the estimator, a bootstrap-assisted testing method for two-way pAUC comparison is established. Moreover, to evaluate possible covariate effects on two-way pAUC, a regression analysis framework is constructed. Asymptotic normalities of the methods are provided. Advantages of the proposed methods are illustrated by simulation and Wisconsin Breast Cancer Data. We encode the methods as a publicly available R package tpAUC."
},
{
"pmid": "14601762",
"title": "Partial AUC estimation and regression.",
"abstract": "Accurate diagnosis of disease is a critical part of health care. New diagnostic and screening tests must be evaluated based on their abilities to discriminate diseased from nondiseased states. The partial area under the receiver operating characteristic (ROC) curve is a measure of diagnostic test accuracy. We present an interpretation of the partial area under the curve (AUC), which gives rise to a nonparametric estimator. This estimator is more robust than existing estimators, which make parametric assumptions. We show that the robustness is gained with only a moderate loss in efficiency. We describe a regression modeling framework for making inference about covariate effects on the partial AUC. Such models can refine knowledge about test accuracy. Model parameters can be estimated using binary regression methods. We use the regression framework to compare two prostate-specific antigen biomarkers and to evaluate the dependence of biomarker accuracy on the time prior to clinical diagnosis of prostate cancer."
},
{
"pmid": "9040870",
"title": "Sampling variability of nonparametric estimates of the areas under receiver operating characteristic curves: an update.",
"abstract": "RATIONALE AND OBJECTIVES\nSeveral methods have been proposed for calculating the variances and covariances of nonparametric estimates of the area under receiver operating characteristic curves (AUC). The authors provide an explanation of the relationships between them and illustrate the factors that determine sampling variability.\n\n\nMETHODS\nThe authors investigated the algebraic links between two methods, that of \"placements\" and that of \"pseudovalues\" based on jackknifing. They also performed a numerical investigation of the comparative performance of the two methods.\n\n\nRESULTS\nThe \"placement\" method has a simple structure that illustrates the determinants of the sampling variability and does not require specialized software. The authors show that the pseudovalues used in the jackknife method are directly linked to the placement values.\n\n\nCONCLUSION\nBecause of the close link, borne out in a numeric investigation of the sampling variation, and because of the ease of computation, the choice between the two methods can be based on users' preferences. For indexes other than the AUC, however, the use of pseudovalues holds greater promise."
},
{
"pmid": "3203132",
"title": "Comparing the areas under two or more correlated receiver operating characteristic curves: a nonparametric approach.",
"abstract": "Methods of evaluating and comparing the performance of diagnostic tests are of increasing importance as new tests are developed and marketed. When a test is based on an observed variable that lies on a continuous or graded scale, an assessment of the overall value of the test can be made through the use of a receiver operating characteristic (ROC) curve. The curve is constructed by varying the cutpoint used to determine which values of the observed variable will be considered abnormal and then plotting the resulting sensitivities against the corresponding false positive rates. When two or more empirical curves are constructed based on tests performed on the same individuals, statistical analysis on differences between curves must take into account the correlated nature of the data. This paper presents a nonparametric approach to the analysis of areas under correlated ROC curves, by using the theory on generalized U-statistics to generate an estimated covariance matrix."
},
{
"pmid": "7069920",
"title": "Evaluating the yield of medical tests.",
"abstract": "A method is presented for evaluating the amount of information a medical test provides about individual patients. Emphasis is placed on the role of a test in the evaluation of patients with a chronic disease. In this context, the yield of a test is best interpreted by analyzing the prognostic information it furnishes. Information from the history, physical examination, and routine procedures should be used in assessing the yield of a new test. As an example, the method is applied to the use of the treadmill exercise test in evaluating the prognosis of patients with suspected coronary artery disease. The treadmill test is shown to provide surprisingly little prognostic information beyond that obtained from basic clinical measurements."
},
{
"pmid": "21484848",
"title": "On the C-statistics for evaluating overall adequacy of risk prediction procedures with censored survival data.",
"abstract": "For modern evidence-based medicine, a well thought-out risk scoring system for predicting the occurrence of a clinical event plays an important role in selecting prevention and treatment strategies. Such an index system is often established based on the subject's 'baseline' genetic or clinical markers via a working parametric or semi-parametric model. To evaluate the adequacy of such a system, C-statistics are routinely used in the medical literature to quantify the capacity of the estimated risk score in discriminating among subjects with different event times. The C-statistic provides a global assessment of a fitted survival model for the continuous event time rather than focussing on the prediction of bit-year survival for a fixed time. When the event time is possibly censored, however, the population parameters corresponding to the commonly used C-statistics may depend on the study-specific censoring distribution. In this article, we present a simple C-statistic without this shortcoming. The new procedure consistently estimates a conventional concordance measure which is free of censoring. We provide a large sample approximation to the distribution of this estimator for making inferences about the concordance measure. Results from numerical studies suggest that the new procedure performs well in finite sample."
},
{
"pmid": "27920368",
"title": "Use of the concordance index for predictors of censored survival data.",
"abstract": "The concordance index is often used to measure how well a biomarker predicts the time to an event. Estimators of the concordance index for predictors of right-censored data are reviewed, including those based on censored pairs, inverse probability weighting and a proportional-hazards model. Predictive and prognostic biomarkers often lose strength with time, and in this case the aforementioned statistics depend on the length of follow up. A semi-parametric estimator of the concordance index is developed that accommodates converging hazards through a single parameter in a Pareto model. Concordance index estimators are assessed through simulations, which demonstrate substantial bias of classical censored-pairs and proportional-hazards model estimators. Prognostic biomarkers in a cohort of women diagnosed with breast cancer are evaluated using new and classical estimators of the concordance index."
},
{
"pmid": "2251264",
"title": "Multisurface method of pattern separation for medical diagnosis applied to breast cytology.",
"abstract": "Multisurface pattern separation is a mathematical method for distinguishing between elements of two pattern sets. Each element of the pattern sets is comprised of various scalar observations. In this paper, we use the diagnosis of breast cytology to demonstrate the applicability of this method to medical diagnosis and decision making. Each of 11 cytological characteristics of breast fine-needle aspirates reported to differ between benign and malignant samples was graded 1 to 10 at the time of sample collection. Nine characteristics were found to differ significantly between benign and malignant samples. Mathematically, these values for each sample were represented by a point in a nine-dimensional space of real variables. Benign points were separated from malignant ones by planes determined by linear programming. Correct separation was accomplished in 369 of 370 samples (201 benign and 169 malignant). In the one misclassified malignant case, the fine-needle aspirate cytology was so definitely benign and the cytology of the excised cancer so definitely malignant that we believe the tumor was missed on aspiration. Our mathematical method is applicable to other medical diagnostic and decision-making problems."
},
{
"pmid": "25881487",
"title": "The precision--recall curve overcame the optimism of the receiver operating characteristic curve in rare diseases.",
"abstract": "OBJECTIVES\nCompare the area under the receiver operating characteristic curve (AUC) vs. the area under the precision-recall curve (AUPRC) in summarizing the performance of a diagnostic biomarker according to the disease prevalence.\n\n\nSTUDY DESIGN AND SETTING\nA simulation study was performed considering different sizes of diseased and nondiseased groups. Values of a biomarker were sampled with various variances and differences in mean values between the two groups. The AUCs and the AUPRCs were examined regarding their agreement and vs. the positive predictive value (PPV) and the negative predictive value (NPV) of the biomarker.\n\n\nRESULTS\nWith a disease prevalence of 50%, the AUC and the AUPRC showed high correlations with the PPV and the NPV (ρ > 0.95). With a prevalence of 1%, small PPV and AUPRC values (<0.2) but high AUC values (>0.9) were found. The AUPRC reflected better than the AUC the discriminant ability of the biomarker; it had a higher correlation with the PPV (ρ = 0.995 vs. 0.724; P < 0.001).\n\n\nCONCLUSION\nIn uncommon and rare diseases, the AUPRC should be preferred to the AUC because it summarizes better the performance of a biomarker."
}
] |
IEEE Journal of Translational Engineering in Health and Medicine | 31929952 | PMC6946021 | 10.1109/JTEHM.2019.2955458 | Cloud-Based Automated Clinical Decision Support System for Detection and Diagnosis of Lung Cancer in Chest CT | Lung cancer is a major cause for cancer-related deaths. The detection of pulmonary cancer in the early stages can highly increase survival rate. Manual delineation of lung nodules by radiologists is a tedious task. We developed a novel computer-aided decision support system for lung nodule detection based on a 3D Deep Convolutional Neural Network (3DDCNN) for assisting the radiologists. Our decision support system provides a second opinion to the radiologists in lung cancer diagnostic decision making. In order to leverage 3-dimensional information from Computed Tomography (CT) scans, we applied median intensity projection and multi-Region Proposal Network (mRPN) for automatic selection of potential region-of-interests. Our Computer Aided Diagnosis (CAD) system has been trained and validated using LUNA16, ANODE09, and LIDC-IDR datasets; the experiments demonstrate the superior performance of our system, attaining sensitivity, specificity, AUROC, accuracy, of 98.4%, 92%, 96% and 98.51% with 2.1 FPs per scan. We integrated cloud computing, trained and validated our Cloud-Based 3DDCNN on the datasets provided by Shanghai Sixth People’s Hospital, as well as LUNA16, ANODE09, and LIDC-IDR. Our system outperformed the state-of-the-art systems and obtained an impressive 98.7% sensitivity at 1.97 FPs per scan. This shows the potentials of deep learning, in combination with cloud computing, for accurate and efficient lung nodule detection via CT imaging, which could help doctors and radiologists in treating lung cancer patients. | II. Related Work. CAD systems are one of the most common means to improve the accuracy of cancer diagnosis by radiologists and to decrease the time required for interpretation of CT images. CAD systems are further categorised into Computer Aided Detection (CADe) systems and Computer Aided Diagnosis (CADx) systems. CADe systems assist in locating nodules in CT images acquired from different imaging modalities, whereas CADx systems characterize and classify the detected lesions as malignant or benign tumors. In general, a CAD system designed for the detection of pulmonary lesions (nodules) has two steps, namely candidate nodule detection and false-positive (FP) elimination. First, the Regions of Interest (ROIs) are selected in the input CT image, and then the lung nodule candidates are extracted. Teramoto and Fujita [8] used an Active Contour Model (ACM) filter for contrast enhancement and then thresholded the resultant images to screen candidate nodules. Supervised learning methods, namely linear discriminant analysis (LDA), the gray-scale distance transform, clustering (k-means clustering), connected component analysis, and patient-specific prior models, have been used in conventional approaches [9]. The FP reduction step classifies candidates as nodules or non-nodules using machine learning techniques; the main objective is to eliminate the false positives among the candidates produced in the previous step. Hierarchical Vector Quantization (HVQ), rule-based filters, LDA, Artificial Neural Networks (ANN), and Support Vector Machines (SVM) are a few supervised methods used for FP reduction. Random Forests (RF) have been reported to surpass SVMs in FP reduction in lung CAD systems. Regression-tree-based classifiers have shown good discrimination ability in reducing FPs for improved detection results [10]. Spatial and metabolic features in combination with an SVM [8] are another approach used for FP elimination. In the past few years, researchers have presented deep learning based CAD systems for cancer detection with promising results [11]. A Convolutional Neural Network (CNN) framework has been used for FP reduction [12]. Nodules were accurately classified by using the fully connected (FC) layers of a CNN integrated with an SVM classifier in [13]. Shen et al. [14] proposed a Multi-Crop CNN (MC-CNN), which recursively crops convolutional feature maps and applies max-pooling to them during training. The Multi-View CNN proposed by Setio et al. [15] combines three candidate detectors, one each for the sub-solid, solid, and large nodule categories, and then utilizes a fusion method to classify the input CT image. A 3-dimensional Fully Convolutional Network (FCN) based on Volumes Of Interest (VOIs) was employed for classification [16]; this approach produces a score map for the input VOI in a single pass, which is then used to train the classification CNN. Deep learning based models have also been proposed for candidate nodule detection [17], [18]. A multi-scale 3D-CNN model based on multi-scale Laplacian of Gaussian (LoG) filters and shape priors is proposed in [19]. In the past few years, CADe and CADx systems have mostly been researched independently. The major shortcoming of CADe systems for lung cancer is their inability to characterize the detected lesions: they assist the radiologists in detecting lung nodules but do not provide detailed radiological characteristics of the lesion, thereby missing information that is crucial for radiologists. CADx systems, on the other hand, do not automatically identify lesions and thus lack a high level of automation, which makes them unsuitable for routine clinical use. Therefore, a new and advanced CAD system is needed that incorporates the benefits of detection from CADe and diagnosis from CADx into a single system for better performance. The performance of CADx systems is evaluated in terms of computational efficiency, accuracy, sensitivity and specificity.
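The generic two-stage pipeline summarized above, candidate detection followed by FP reduction, can be sketched in a few lines of Python. The code below is only a schematic illustration on synthetic data: the intensity threshold, the hand-crafted candidate features and the SVM classifier are stand-ins for the far richer detectors and CNN-based classifiers of the cited systems, the helper detect_candidates is our own, and none of the numeric values are taken from those papers (feature scaling and full 3D processing are also omitted for brevity).

import numpy as np
from scipy import ndimage
from sklearn.svm import SVC

def detect_candidates(image, threshold=-400):
    # Step 1: candidate detection by intensity thresholding + connected components.
    labels, n = ndimage.label(image > threshold)
    feats, centers = [], []
    for i, region in enumerate(ndimage.find_objects(labels), start=1):
        blob = labels[region] == i
        size = int(blob.sum())
        extent = size / blob.size                      # fraction of the bounding box filled
        mean_int = float(image[region][blob].mean())
        feats.append([size, extent, mean_int])
        centers.append(tuple((s.start + s.stop) // 2 for s in region))
    return np.array(feats), centers

# Step 2: FP reduction with a supervised classifier (an SVM here) trained on
# per-candidate features; the training set below is purely synthetic.
rng = np.random.default_rng(2)
X_train = np.vstack([rng.normal([40, 0.7, -100], [10, 0.1, 50], (200, 3)),   # nodule-like
                     rng.normal([15, 0.3, -300], [10, 0.1, 50], (200, 3))])  # vessel/noise-like
y_train = np.r_[np.ones(200), np.zeros(200)]
clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

slice_hu = rng.normal(-700, 80, (128, 128))            # fake lung parenchyma (HU-like values)
slice_hu[60:68, 60:68] = rng.normal(-80, 20, (8, 8))   # fake nodule-like blob
feats, centers = detect_candidates(slice_hu)
if len(feats):
    for center, p in zip(centers, clf.predict_proba(feats)[:, 1]):
        print(f"candidate at {center}: P(nodule) = {p:.2f}")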
"29313949",
"28715341",
"22684487",
"26328955",
"21992380",
"28732268",
"20573538",
"21452728",
"28688283",
"27295650",
"27244717",
"23020972"
] | [
{
"pmid": "29313949",
"title": "Cancer statistics, 2018.",
"abstract": "Each year, the American Cancer Society estimates the numbers of new cancer cases and deaths that will occur in the United States and compiles the most recent data on cancer incidence, mortality, and survival. Incidence data, available through 2014, were collected by the Surveillance, Epidemiology, and End Results Program; the National Program of Cancer Registries; and the North American Association of Central Cancer Registries. Mortality data, available through 2015, were collected by the National Center for Health Statistics. In 2018, 1,735,350 new cancer cases and 609,640 cancer deaths are projected to occur in the United States. Over the past decade of data, the cancer incidence rate (2005-2014) was stable in women and declined by approximately 2% annually in men, while the cancer death rate (2006-2015) declined by about 1.5% annually in both men and women. The combined cancer death rate dropped continuously from 1991 to 2015 by a total of 26%, translating to approximately 2,378,600 fewer cancer deaths than would have been expected if death rates had remained at their peak. Of the 10 leading causes of death, only cancer declined from 2014 to 2015. In 2015, the cancer death rate was 14% higher in non-Hispanic blacks (NHBs) than non-Hispanic whites (NHWs) overall (death rate ratio [DRR], 1.14; 95% confidence interval [95% CI], 1.13-1.15), but the racial disparity was much larger for individuals aged <65 years (DRR, 1.31; 95% CI, 1.29-1.32) compared with those aged ≥65 years (DRR, 1.07; 95% CI, 1.06-1.09) and varied substantially by state. For example, the cancer death rate was lower in NHBs than NHWs in Massachusetts for all ages and in New York for individuals aged ≥65 years, whereas for those aged <65 years, it was 3 times higher in NHBs in the District of Columbia (DRR, 2.89; 95% CI, 2.16-3.91) and about 50% higher in Wisconsin (DRR, 1.78; 95% CI, 1.56-2.02), Kansas (DRR, 1.51; 95% CI, 1.25-1.81), Louisiana (DRR, 1.49; 95% CI, 1.38-1.60), Illinois (DRR, 1.48; 95% CI, 1.39-1.57), and California (DRR, 1.45; 95% CI, 1.38-1.54). Larger racial inequalities in young and middle-aged adults probably partly reflect less access to high-quality health care. CA Cancer J Clin 2018;68:7-30. © 2018 American Cancer Society."
},
{
"pmid": "28715341",
"title": "An Automatic Detection System of Lung Nodule Based on Multigroup Patch-Based Deep Learning Network.",
"abstract": "High-efficiency lung nodule detection dramatically contributes to the risk assessment of lung cancer. It is a significant and challenging task to quickly locate the exact positions of lung nodules. Extensive work has been done by researchers around this domain for approximately two decades. However, previous computer-aided detection (CADe) schemes are mostly intricate and time-consuming since they may require more image processing modules, such as the computed tomography image transformation, the lung nodule segmentation, and the feature extraction, to construct a whole CADe system. It is difficult for these schemes to process and analyze enormous data when the medical images continue to increase. Besides, some state of the art deep learning schemes may be strict in the standard of database. This study proposes an effective lung nodule detection scheme based on multigroup patches cut out from the lung images, which are enhanced by the Frangi filter. Through combining two groups of images, a four-channel convolution neural networks model is designed to learn the knowledge of radiologists for detecting nodules of four levels. This CADe scheme can acquire the sensitivity of 80.06% with 4.7 false positives per scan and the sensitivity of 94% with 15.1 false positives per scan. The results demonstrate that the multigroup patch-based learning system is efficient to improve the performance of lung nodule detection and greatly reduce the false positives under a huge amount of image data."
},
{
"pmid": "22684487",
"title": "Fast lung nodule detection in chest CT images using cylindrical nodule-enhancement filter.",
"abstract": "PURPOSE\nExisting computer-aided detection schemes for lung nodule detection require a large number of calculations and tens of minutes per case; there is a large gap between image acquisition time and nodule detection time. In this study, we propose a fast detection scheme of lung nodule in chest CT images using cylindrical nodule-enhancement filter with the aim of improving the workflow for diagnosis in CT examinations.\n\n\nMETHODS\nProposed detection scheme involves segmentation of the lung region, preprocessing, nodule enhancement, further segmentation, and false-positive (FP) reduction. As a nodule enhancement, our method employs a cylindrical shape filter to reduce the number of calculations. False positives (FPs) in nodule candidates are reduced using support vector machine and seven types of characteristic parameters.\n\n\nRESULTS\nThe detection performance and speed were evaluated experimentally using Lung Image Database Consortium publicly available image database. A 5-fold cross-validation result demonstrates that our method correctly detects 80 % of nodules with 4.2 FPs per case, and detection speed of proposed method is also 4-36 times faster than existing methods.\n\n\nCONCLUSION\nDetection performance and speed indicate that our method may be useful for fast detection of lung nodules in CT images."
},
{
"pmid": "26328955",
"title": "Hybrid detection of lung nodules on CT scan images.",
"abstract": "PURPOSE\nThe diversity of lung nodules poses difficulty for the current computer-aided diagnostic (CAD) schemes for lung nodule detection on computed tomography (CT) scan images, especially in large-scale CT screening studies. We proposed a novel CAD scheme based on a hybrid method to address the challenges of detection in diverse lung nodules.\n\n\nMETHODS\nThe hybrid method proposed in this paper integrates several existing and widely used algorithms in the field of nodule detection, including morphological operation, dot-enhancement based on Hessian matrix, fuzzy connectedness segmentation, local density maximum algorithm, geodesic distance map, and regression tree classification. All of the adopted algorithms were organized into tree structures with multi-nodes. Each node in the tree structure aimed to deal with one type of lung nodule.\n\n\nRESULTS\nThe method has been evaluated on 294 CT scans from the Lung Image Database Consortium (LIDC) dataset. The CT scans were randomly divided into two independent subsets: a training set (196 scans) and a test set (98 scans). In total, the 294 CT scans contained 631 lung nodules, which were annotated by at least two radiologists participating in the LIDC project. The sensitivity and false positive per scan for the training set were 87% and 2.61%. The sensitivity and false positive per scan for the testing set were 85.2% and 3.13%.\n\n\nCONCLUSIONS\nThe proposed hybrid method yielded high performance on the evaluation dataset and exhibits advantages over existing CAD schemes. We believe that the present method would be useful for a wide variety of CT imaging protocols used in both routine diagnosis and screening studies."
},
{
"pmid": "21992380",
"title": "A novel computer-aided lung nodule detection system for CT images.",
"abstract": "PURPOSE\nThe paper presents a complete computer-aided detection (CAD) system for the detection of lung nodules in computed tomography images. A new mixed feature selection and classification methodology is applied for the first time on a difficult medical image analysis problem.\n\n\nMETHODS\nThe CAD system was trained and tested on images from the publicly available Lung Image Database Consortium (LIDC) on the National Cancer Institute website. The detection stage of the system consists of a nodule segmentation method based on nodule and vessel enhancement filters and a computed divergence feature to locate the centers of the nodule clusters. In the subsequent classification stage, invariant features, defined on a gauge coordinates system, are used to differentiate between real nodules and some forms of blood vessels that are easily generating false positive detections. The performance of the novel feature-selective classifier based on genetic algorithms and artificial neural networks (ANNs) is compared with that of two other established classifiers, namely, support vector machines (SVMs) and fixed-topology neural networks. A set of 235 randomly selected cases from the LIDC database was used to train the CAD system. The system has been tested on 125 independent cases from the LIDC database.\n\n\nRESULTS\nThe overall performance of the fixed-topology ANN classifier slightly exceeds that of the other classifiers, provided the number of internal ANN nodes is chosen well. Making educated guesses about the number of internal ANN nodes is not needed in the new feature-selective classifier, and therefore this classifier remains interesting due to its flexibility and adaptability to the complexity of the classification problem to be solved. Our fixed-topology ANN classifier with 11 hidden nodes reaches a detection sensitivity of 87.5% with an average of four false positives per scan, for nodules with diameter greater than or equal to 3 mm. Analysis of the false positive items reveals that a considerable proportion (18%) of them are smaller nodules, less than 3 mm in diameter.\n\n\nCONCLUSIONS\nA complete CAD system incorporating novel features is presented, and its performance with three separate classifiers is compared and analyzed. The overall performance of our CAD system equipped with any of the three classifiers is well with respect to other methods described in literature."
},
{
"pmid": "28732268",
"title": "Validation, comparison, and combination of algorithms for automatic detection of pulmonary nodules in computed tomography images: The LUNA16 challenge.",
"abstract": "Automatic detection of pulmonary nodules in thoracic computed tomography (CT) scans has been an active area of research for the last two decades. However, there have only been few studies that provide a comparative performance evaluation of different systems on a common database. We have therefore set up the LUNA16 challenge, an objective evaluation framework for automatic nodule detection algorithms using the largest publicly available reference database of chest CT scans, the LIDC-IDRI data set. In LUNA16, participants develop their algorithm and upload their predictions on 888 CT scans in one of the two tracks: 1) the complete nodule detection track where a complete CAD system should be developed, or 2) the false positive reduction track where a provided set of nodule candidates should be classified. This paper describes the setup of LUNA16 and presents the results of the challenge so far. Moreover, the impact of combining individual systems on the detection performance was also investigated. It was observed that the leading solutions employed convolutional networks and used the provided set of nodule candidates. The combination of these solutions achieved an excellent sensitivity of over 95% at fewer than 1.0 false positives per scan. This highlights the potential of combining algorithms to improve the detection performance. Our observer study with four expert readers has shown that the best system detects nodules that were missed by expert readers who originally annotated the LIDC-IDRI data. We released this set of additional nodules for further development of CAD systems."
},
{
"pmid": "20573538",
"title": "Comparing and combining algorithms for computer-aided detection of pulmonary nodules in computed tomography scans: The ANODE09 study.",
"abstract": "Numerous publications and commercial systems are available that deal with automatic detection of pulmonary nodules in thoracic computed tomography scans, but a comparative study where many systems are applied to the same data set has not yet been performed. This paper introduces ANODE09 ( http://anode09.isi.uu.nl), a database of 55 scans from a lung cancer screening program and a web-based framework for objective evaluation of nodule detection algorithms. Any team can upload results to facilitate benchmarking. The performance of six algorithms for which results are available are compared; five from academic groups and one commercially available system. A method to combine the output of multiple systems is proposed. Results show a substantial performance difference between algorithms, and demonstrate that combining the output of algorithms leads to marked performance improvements."
},
{
"pmid": "21452728",
"title": "The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI): a completed reference database of lung nodules on CT scans.",
"abstract": "PURPOSE\nThe development of computer-aided diagnostic (CAD) methods for lung nodule detection, classification, and quantitative assessment can be facilitated through a well-characterized repository of computed tomography (CT) scans. The Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) completed such a database, establishing a publicly available reference for the medical imaging research community. Initiated by the National Cancer Institute (NCI), further advanced by the Foundation for the National Institutes of Health (FNIH), and accompanied by the Food and Drug Administration (FDA) through active participation, this public-private partnership demonstrates the success of a consortium founded on a consensus-based process.\n\n\nMETHODS\nSeven academic centers and eight medical imaging companies collaborated to identify, address, and resolve challenging organizational, technical, and clinical issues to provide a solid foundation for a robust database. The LIDC/IDRI Database contains 1018 cases, each of which includes images from a clinical thoracic CT scan and an associated XML file that records the results of a two-phase image annotation process performed by four experienced thoracic radiologists. In the initial blinded-read phase, each radiologist independently reviewed each CT scan and marked lesions belonging to one of three categories (\"nodule > or =3 mm,\" \"nodule <3 mm,\" and \"non-nodule > or =3 mm\"). In the subsequent unblinded-read phase, each radiologist independently reviewed their own marks along with the anonymized marks of the three other radiologists to render a final opinion. The goal of this process was to identify as completely as possible all lung nodules in each CT scan without requiring forced consensus.\n\n\nRESULTS\nThe Database contains 7371 lesions marked \"nodule\" by at least one radiologist. 2669 of these lesions were marked \"nodule > or =3 mm\" by at least one radiologist, of which 928 (34.7%) received such marks from all four radiologists. These 2669 lesions include nodule outlines and subjective nodule characteristic ratings.\n\n\nCONCLUSIONS\nThe LIDC/IDRI Database is expected to provide an essential medical imaging research resource to spur CAD development, validation, and dissemination in clinical practice."
},
{
"pmid": "28688283",
"title": "Central focused convolutional neural networks: Developing a data-driven model for lung nodule segmentation.",
"abstract": "Accurate lung nodule segmentation from computed tomography (CT) images is of great importance for image-driven lung cancer analysis. However, the heterogeneity of lung nodules and the presence of similar visual characteristics between nodules and their surroundings make it difficult for robust nodule segmentation. In this study, we propose a data-driven model, termed the Central Focused Convolutional Neural Networks (CF-CNN), to segment lung nodules from heterogeneous CT images. Our approach combines two key insights: 1) the proposed model captures a diverse set of nodule-sensitive features from both 3-D and 2-D CT images simultaneously; 2) when classifying an image voxel, the effects of its neighbor voxels can vary according to their spatial locations. We describe this phenomenon by proposing a novel central pooling layer retaining much information on voxel patch center, followed by a multi-scale patch learning strategy. Moreover, we design a weighted sampling to facilitate the model training, where training samples are selected according to their degree of segmentation difficulty. The proposed method has been extensively evaluated on the public LIDC dataset including 893 nodules and an independent dataset with 74 nodules from Guangdong General Hospital (GDGH). We showed that CF-CNN achieved superior segmentation performance with average dice scores of 82.15% and 80.02% for the two datasets respectively. Moreover, we compared our results with the inter-radiologists consistency on LIDC dataset, showing a difference in average dice score of only 1.98%."
},
{
"pmid": "27295650",
"title": "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks.",
"abstract": "State-of-the-art object detection networks depend on region proposal algorithms to hypothesize object locations. Advances like SPPnet [1] and Fast R-CNN [2] have reduced the running time of these detection networks, exposing region proposal computation as a bottleneck. In this work, we introduce a Region Proposal Network (RPN) that shares full-image convolutional features with the detection network, thus enabling nearly cost-free region proposals. An RPN is a fully convolutional network that simultaneously predicts object bounds and objectness scores at each position. The RPN is trained end-to-end to generate high-quality region proposals, which are used by Fast R-CNN for detection. We further merge RPN and Fast R-CNN into a single network by sharing their convolutional features-using the recently popular terminology of neural networks with 'attention' mechanisms, the RPN component tells the unified network where to look. For the very deep VGG-16 model [3] , our detection system has a frame rate of 5 fps (including all steps) on a GPU, while achieving state-of-the-art object detection accuracy on PASCAL VOC 2007, 2012, and MS COCO datasets with only 300 proposals per image. In ILSVRC and COCO 2015 competitions, Faster R-CNN and RPN are the foundations of the 1st-place winning entries in several tracks. Code has been made publicly available."
},
{
"pmid": "27244717",
"title": "Fully Convolutional Networks for Semantic Segmentation.",
"abstract": "Convolutional networks are powerful visual models that yield hierarchies of features. We show that convolutional networks by themselves, trained end-to-end, pixels-to-pixels, improve on the previous best result in semantic segmentation. Our key insight is to build \"fully convolutional\" networks that take input of arbitrary size and produce correspondingly-sized output with efficient inference and learning. We define and detail the space of fully convolutional networks, explain their application to spatially dense prediction tasks, and draw connections to prior models. We adapt contemporary classification networks (AlexNet, the VGG net, and GoogLeNet) into fully convolutional networks and transfer their learned representations by fine-tuning to the segmentation task. We then define a skip architecture that combines semantic information from a deep, coarse layer with appearance information from a shallow, fine layer to produce accurate and detailed segmentations. Our fully convolutional networks achieve improved segmentation of PASCAL VOC (30% relative improvement to 67.2% mean IU on 2012), NYUDv2, SIFT Flow, and PASCAL-Context, while inference takes one tenth of a second for a typical image."
},
{
"pmid": "23020972",
"title": "Automatic detection of lung nodules in CT datasets based on stable 3D mass-spring models.",
"abstract": "We propose a computer-aided detection (CAD) system which can detect small-sized (from 3mm) pulmonary nodules in spiral CT scans. A pulmonary nodule is a small lesion in the lungs, round-shaped (parenchymal nodule) or worm-shaped (juxtapleural nodule). Both kinds of lesions have a radio-density greater than lung parenchyma, thus appearing white on the images. Lung nodules might indicate a lung cancer and their early stage detection arguably improves the patient survival rate. CT is considered to be the most accurate imaging modality for nodule detection. However, the large amount of data per examination makes the full analysis difficult, leading to omission of nodules by the radiologist. We developed an advanced computerized method for the automatic detection of internal and juxtapleural nodules on low-dose and thin-slice lung CT scan. This method consists of an initial selection of nodule candidates list, the segmentation of each candidate nodule and the classification of the features computed for each segmented nodule candidate.The presented CAD system is aimed to reduce the number of omissions and to decrease the radiologist scan examination time. Our system locates with the same scheme both internal and juxtapleural nodules. For a correct volume segmentation of the lung parenchyma, the system uses a Region Growing (RG) algorithm and an opening process for including the juxtapleural nodules. The segmentation and the extraction of the suspected nodular lesions from CT images by a lung CAD system constitutes a hard task. In order to solve this key problem, we use a new Stable 3D Mass-Spring Model (MSM) combined with a spline curves reconstruction process. Our model represents concurrently the characteristic gray value range, the directed contour information as well as shape knowledge, which leads to a much more robust and efficient segmentation process. For distinguishing the real nodules among nodule candidates, an additional classification step is applied; furthermore, a neural network is applied to reduce the false positives (FPs) after a double-threshold cut. The system performance was tested on a set of 84 scans made available by the Lung Image Database Consortium (LIDC) annotated by four expert radiologists. The detection rate of the system is 97% with 6.1 FPs/CT. A reduction to 2.5 FPs/CT is achieved at 88% sensitivity. We presented a new 3D segmentation technique for lung nodules in CT datasets, using deformable MSMs. The result is a efficient segmentation process able to converge, identifying the shape of the generic ROI, after a few iterations. Our suitable results show that the use of the 3D AC model and the feature analysis based FPs reduction process constitutes an accurate approach to the segmentation and the classification of lung nodules."
}
] |
Journal of Clinical Medicine | 31817530 | PMC6947400 | 10.3390/jcm8122154 | Decision Support for the Optimization of Provider Staffing for Hospital Emergency Departments with a Queue-Based Approach | Deployment or distribution of valuable medical resources has emerged as an increasing challenge to hospital administrators and health policy makers. The hospital emergency department (HED) census and workload can be highly variable. Improvement of emergency services is an important stage in the development of the healthcare system and research on the optimal deployment of medical resources appears to be an important issue for HED long-term management. HED performance, in terms of patient flow and available resources, can be studied using the queue-based approach. The kernel point of this research is to approach the optimal cost on logistics using queuing theory. To model the proposed approach for a qualitative profile, a generic HED system is mapped into the M/M/R/N queue-based model, which assumes an R-server queuing system with Poisson arrivals, exponentially distributed service times and a system capacity of N. A comprehensive quantitative mathematical analysis on the cost pattern was done, while relevant simulations were also conducted to validate the proposed optimization model. The design illustration is presented in this paper to demonstrate the application scenario in a HED platform. Hence, the proposed approach provides a feasibly cost-oriented decision support framework to adapt a HED management requirement. | 2. Related Work. HED crowding represents an important issue that may affect the quality of and access to health care. Accordingly, the optimization of average waiting times has become a focus across many mainstream hospitals. As defined by the Canadian Association of Emergency Physicians [6], HED overcrowding is a situation in which the demand for services exceeds the ability of health care professionals to provide care within a reasonable length of time. As stated in [7], significant variation in HED patient arrival rates necessitates the adjustment of staffing patterns to optimize the timely care of patients. Green et al. [7] collected detailed HED arrival data from an urban hospital and used a queue-based analysis to gain insight into how to change provider staffing to decrease the proportion of patients who leave without being seen. However, no mathematical optimization was addressed in these studies [8,9,10]. Finamore et al. [6] described an innovative use of a satellite clinic to prevent patients from returning to the HED for care on a scheduled basis. Their strategy allows patients returning for follow-up diagnostics or treatment to bypass the main HED. The proposed HED satellite clinic may shorten waiting times in multiple ways, such as increasing capacity by removing returning patients from the pool of patients requiring care in the HED, and by creating a separate registration area and a separately staffed treatment area. The HED visit data were used to measure crowding and completion of waiting room time, treatment time, and boarding time for all patients treated and released or admitted to a single HED during 2010. In [11], the authors conducted a statistical analysis and concluded that the HED census at arrival demonstrated variation in crowding exposure over time. In the work of Wiler et al.
[12], the authors developed an agent-based simulation model to evaluate fast track strategies (FTS) applied in the HED to reduce patient waiting time. By and large, cost optimization of HED management is not a concern in these studies [5,11,12]. Vass and Szabo [13] evaluated 2195 questionnaires in the HED situated in Mures County, Romania, over a period of three years (2010–2013). Their research reported that long waiting times were the most important complaint in patient satisfaction surveys. To characterize the waiting times, only a specific M/M/3 queuing model was considered in their work to demonstrate the computational details. The work of [13] motivated us to consider whether it is possible to provide an effective and feasible approach to decision support for the optimization of provider staffing under cost constraints for HEDs with a more elaborate queue-based framework. This research generalizes the queuing model of [13] into the M/M/R/N queuing framework in terms of three practical aspects: (1) The number of medical servers (provider staffing) is treated as a system parameter rather than a fixed quantity; such a dynamic staffing level enables a hospital to quantify the cost patterns and the alleviation of HED waiting times. (2) The space available in the HED is limited in every hospital; the fourth factor (N) in the notation of the M/M/R/N model reflects the fact that only N patients are allowed into the HED waiting rooms, so as not to exacerbate overcrowding. (3) Exact mathematical expressions are derived in detail, and the resulting cost formulation is used to provide generic decision support for hospital administrators.
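As a concrete illustration of the M/M/R/N setting referred to above, the sketch below evaluates the standard steady-state quantities of such a queue (blocking probability, mean census, and mean waiting times via Little's law) for several staffing levels R. This is a generic textbook computation rather than the paper's model: the helper mmrn_metrics is ours, the arrival rate, service rate and capacity are hypothetical values chosen for the example, and the cost formulation itself is not reproduced.

import math

def mmrn_metrics(lam, mu, R, N):
    # Steady-state metrics of an M/M/R/N queue: R servers, total capacity N >= R,
    # Poisson arrivals at rate lam, exponential service at rate mu per server.
    a = lam / mu
    p = [a**n / math.factorial(n) if n <= R
         else a**n / (math.factorial(R) * R**(n - R))
         for n in range(N + 1)]                      # unnormalized stationary probabilities
    total = sum(p)
    p = [x / total for x in p]
    p_block = p[N]                                   # arriving patient finds the HED full
    lam_eff = lam * (1 - p_block)                    # effective arrival rate
    L = sum(n * pn for n, pn in enumerate(p))        # mean number of patients in the system
    Lq = sum((n - R) * pn for n, pn in enumerate(p) if n > R)
    return {"P_block": p_block, "L": L, "Lq": Lq,
            "W": L / lam_eff, "Wq": Lq / lam_eff}    # mean times by Little's law

# Hypothetical numbers only: 10 arrivals/hour, 3 patients/hour per provider, capacity N = 12.
for R in (3, 4, 5):
    print(R, {k: round(v, 3) for k, v in mmrn_metrics(lam=10, mu=3, R=R, N=12).items()})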
"18306043",
"21553561",
"19914473",
"16365329",
"11157291",
"15111915",
"18433933",
"22168190",
"20058527",
"18387699",
"15114227"
] | [
{
"pmid": "18306043",
"title": "Cost reduction strategies for emergency services: insurance role, practice changes and patients accountability.",
"abstract": "Progress in medicine and the subsequent extension of health coverage has meant that health expenditure has increased sharply in Western countries. In the United States, this rise was precipitated in the 1980s, compounded by an increase in drug consumption which prompted the government to re-examine its financial support to care delivery, most notably in hospital care and emergencies services. In California for example, 50 emergency service providers were closed between 1990 and 2000, and nine in 1999-2000 alone. In that State, only 355 hospitals (out of 568) have maintained emergency services departments (Darves, WebMB, 2001). Reforming hospital Emergency Department (ED) operations requires caution not only because the media pay a lot of attention to ED operations, but also because it raises ethical issues: this became more apparent with the enactment of the EMTALA which stipulates that federally funded hospitals are required to give emergency aid in order to \"stabilize\" a patient suffering from an \"emergency medical condition\" before discharging or transferring that patient to another facility. While in essence the law aims to preserve patient access to care, physicians assert that the EMTALA leads to more patients seeking care for non-urgent conditions in EDs (GAO, Report to Congressional Committees, 2001), leading to overcrowding, delayed care for patients with true emergency needs, and forcing hospitals to divert ambulances to other facilities resulting in further delays in urgent care. Also, fewer physicians are willing to be on-call in emergency departments because the EMTALA law requires on-call physicians to provide uncompensated care. Thus there is a need to find a balance between appropriate care to be provided to ED patients, and low costs since uncompensated care is not covered by state or federal funds. This concerns, first and foremost, hospitals that provide a greater amount of uncompensated care (e.g. hospitals serving communities with a higher population of illegal immigrants). Looking at the intrinsic causes of high ED costs, the paper first explains why costs of care provided in EDs are high, and look at a major cause of high ED costs: overcrowding and ED users' characteristics. This is followed by a discussion on a much-debated factor: the use of EDs for non-emergency conditions, a practice which has often been accused of disproportionately raising costs. We look at various mechanisms used either to divert or prevent the patient from using ED: these include triage services; and the role of HMOs in the ED chain of care: though the US government has increasingly relied on Managed Care organizations to contain costs (e.g. Medicaid and Medicare Managed Care), do HMOs make a difference when it comes to ED costs? Of particular interest is the family physician acting as a gatekeeper, and the legislation that was enacted to protect those who bypass the referral system. We then look at the other end of the ED chain (i.e. the recipient): the financial responsibility of ED users has increased. Alternative providers such as walk-in clinics are increasingly common. EDs also attempt to reengineer their operations to curb costs. While the data are mostly applicable to a private health care system (e.g. the US), the article, using a critical assessment of the existing literature, has implications for other EDs generally, wherever they operate, since every ED faces similar funding problems."
},
{
"pmid": "21553561",
"title": "Cost analysis of emergency department.",
"abstract": "This paper is intended to examine both clinical and economic data concerning the activity of an emergency department of an Italian primary Hospital. Real data referring to arrivals, waiting times, service times, severity (according to triage classification) of patients' condition collected along the whole 2009 are matched up with the relevant accounting and economic information concerning the costs faced. A new methodological approach is implemented in order to identify a \"standard production cost\" and its variability. We believe that this kind of analysis well fits the federalizing process that Italy is experiencing. In fact the federal reform is driving our Country toward a decentralized provision and funding of local public services. The health care services are \"fundamental\" under the provisions of the law that in turn implies that a standard cost has to be defined for its funding. The standard cost (as it is defined by the law) relies on the concepts of appropriateness and efficiency in the production of the health care service, assuming a standard quality level as target. The identification and measurement of health care costs is therefore a crucial task propaedeutic to health services economic evaluation. Various guidelines with different amount of details have been set up for costing methods which, however, are defined in simplified frameworks and using fictious data. This study is a first attempt to proceed in the direction of a precise definition of the costs inherent to the emergency department activity."
},
{
"pmid": "19914473",
"title": "Shortening the wait: a strategy to reduce waiting times in the emergency department.",
"abstract": "Emergency Department crowding (EDC), extended wait times, and the issues arising as a result are well described in the health-care literature. Accordingly, reducing waiting times has become a focus across Canada. Less-urgent patient presentations represent a large proportion of the individuals presenting for care in Canadian emergency departments (ED). This patient population contributes to congestion in the ED. In light of these issues, an innovative program is being trialed at Burnaby Hospital, in the lower mainland of British Columbia. The goals of the program include: a reduction of EDC, a shortening of the duration of time between patient presentation and treatment, and an increase reported levels of patient satisfaction."
},
{
"pmid": "16365329",
"title": "Using queueing theory to increase the effectiveness of emergency department provider staffing.",
"abstract": "OBJECTIVES\nSignificant variation in emergency department (ED) patient arrival rates necessitates the adjustment of staffing patterns to optimize the timely care of patients. This study evaluated the effectiveness of a queueing model in identifying provider staffing patterns to reduce the fraction of patients who leave without being seen.\n\n\nMETHODS\nThe authors collected detailed ED arrival data from an urban hospital and used a Lag SIPP queueing analysis to gain insights on how to change provider staffing to decrease the proportion of patients who leave without being seen. The authors then compared this proportion for the same 39-week period before and after the resulting changes.\n\n\nRESULTS\nDespite an increase in arrival volume of 1,078 patients (6.3%), an average increase in provider hours of 12 hours per week (3.1%) resulted in 258 fewer patients who left without being seen. This represents a decrease in the proportion of patients who left without being seen by 22.9%. Restricting attention to a four-day subset of the week during which there was no increase in total provider hours, a reallocation of providers based on the queueing model resulted in 161 fewer patients who left without being seen (21.7%), despite an additional 548 patients (5.5%) arriving in the second half of the study.\n\n\nCONCLUSIONS\nTimely access to a provider is a critical dimension of ED quality performance. In an environment in which EDs are often understaffed, analyses of arrival patterns and the use of queueing models can be extremely useful in identifying the most effective allocation of staff."
},
{
"pmid": "11157291",
"title": "Frequent overcrowding in U.S. emergency departments.",
"abstract": "OBJECTIVE\nTo describe the definition, extent, and factors associated with overcrowding in emergency departments (EDs) in the United States as perceived by ED directors.\n\n\nMETHODS\nSurveys were mailed to a random sample of EDs in all 50 states. Questions included ED census, frequency, impact, and determination of overcrowding. Respondents were asked to rank perceived causes using a five-point Likert scale.\n\n\nRESULTS\nOf 836 directors surveyed, 575 (69%) responded, and 525 (91%) reported overcrowding as a problem. Common definitions of overcrowding (>70%) included: patients in hallways, all ED beds occupied, full waiting rooms >6 hours/day, and acutely ill patients who wait >60 minutes to see a physician. Overcrowding situations were similar in academic EDs (94%) and private hospital EDs (91%). Emergency departments serving populations < or =250,000 had less severe overcrowding (87%) than EDs serving larger areas (96%). Overcrowding occurred most often several times per week (53%), but 39% of EDs reported daily overcrowding. On a 1-5 scale (+/-SD), causes of overcrowding included high patient acuity (4.3 +/- 0.9), hospital bed shortage (4.2 +/- 1.1), high ED patient volume (3.8 +/- 1.2), radiology and lab delays (3.3 +/- 1.2), and insufficient ED space (3.3 +/- 1.3). Thirty-three percent reported that a few patients had actual poor outcomes as a result of overcrowding.\n\n\nCONCLUSIONS\nEpisodic, but frequent, overcrowding is a significant problem in academic, county, and private hospital EDs in urban and rural settings. Its causes are complex and multifactorial."
},
{
"pmid": "15111915",
"title": "Access to emergency care: restricted by long waiting times and cost and coverage concerns.",
"abstract": "STUDY OBJECTIVE\nWe monitor progress toward Healthy People 2010 objectives of reducing health disparities and decreasing delay and difficulty in access to emergency care.\n\n\nMETHODS\nThis was a secondary analysis of 2001 National Health Interview Survey interviews of 33,326 adults to provide population-based estimates of self-reported delay, difficulty, or inability to get care from a hospital emergency department (ED) in the preceding 12 months.\n\n\nRESULTS\nAbout 7.7% of the estimated 36.6 million adults who sought care in a hospital ED in the preceding 12 months reported a delay in receiving care, having difficulty receiving care, or being unable to receive care. Waiting times were the most frequently noted cause of problems. Concerns about service costs and insurance coverage were also commonly cited access barriers. Access problems were more likely to be reported by adults without health insurance, younger adults, adults in fair or poor health, and adults with annual incomes of less than 20,000 dollars.\n\n\nCONCLUSION\nSelf-reported access to ED care is impeded by prolonged waiting times and by cost and insurance coverage concerns. These access problems are occurring more frequently among groups that face multiple social and economic disadvantages. Hospital operational changes to reduce ED treatment delays and health care financing policies that reduce insurance coverage inequities may both be needed to meet these Healthy People 2010 objectives."
},
{
"pmid": "18433933",
"title": "Systematic review of emergency department crowding: causes, effects, and solutions.",
"abstract": "Emergency department (ED) crowding represents an international crisis that may affect the quality and access of health care. We conducted a comprehensive PubMed search to identify articles that (1) studied causes, effects, or solutions of ED crowding; (2) described data collection and analysis methodology; (3) occurred in a general ED setting; and (4) focused on everyday crowding. Two independent reviewers identified the relevant articles by consensus. We applied a 5-level quality assessment tool to grade the methodology of each study. From 4,271 abstracts and 188 full-text articles, the reviewers identified 93 articles meeting the inclusion criteria. A total of 33 articles studied causes, 27 articles studied effects, and 40 articles studied solutions of ED crowding. Commonly studied causes of crowding included nonurgent visits, \"frequent-flyer\" patients, influenza season, inadequate staffing, inpatient boarding, and hospital bed shortages. Commonly studied effects of crowding included patient mortality, transport delays, treatment delays, ambulance diversion, patient elopement, and financial effect. Commonly studied solutions of crowding included additional personnel, observation units, hospital bed access, nonurgent referrals, ambulance diversion, destination control, crowding measures, and queuing theory. The results illustrated the complex, multifaceted characteristics of the ED crowding problem. Additional high-quality studies may provide valuable contributions toward better understanding and alleviating the daily crisis. This structured overview of the literature may help to identify future directions for the crowding research agenda."
},
{
"pmid": "22168190",
"title": "Comparison of methods for measuring crowding and its effects on length of stay in the emergency department.",
"abstract": "OBJECTIVES\nThis consensus conference presentation article focuses on methods of measuring crowding. The authors compare daily versus hourly measures, static versus dynamic measures, and the use of linear or logistic regression models versus survival analysis models to estimate the effect of crowding on an outcome.\n\n\nMETHODS\nEmergency department (ED) visit data were used to measure crowding and completion of waiting room time, treatment time, and boarding time for all patients treated and released or admitted to a single ED during 2010 (excluding patients who left without being seen). Crowding was characterized according to total ED census. First, total ED census on a daily and hourly basis throughout the 1-year study period was measured, and the ratios of daily and hourly census to the ED's median daily and hourly census were computed. Second, the person-based ED visit data set was transposed to person-period data. Multiple records per patient were created, whereby each record represented a consecutive 15-minute interval during each patient's ED length of stay (LOS). The variation in crowding measured statically (i.e., crowding at arrival or mean crowding throughout the shift in which the patient arrived) or dynamically (every 15 minutes throughout each patient's ED LOS) were compared. Within each phase of care, the authors divided each individual crowding value by the median crowding value of all 15-minute intervals to create a time-varying ED census ratio. For the two static measures, the ratio between each patient's ED census at arrival and the overall median ED census at arrival was computed, as well as the ratio between the mean shift ED census (based on the shift in which the patient arrived) and the study ED's overall mean shift ED census. Finally, the effect of crowding on the probability of completing different phases of emergency care was compared when estimated using a log-linear regression model versus a discrete time survival analysis model.\n\n\nRESULTS\nDuring the 1-year study period, for 9% of the hours, total ED census was at least 50% greater than the median hourly census (median, 36). In contrast, on none of the days was total ED census at least 50% greater than the median daily census (median, 161). ED census at arrival and time-varying ED census yielded greater variation in crowding exposure compared to mean shift census for all three phases of emergency care. When estimating the effect of crowding on the completion of care, the discrete time survival analysis model fit the observed data better than the log-linear regression models. The discrete time survival analysis model also determined that the effect of crowding on care completion varied during patients' ED LOS.\n\n\nCONCLUSIONS\nCrowding measured at the daily level will mask much of the variation in crowding that occurs within a 24-hour period. ED census at arrival demonstrated similar variation in crowding exposure as time-varying ED census. Discrete time survival analysis is a more appropriate approach for estimating the effect of crowding on an outcome."
},
{
"pmid": "20058527",
"title": "What is a 'generic' hospital model?--a comparison of 'generic' and 'specific' hospital models of emergency patient flows.",
"abstract": "The paper addresses the question in the title via a survey of experienced healthcare modellers and an extensive literature review. It has two objectives. 1. To compare the characteristics of 'generic' and 'specific' models and their success in hospitals for emergency patients 2. To learn lessons about the design, validation and implementation of models of flows of emergency patients through acute hospitals First the survey and some key papers lead to a proposed 'spectrum of genericity', consisting of four levels. We focus on two of these levels, distinguished from each other by their purpose. Secondly modelling work on the flow of emergency patient flows through and between A&E, Bed Management, Surgery, Intensive Care and Diagnostics is then reviewed. Finally the review is used to provide a much more comprehensive comparison of'generic' and 'specific' models, distinguishing three types of genericity and identifying 24 important features of models and the associated modelling process. Many features are common across model types, but there are also important distinctions, with implications for model development."
},
{
"pmid": "18387699",
"title": "Forecasting emergency department crowding: a discrete event simulation.",
"abstract": "STUDY OBJECTIVE\nTo develop a discrete event simulation of emergency department (ED) patient flow for the purpose of forecasting near-future operating conditions and to validate the forecasts with several measures of ED crowding.\n\n\nMETHODS\nWe developed a discrete event simulation of patient flow with evidence from the literature. Development was purely theoretical, whereas validation involved patient data from an academic ED. The model inputs and outputs, respectively, are 6-variable descriptions of every present and future patient in the ED. We validated the model by using a sliding-window design, ensuring separation of fitting and validation data in time series. We sampled consecutive 10-minute observations during 2006 (n=52,560). The outcome measures--all forecast 2, 4, 6, and 8 hours into the future from each observation--were the waiting count, waiting time, occupancy level, length of stay, boarding count, boarding time, and ambulance diversion. Forecasting performance was assessed with Pearson's correlation, residual summary statistics, and area under the receiver operating characteristic curve.\n\n\nRESULTS\nThe correlations between crowding forecasts and actual outcomes started high and decreased gradually up to 8 hours into the future (lowest Pearson's r for waiting count=0.56; waiting time=0.49; occupancy level=0.78; length of stay=0.86; boarding count=0.79; boarding time=0.80). The residual means were unbiased for all outcomes except the boarding time. The discriminatory power for ambulance diversion remained consistently high up to 8 hours into the future (lowest area under the receiver operating characteristic curve=0.86).\n\n\nCONCLUSION\nBy modeling patient flow, rather than operational summary variables, our simulation forecasts several measures of near-future ED crowding, with various degrees of good performance."
},
{
"pmid": "15114227",
"title": "Queuing theory accurately models the need for critical care resources.",
"abstract": "BACKGROUND\nAllocation of scarce resources presents an increasing challenge to hospital administrators and health policy makers. Intensive care units can present bottlenecks within busy hospitals, but their expansion is costly and difficult to gauge. Although mathematical tools have been suggested for determining the proper number of intensive care beds necessary to serve a given demand, the performance of such models has not been prospectively evaluated over significant periods.\n\n\nMETHODS\nThe authors prospectively collected 2 years' admission, discharge, and turn-away data in a busy, urban intensive care unit. Using queuing theory, they then constructed a mathematical model of patient flow, compared predictions from the model to observed performance of the unit, and explored the sensitivity of the model to changes in unit size.\n\n\nRESULTS\nThe queuing model proved to be very accurate, with predicted admission turn-away rates correlating highly with those actually observed (correlation coefficient = 0.89). The model was useful in predicting both monthly responsiveness to changing demand (mean monthly difference between observed and predicted values, 0.4+/-2.3%; range, 0-13%) and the overall 2-yr turn-away rate for the unit (21%vs. 22%). Both in practice and in simulation, turn-away rates increased exponentially when utilization exceeded 80-85%. Sensitivity analysis using the model revealed rapid and severe degradation of system performance with even the small changes in bed availability that might result from sudden staffing shortages or admission of patients with very long stays.\n\n\nCONCLUSIONS\nThe stochastic nature of patient flow may falsely lead health planners to underestimate resource needs in busy intensive care units. Although the nature of arrivals for intensive care deserves further study, when demand is random, queuing theory provides an accurate means of determining the appropriate supply of beds."
}
] |
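The generalized M/M/R/N analysis described in the related-work section of the preceding record lends itself to a short computational illustration. The sketch below (Python) evaluates the standard birth–death steady-state measures of an M/M/R/N queue — blocking probability, mean queue length and mean waiting time — and compares staffing levels R under an illustrative cost rate. Here N is read as the total number of patients allowed in the system (in service plus waiting), and the cost weights c_staff, c_wait and c_loss, as well as the example parameter values, are assumptions introduced purely for illustration; they are not the calibrated cost formulation derived in the paper.

from math import factorial

def mmrn_metrics(lam, mu, R, N):
    """Steady-state measures of an M/M/R/N queue (R servers, system capacity N >= R)."""
    a = lam / mu  # offered load in Erlangs
    # Unnormalised birth-death weights for states n = 0..N
    w = [a ** n / factorial(n) if n <= R
         else a ** n / (factorial(R) * R ** (n - R))
         for n in range(N + 1)]
    p0 = 1.0 / sum(w)
    p = [x * p0 for x in w]             # steady-state probabilities P_0..P_N
    p_block = p[N]                      # probability an arriving patient is turned away
    lam_eff = lam * (1.0 - p_block)     # effective (admitted) arrival rate
    lq = sum((n - R) * p[n] for n in range(R, N + 1))   # mean number waiting
    wq = lq / lam_eff                   # mean wait before service (Little's law)
    return {"P_block": p_block, "Lq": lq, "Wq": wq, "W": wq + 1.0 / mu}

def staffing_cost(lam, mu, R, N, c_staff, c_wait, c_loss):
    """Illustrative cost rate: staffing + waiting + turned-away penalties (assumed weights)."""
    m = mmrn_metrics(lam, mu, R, N)
    return c_staff * R + c_wait * m["Lq"] + c_loss * lam * m["P_block"]

if __name__ == "__main__":
    lam, mu, N = 10.0, 3.0, 20          # arrivals per hour, service rate per provider, system capacity
    costs = {R: staffing_cost(lam, mu, R, N, c_staff=50.0, c_wait=20.0, c_loss=100.0)
             for R in range(1, N + 1)}
    best = min(costs, key=costs.get)
    print("cost-minimising number of providers R* =", best)

Enumerating R in this way mirrors the decision-support use case — quantifying how the cost pattern responds to the staffing level — while the paper's exact expressions and cost structure would replace the assumed weights above.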
Genes | 31766738 | PMC6947459 | 10.3390/genes10120962 | Computational Inference of Gene Co-Expression Networks for the identification of Lung Carcinoma Biomarkers: An Ensemble Approach | Gene Networks (GN) have emerged as a useful tool in recent years for the analysis of different diseases in the field of biomedicine. In particular, GNs have been widely applied for the study and analysis of different types of cancer. In this context, lung carcinoma is among the most common cancer types, and its short life expectancy is partly due to late diagnosis. For this reason, lung cancer biomarkers that can be easily measured are highly demanded in biomedical research. In this work, we present an application of gene co-expression networks to the modelling of lung cancer gene regulatory networks, which ultimately led to the discovery of new biomarkers. To this end, a robust GN inference was performed from microarray data, concomitantly using three different co-expression measures. The results identified a major cluster of genes involved in SRP-dependent co-translational protein targeting to membrane, as well as a set of 28 genes that were exclusively found in networks generated from cancer samples. Amongst the potential biomarkers, the genes NCKAP1L and DMD are highlighted due to their implication in a considerable portion of lung and bronchus primary carcinomas. These findings demonstrate the potential of GN reconstruction for the rational prediction of biomarkers. | 1.1. Related Works
Co-expression networks have been extensively used in the literature for the analysis and study of cancer. For example, Aggarwal et al. [22] analyzed a consensus gene co-expression meta-network of gastric cancer, the second most common cause of cancer-related deaths in the world. The results suggest, at the single-gene level, an interaction between the PLA2G2A prognostic marker and the EphB2 receptor. Furthermore, the network analysis also enhances the understanding of gastric cancer at the levels of system topology and functional modules. In another work, Ma et al. [23] adopted weighted co-expression networks to describe the interplay among genes for cancer prognosis. In particular, the authors presented six prognostic analyses of breast cancer and lymphoma. The results showed that their approach can identify genes that differ significantly from those selected by alternative methods. Genes identified using this approach had sound biological bases, better prediction performance, and better reproducibility. In Clarke et al. [24], a weighted gene co-expression network is used to analyze breast cancer samples from microarray-based gene expression studies. Several of the identified gene clusters were found to be correlated with clinicopathological variables and with survival endpoints for breast cancer as a whole as well as for its molecular subtypes. Also in 2013, Chang et al. [25] used a weighted co-expression network to identify coexpression modules associated with malignant meningiomas, one of the most common primary adult brain tumors. The authors identified, at the transcriptome level, 23 coexpression modules from the weighted gene coexpression network. In addition, they were able to identify a module of 356 genes that was highly related to tumorigenesis. In 2014, Yang et al. [26] presented an analysis of prognostic genes based on gene co-expression networks for four cancer types using data from “The Cancer Genome Atlas”. 
The authors performed a systematic analysis of the properties of prognostic genes in the context of biological networks across multiple cancer types. The results of this work suggested that prognostic mRNA genes tend not to be hub genes (genes with an extremely high connectivity). On the contrary, the prognostic genes are enriched in modules (groups of highly interconnected genes), especially in module genes conserved across different cancer co-expression networks. In 2015, Liu et al. [27] also used a weighted co-expression network to investigate how gene interactions influence lung cancer and the roles of gene networks in lung cancer regulation. It was found that the overall expression of one of the identified modules was significantly higher in the normal group than in the lung cancer group. More recently, in 2018, Yang et al. [28] applied weighted gene co-expression network analysis (WGCNA) to investigate the intrinsic association between genomic changes and transcriptome profiling in neuroblastoma (a highly complex and heterogeneous cancer in children). The analysis identified multiple gene coexpression modules in two independent datasets that were associated with functional pathways. The results also indicated that modules involved in nervous system development and the cell cycle are highly associated with MYCN amplification and 1p deletion. Finally, Xu et al. [29] (2019) studied hepatocellular carcinoma, a very common subtype of liver cancer. The authors conducted a WGCNA to identify complex gene interactions that affect prognosis. The final results identified 10 genes that had not previously been reported in hepatocellular carcinoma and that are associated with malignant progression and patient prognosis. | [
"29408296",
"25935118",
"29207053",
"20309759",
"23226279",
"11099257",
"28751908",
"25390635",
"19150482",
"29706607",
"21296855",
"19809459",
"23740839",
"24289128",
"24488081",
"25752287",
"29247836",
"30775801",
"17334370",
"11752295",
"11507038",
"17029630",
"26785265",
"19348636",
"20501553",
"27695050",
"23638278",
"26149713",
"29659649",
"14597658",
"20950452",
"22070249",
"21123224",
"19033363",
"19237447",
"23325622",
"19131956",
"22543366",
"27653561",
"24071849",
"26561982",
"15356269",
"26957268",
"22266734",
"19572116",
"15849206",
"18775299",
"22330683",
"19752085",
"23172663",
"27432794",
"27704267",
"27391342",
"9915494",
"10582567"
] | [
{
"pmid": "29408296",
"title": "GNC-app: A new Cytoscape app to rate gene networks biological coherence using gene-gene indirect relationships.",
"abstract": "MOTIVATION\nGene networks are currently considered a powerful tool to model biological processes in the Bioinformatics field. A number of approaches to infer gene networks and various software tools to handle them in a visual simplified way have been developed recently. However, there is still a need to assess the inferred networks in order to prove their relevance.\n\n\nRESULTS\nIn this paper, we present the new GNC-app for Cytoscape. GNC-app implements the GNC methodology for assessing the biological coherence of gene association networks and integrates it into Cytoscape. Implemented de novo, GNC-app significantly improves the performance of the original algorithm in order to be able to analyse large gene networks more efficiently. It has also been integrated in Cytoscape to increase the tool accessibility for non-technical users and facilitate the visual analysis of the results. This integration allows the user to analyse not only the global biological coherence of the network, but also the biological coherence at the gene-gene relationship level. It also allows the user to leverage Cytoscape capabilities as well as its rich ecosystem of apps to perform further analyses and visualizations of the network using such data.\n\n\nAVAILABILITY\nThe GNC-app is freely available at the official Cytoscape app store: http://apps.cytoscape.org/apps/gnc."
},
{
"pmid": "25935118",
"title": "Gene network coherence based on prior knowledge using direct and indirect relationships.",
"abstract": "Gene networks (GNs) have become one of the most important approaches for modeling biological processes. They are very useful to understand the different complex biological processes that may occur in living organisms. Currently, one of the biggest challenge in any study related with GN is to assure the quality of these GNs. In this sense, recent works use artificial data sets or a direct comparison with prior biological knowledge. However, these approaches are not entirely accurate as they only take into account direct gene-gene interactions for validation, leaving aside the weak (indirect) relationships. We propose a new measure, named gene network coherence (GNC), to rate the coherence of an input network according to different biological databases. In this sense, the measure considers not only the direct gene-gene relationships but also the indirect ones to perform a complete and fairer evaluation of the input network. Hence, our approach is able to use the whole information stored in the networks. A GNC JAVA-based implementation is available at: http://fgomezvela.github.io/GNC/. The results achieved in this work show that GNC outperforms the classical approaches for assessing GNs by means of three different experiments using different biological databases and input networks. According to the results, we can conclude that the proposed measure, which considers the inherent information stored in the direct and indirect gene-gene relationships, offers a new robust solution to the problem of GNs biological validation."
},
{
"pmid": "29207053",
"title": "Diagnostic significance and potential function of miR-338-5p in hepatocellular carcinoma: A bioinformatics study with microarray and RNA sequencing data.",
"abstract": "MicroRNA (miR)-338-5p has been studied in hepatocellular carcinoma (HCC); however, the diagnostic value and molecular mechanism underlying its actions remains to be elucidated. The present study aimed to validate the diagnostic ability of miR‑338‑5p and further explore the underlying molecular mechanism. Data from eligible studies, Gene Expression Omnibus (GEO) chips and The Cancer Genome Atlas (TCGA) datasets were gathered in the data mining and the integrated meta‑analysis, to evaluate the significance of miR‑338‑5p in diagnosing HCC comprehensively. The potential target genes of miR‑338‑5p were achieved from the intersection of the deregulated targets of miR‑338‑5p from GEO and TCGA in addition to the predicted target genes from 12 online software. A protein‑protein‑interaction (PPI) network was drawn to illustrate the interaction between target genes and to define the hub genes. Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway enrichment analyses were performed to investigate the function of the target genes. From the results, miR‑338‑5p exhibited favorable value in diagnosing HCC. Types of sample and experiment were defined as the possible sources of heterogeneity in meta‑analysis. A total of 423 genes were selected as the potential target genes of miR‑338‑5p, and five genes were defined as the hub genes from the PPI network. The GO and KEGG analyses indicated that the target genes were significantly assembled in the pathways of metabolic process and cell cycle. miR‑338‑5p may function as a novel diagnostic target for HCC through regulating certain target genes and signaling pathways."
},
{
"pmid": "20309759",
"title": "Weighted gene coexpression network analysis: state of the art.",
"abstract": "Weighted gene coexpression network analysis (WGCNA) has been applied to many important studies since its introduction in 2005. WGCNA can be used as a data exploratory tool or as a gene screening method; WGCNA can also be used as a tool to generate testable hypothesis for validation in independent data sets. In this article, we review key concepts of WGCNA and some of its applications in gene expression analysis of oncology, brain function, and protein interaction data."
},
{
"pmid": "23226279",
"title": "Evaluation of gene association methods for coexpression network construction and biological knowledge discovery.",
"abstract": "BACKGROUND\nConstructing coexpression networks and performing network analysis using large-scale gene expression data sets is an effective way to uncover new biological knowledge; however, the methods used for gene association in constructing these coexpression networks have not been thoroughly evaluated. Since different methods lead to structurally different coexpression networks and provide different information, selecting the optimal gene association method is critical.\n\n\nMETHODS AND RESULTS\nIn this study, we compared eight gene association methods - Spearman rank correlation, Weighted Rank Correlation, Kendall, Hoeffding's D measure, Theil-Sen, Rank Theil-Sen, Distance Covariance, and Pearson - and focused on their true knowledge discovery rates in associating pathway genes and construction coordination networks of regulatory genes. We also examined the behaviors of different methods to microarray data with different properties, and whether the biological processes affect the efficiency of different methods.\n\n\nCONCLUSIONS\nWe found that the Spearman, Hoeffding and Kendall methods are effective in identifying coexpressed pathway genes, whereas the Theil-sen, Rank Theil-Sen, Spearman, and Weighted Rank methods perform well in identifying coordinated transcription factors that control the same biological processes and traits. Surprisingly, the widely used Pearson method is generally less efficient, and so is the Distance Covariance method that can find gene pairs of multiple relationships. Some analyses we did clearly show Pearson and Distance Covariance methods have distinct behaviors as compared to all other six methods. The efficiencies of different methods vary with the data properties to some degree and are largely contingent upon the biological processes, which necessitates the pre-analysis to identify the best performing method for gene association and coexpression network construction."
},
{
"pmid": "11099257",
"title": "Genetic network inference: from co-expression clustering to reverse engineering.",
"abstract": "Advances in molecular biological, analytical and computational technologies are enabling us to systematically investigate the complex molecular processes underlying biological systems. In particular, using high-throughput gene expression assays, we are able to measure the output of the gene regulatory network. We aim here to review datamining and modeling approaches for conceptualizing and unraveling the functional relationships implicit in these datasets. Clustering of co-expression profiles allows us to infer shared regulatory inputs and functional pathways. We discuss various aspects of clustering, ranging from distance measures to clustering algorithms and multiple-cluster memberships. More advanced analysis aims to infer causal connections between genes directly, i.e. who is regulating whom and how. We discuss several approaches to the problem of reverse engineering of genetic networks, from discrete Boolean networks, to continuous linear and non-linear models. We conclude that the combination of predictive modeling with systematic experimental verification will be required to gain a deeper insight into living organisms, therapeutic targeting and bioengineering."
},
{
"pmid": "28751908",
"title": "Quantifying Gene Regulatory Relationships with Association Measures: A Comparative Study.",
"abstract": "In this work, we provide a comparative study of the main available association measures for characterizing gene regulatory strengths. Detecting the association between genes (as well as RNAs, proteins, and other molecules) is very important to decipher their functional relationship from genomic data in bioinformatics. With the availability of more and more high-throughput datasets, the quantification of meaningful relationships by employing association measures will make great sense of the data. There are various quantitative measures have been proposed for identifying molecular associations. They are depended on different statistical assumptions, for different intentions, as well as with different computational costs in calculating the associations in thousands of genes. Here, we comprehensively summarize these association measures employed and developed for describing gene regulatory relationships. We compare these measures in their consistency and specificity of detecting gene regulations from both simulation and real gene expression profiling data. Obviously, these measures used in genes can be easily extended in other biological molecules or across them."
},
{
"pmid": "25390635",
"title": "Ensemble-based network aggregation improves the accuracy of gene network reconstruction.",
"abstract": "Reverse engineering approaches to constructing gene regulatory networks (GRNs) based on genome-wide mRNA expression data have led to significant biological findings, such as the discovery of novel drug targets. However, the reliability of the reconstructed GRNs needs to be improved. Here, we propose an ensemble-based network aggregation approach to improving the accuracy of network topologies constructed from mRNA expression data. To evaluate the performances of different approaches, we created dozens of simulated networks from combinations of gene-set sizes and sample sizes and also tested our methods on three Escherichia coli datasets. We demonstrate that the ensemble-based network aggregation approach can be used to effectively integrate GRNs constructed from different studies - producing more accurate networks. We also apply this approach to building a network from epithelial mesenchymal transition (EMT) signature microarray data and identify hub genes that might be potential drug targets. The R code used to perform all of the analyses is available in an R package entitled \"ENA\", accessible on CRAN (http://cran.r-project.org/web/packages/ENA/)."
},
{
"pmid": "19150482",
"title": "Gene regulatory network inference: data integration in dynamic models-a review.",
"abstract": "Systems biology aims to develop mathematical models of biological systems by integrating experimental and theoretical techniques. During the last decade, many systems biological approaches that base on genome-wide data have been developed to unravel the complexity of gene regulation. This review deals with the reconstruction of gene regulatory networks (GRNs) from experimental data through computational methods. Standard GRN inference methods primarily use gene expression data derived from microarrays. However, the incorporation of additional information from heterogeneous data sources, e.g. genome sequence and protein-DNA interaction data, clearly supports the network inference process. This review focuses on promising modelling approaches that use such diverse types of molecular biological information. In particular, approaches are discussed that enable the modelling of the dynamics of gene regulatory systems. The review provides an overview of common modelling schemes and learning algorithms and outlines current challenges in GRN modelling."
},
{
"pmid": "29706607",
"title": "Identifying circRNA-associated-ceRNA networks in the hippocampus of Aβ1-42-induced Alzheimer's disease-like rats using microarray analysis.",
"abstract": "Alzheimer's disease (AD) is the most common form of dementia worldwide. Accumulating evidence indicates that non-coding RNAs are strongly implicated in AD-associated pathophysiology. However, the role of these ncRNAs remains largely unknown. In the present study, we used microarray analysis technology to characterize the expression patterns of circular RNAs (circRNAs), microRNAs (miRNAs), and mRNAs in hippocampal tissue from Aβ1-42-induced AD model rats, to integrate interaction data and thus provide novel insights into the mechanisms underlying AD. A total of 555 circRNAs, 183 miRNAs and 319 mRNAs were identified to be significantly dysregulated (fold-change ≥ 2.0 and p-value < 0.05) in the hippocampus of AD rats. Quantitative real-time polymerase chain reaction (qRT-PCR) was then used to validate the expression of randomly-selected circRNAs, miRNAs and mRNAs. Next, GO and KEGG pathway analyses were performed to further investigate ncRNAs biological functions and potential mechanisms. In addition, we constructed circRNA-miRNA and competitive endogenous RNA (ceRNA) regulatory networks to determine functional interactions between ncRNAs and mRNAs. Our results suggest the involvement of different ncRNA expression patterns in the pathogenesis of AD. Our findings provide a novel perspective for further research into AD pathogenesis and might facilitate the development of novel therapeutics targeting ncRNAs."
},
{
"pmid": "21296855",
"title": "Global cancer statistics.",
"abstract": "The global burden of cancer continues to increase largely because of the aging and growth of the world population alongside an increasing adoption of cancer-causing behaviors, particularly smoking, in economically developing countries. Based on the GLOBOCAN 2008 estimates, about 12.7 million cancer cases and 7.6 million cancer deaths are estimated to have occurred in 2008; of these, 56% of the cases and 64% of the deaths occurred in the economically developing world. Breast cancer is the most frequently diagnosed cancer and the leading cause of cancer death among females, accounting for 23% of the total cancer cases and 14% of the cancer deaths. Lung cancer is the leading cancer site in males, comprising 17% of the total new cancer cases and 23% of the total cancer deaths. Breast cancer is now also the leading cause of cancer death among females in economically developing countries, a shift from the previous decade during which the most common cause of cancer death was cervical cancer. Further, the mortality burden for lung cancer among females in developing countries is as high as the burden for cervical cancer, with each accounting for 11% of the total female cancer deaths. Although overall cancer incidence rates in the developing world are half those seen in the developed world in both sexes, the overall cancer mortality rates are generally similar. Cancer survival tends to be poorer in developing countries, most likely because of a combination of a late stage at diagnosis and limited access to timely and standard treatment. A substantial proportion of the worldwide burden of cancer could be prevented through the application of existing cancer control knowledge and by implementing programs for tobacco control, vaccination (for liver and cervical cancers), and early detection and treatment, as well as public health campaigns promoting physical activity and a healthier dietary intake. Clinicians, public health professionals, and policy makers can play an active role in accelerating the application of such interventions globally."
},
{
"pmid": "19809459",
"title": "Diagnosing lung cancer in exhaled breath using gold nanoparticles.",
"abstract": "Conventional diagnostic methods for lung cancer are unsuitable for widespread screening because they are expensive and occasionally miss tumours. Gas chromatography/mass spectrometry studies have shown that several volatile organic compounds, which normally appear at levels of 1-20 ppb in healthy human breath, are elevated to levels between 10 and 100 ppb in lung cancer patients. Here we show that an array of sensors based on gold nanoparticles can rapidly distinguish the breath of lung cancer patients from the breath of healthy individuals in an atmosphere of high humidity. In combination with solid-phase microextraction, gas chromatography/mass spectrometry was used to identify 42 volatile organic compounds that represent lung cancer biomarkers. Four of these were used to train and optimize the sensors, demonstrating good agreement between patient and simulated breath samples. Our results show that sensors based on gold nanoparticles could form the basis of an inexpensive and non-invasive diagnostic tool for lung cancer."
},
{
"pmid": "23740839",
"title": "Correlating transcriptional networks to breast cancer survival: a large-scale coexpression analysis.",
"abstract": "Weighted gene coexpression network analysis (WGCNA) is a powerful 'guilt-by-association'-based method to extract coexpressed groups of genes from large heterogeneous messenger RNA expression data sets. We have utilized WGCNA to identify 11 coregulated gene clusters across 2342 breast cancer samples from 13 microarray-based gene expression studies. A number of these transcriptional modules were found to be correlated to clinicopathological variables (e.g. tumor grade), survival endpoints for breast cancer as a whole (disease-free survival, distant disease-free survival and overall survival) and also its molecular subtypes (luminal A, luminal B, HER2+ and basal-like). Examples of findings arising from this work include the identification of a cluster of proliferation-related genes that when upregulated correlated to increased tumor grade and were associated with poor survival in general. The prognostic potential of novel genes, for example, ubiquitin-conjugating enzyme E2S (UBE2S) within this group was confirmed in an independent data set. In addition, gene clusters were also associated with survival for breast cancer molecular subtypes including a cluster of genes that was found to correlate with prognosis exclusively for basal-like breast cancer. The upregulation of several single genes within this coexpression cluster, for example, the potassium channel, subfamily K, member 5 (KCNK5) was associated with poor outcome for the basal-like molecular subtype. We have developed an online database to allow user-friendly access to the coexpression patterns and the survival analysis outputs uncovered in this study (available at http://glados.ucd.ie/Coexpression/)."
},
{
"pmid": "24289128",
"title": "Genomic and transcriptome analysis revealing an oncogenic functional module in meningiomas.",
"abstract": "OBJECT\nMeningiomas are among the most common primary adult brain tumors. Although typically benign, roughly 2%-5% display malignant pathological features. The key molecular pathways involved in malignant transformation remain to be determined.\n\n\nMETHODS\nIllumina expression microarrays were used to assess gene expression levels, and Illumina single-nucleotide polymorphism arrays were used to identify copy number variants in benign, atypical, and malignant meningiomas (19 tumors, including 4 malignant ones). The authors also reanalyzed 2 expression data sets generated on Affymetrix microarrays (n = 68, including 6 malignant ones; n = 56, including 3 malignant ones). A weighted gene coexpression network approach was used to identify coexpression modules associated with malignancy.\n\n\nRESULTS\nAt the genomic level, malignant meningiomas had more chromosomal losses than atypical and benign meningiomas, with average length of 528, 203, and 34 megabases, respectively. Monosomic loss of chromosome 22 was confirmed to be one of the primary chromosomal level abnormalities in all subtypes of meningiomas. At the transcriptome level, the authors identified 23 coexpression modules from the weighted gene coexpression network. Gene functional enrichment analysis highlighted a module with 356 genes that was highly related to tumorigenesis. Four intramodular hubs within the module (GAB2, KLF2, ID1, and CTF1) were oncogenic in other cancers such as leukemia. A putative meningioma tumor suppressor MN1 was also identified in this module with differential expression between malignant and benign meningiomas.\n\n\nCONCLUSIONS\nThe authors' genomic and transcriptome analysis of meningiomas provides novel insights into the molecular pathways involved in malignant transformation of meningiomas, with implications for molecular heterogeneity of the disease."
},
{
"pmid": "24488081",
"title": "Gene co-expression network analysis reveals common system-level properties of prognostic genes across cancer types.",
"abstract": "Prognostic genes are key molecules informative for cancer prognosis and treatment. Previous studies have focused on the properties of individual prognostic genes, but have lacked a global view of their system-level properties. Here we examined their properties in gene co-expression networks for four cancer types using data from 'The Cancer Genome Atlas'. We found that prognostic mRNA genes tend not to be hub genes (genes with an extremely high connectivity), and this pattern is unique to the corresponding cancer-type-specific network. In contrast, the prognostic genes are enriched in modules (a group of highly interconnected genes), especially in module genes conserved across different cancer co-expression networks. The target genes of prognostic miRNA genes show similar patterns. We identified the modules enriched in various prognostic genes, some of which show cross-tumour conservation. Given the cancer types surveyed, our study presents a view of emergent properties of prognostic genes."
},
{
"pmid": "25752287",
"title": "Identification and validation of gene module associated with lung cancer through coexpression network analysis.",
"abstract": "Lung cancer, a tumor with heterogeneous biology, is influenced by a complex network of gene interactions. Therefore, elucidating the relationships between genes and lung cancer is critical to attain further knowledge on tumor biology. In this study, we performed weighted gene coexpression network analysis to investigate the roles of gene networks in lung cancer regulation. Gene coexpression relationships were explored in 58 samples with tumorous and matched non-tumorous lungs, and six gene modules were identified on the basis of gene coexpression patterns. The overall expression of one module was significantly higher in the normal group than in the lung cancer group. This finding was validated across six datasets (all p values <0.01). The particular module was highly enriched for genes belonging to the biological Gene Ontology category \"response to wounding\" (adjusted p value = 4.28 × 10(-10)). A lung cancer-specific hub network (LCHN) consisting of 15 genes was also derived from this module. A support vector machine based on classification model robustly separated lung cancer from adjacent normal tissues in the validation datasets (accuracy ranged from 91.7% to 98.5%) by using the LCHN gene signatures as predictors. Eight genes in the LCHN are associated with lung cancer. Overall, we identified a gene module associated with lung cancer, as well as an LCHN consisting of hub genes that may be candidate biomarkers and therapeutic targets for lung cancer. This integrated analysis of lung cancer transcriptome provides an alternative strategy for identification of potential oncogenic drivers."
},
{
"pmid": "29247836",
"title": "Coexpression network analysis identifies transcriptional modules associated with genomic alterations in neuroblastoma.",
"abstract": "Neuroblastoma is a highly complex and heterogeneous cancer in children. Acquired genomic alterations including MYCN amplification, 1p deletion and 11q deletion are important risk factors and biomarkers in neuroblastoma. Here, we performed a co-expression-based gene network analysis to study the intrinsic association between specific genomic changes and transcriptome organization. We identified multiple gene coexpression modules which are recurrent in two independent datasets and associated with functional pathways including nervous system development, cell cycle, immune system process and extracellular matrix/space. Our results also indicated that modules involved in nervous system development and cell cycle are highly associated with MYCN amplification and 1p deletion, while modules responding to immune system process are associated with MYCN amplification only. In summary, this integrated analysis provides novel insights into molecular heterogeneity and pathogenesis of neuroblastoma. This article is part of a Special Issue entitled: Accelerating Precision Medicine through Genetic and Genomic Big Data Analysis edited by Yudong Cai & Tao Huang."
},
{
"pmid": "30775801",
"title": "Prognostic genes of hepatocellular carcinoma based on gene coexpression network analysis.",
"abstract": "Hepatocellular carcinoma (HCC) is the most common subtype in liver cancer whose prognosis is affected by malignant progression associated with complex gene interactions. However, there is currently no available biomarkers associated with HCC progression in clinical application. In our study, RNA sequencing expression data of 50 normal samples and 374 tumor samples was analyzed and 9225 differentially expressed genes were screened. Weighted gene coexpression network analysis was then conducted and the blue module we were interested was identified by calculating the correlations between 17 gene modules and clinical features. In the blue module, the calculation of topological overlap was applied to select the top 30 genes and these 30 genes were divided into the green group (11 genes) and the yellow group (19 genes) through searching whether these genes were validated by in vitro or in vivo experiments. The genes in the green group which had never been validated by any experiments were recognized as hub genes. These hub genes were subsequently validated by a new data set GSE76427 and KM Plotter Online Tool, and the results indicated that 10 genes (FBXO43, ARHGEF39, MXD3, VIPR1, DNASE1L3, PHLDA1, CSRNP1, ADR2B, C1RL, and CDC37L1) could act as prognosis and progression biomarkers of HCC. In summary, 10 genes who have never been mentioned in HCC were identified to be associated with malignant progression and prognosis of patients. These findings may contribute to the improvement of the therapeutic decision, risk stratification, and prognosis prediction for HCC patients."
},
{
"pmid": "17334370",
"title": "Airway epithelial gene expression in the diagnostic evaluation of smokers with suspect lung cancer.",
"abstract": "Lung cancer is the leading cause of death from cancer in the US and the world. The high mortality rate (80-85% within 5 years) results, in part, from a lack of effective tools to diagnose the disease at an early stage. Given that cigarette smoke creates a field of injury throughout the airway, we sought to determine if gene expression in histologically normal large-airway epithelial cells obtained at bronchoscopy from smokers with suspicion of lung cancer could be used as a lung cancer biomarker. Using a training set (n = 77) and gene-expression profiles from Affymetrix HG-U133A microarrays, we identified an 80-gene biomarker that distinguishes smokers with and without lung cancer. We tested the biomarker on an independent test set (n = 52), with an accuracy of 83% (80% sensitive, 84% specific), and on an additional validation set independently obtained from five medical centers (n = 35). Our biomarker had approximately 90% sensitivity for stage 1 cancer across all subjects. Combining cytopathology of lower airway cells obtained at bronchoscopy with the biomarker yielded 95% sensitivity and a 95% negative predictive value. These findings indicate that gene expression in cytologically normal large-airway epithelial cells can serve as a lung cancer biomarker, potentially owing to a cancer-specific airway-wide response to cigarette smoke."
},
{
"pmid": "11752295",
"title": "Gene Expression Omnibus: NCBI gene expression and hybridization array data repository.",
"abstract": "The Gene Expression Omnibus (GEO) project was initiated in response to the growing demand for a public repository for high-throughput gene expression data. GEO provides a flexible and open design that facilitates submission, storage and retrieval of heterogeneous data sets from high-throughput gene expression and genomic hybridization experiments. GEO is not intended to replace in house gene expression databases that benefit from coherent data sets, and which are constructed to facilitate a particular analytic method, but rather complement these by acting as a tertiary, central data distribution hub. The three central data entities of GEO are platforms, samples and series, and were designed with gene expression and genomic hybridization experiments in mind. A platform is, essentially, a list of probes that define what set of molecules may be detected. A sample describes the set of molecules that are being probed and references a single platform used to generate its molecular abundance data. A series organizes samples into the meaningful data sets which make up an experiment. The GEO repository is publicly accessible through the World Wide Web at http://www.ncbi.nlm.nih.gov/geo."
},
{
"pmid": "11507038",
"title": "Estrogen receptor status in breast cancer is associated with remarkably distinct gene expression patterns.",
"abstract": "To investigate the phenotype associated with estrogen receptor alpha (ER) expression in breast carcinoma, gene expression profiles of 58 node-negative breast carcinomas discordant for ER status were determined using DNA microarray technology. Using artificial neural networks as well as standard hierarchical clustering techniques, the tumors could be classified according to ER status, and a list of genes which discriminate tumors according to ER status was generated. The artificial neural networks could accurately predict ER status even when excluding top discriminator genes, including ER itself. By reference to the serial analysis of gene expression database, we found that only a small proportion of the 100 most important ER discriminator genes were also regulated by estradiol in MCF-7 cells. The results provide evidence that ER+ and ER- tumors display remarkably different gene-expression phenotypes not solely explained by differences in estrogen responsiveness."
},
{
"pmid": "17029630",
"title": "Microarray analysis after RNA amplification can detect pronounced differences in gene expression using limma.",
"abstract": "BACKGROUND\nRNA amplification is necessary for profiling gene expression from small tissue samples. Previous studies have shown that the T7 based amplification techniques are reproducible but may distort the true abundance of targets. However, the consequences of such distortions on the ability to detect biological variation in expression have not been explored sufficiently to define the true extent of usability and limitations of such amplification techniques.\n\n\nRESULTS\nWe show that expression ratios are occasionally distorted by amplification using the Affymetrix small sample protocol version 2 due to a disproportional shift in intensity across biological samples. This occurs when a shift in one sample cannot be reflected in the other sample because the intensity would lie outside the dynamic range of the scanner. Interestingly, such distortions most commonly result in smaller ratios with the consequence of reducing the statistical significance of the ratios. This becomes more critical for less pronounced ratios where the evidence for differential expression is not strong. Indeed, statistical analysis by limma suggests that up to 87% of the genes with the largest and therefore most significant ratios (p < 10e(-20)) in the unamplified group have a p-value below 10e(-20) in the amplified group. On the other hand, only 69% of the more moderate ratios (10e(-20) < p < 10e(-10)) in the unamplified group have a p-value below 10e(-10) in the amplified group. Our analysis also suggests that, overall, limma shows better overlap of genes found to be significant in the amplified and unamplified groups than the Z-scores statistics.\n\n\nCONCLUSION\nWe conclude that microarray analysis of amplified samples performs best at detecting differences in gene expression, when these are large and when limma statistics are used."
},
{
"pmid": "26785265",
"title": "Cell and Microvesicle Urine microRNA Deep Sequencing Profiles from Healthy Individuals: Observations with Potential Impact on Biomarker Studies.",
"abstract": "BACKGROUND\nUrine is a potential source of biomarkers for diseases of the kidneys and urinary tract. RNA, including microRNA, is present in the urine enclosed in detached cells or in extracellular vesicles (EVs) or bound and protected by extracellular proteins. Detection of cell- and disease-specific microRNA in urine may aid early diagnosis of organ-specific pathology. In this study, we applied barcoded deep sequencing to profile microRNAs in urine of healthy volunteers, and characterized the effects of sex, urine fraction (cells vs. EVs) and repeated voids by the same individuals.\n\n\nRESULTS\nCompared to urine-cell-derived small RNA libraries, urine-EV-derived libraries were relatively enriched with miRNA, and accordingly had lesser content of other small RNA such as rRNA, tRNA and sn/snoRNA. Unsupervised clustering of specimens in relation to miRNA expression levels showed prominent bundling by specimen type (urine cells or EVs) and by sex, as well as a tendency of repeated (first and second void) samples to neighbor closely. Likewise, miRNA profile correlations between void repeats, as well as fraction counterparts (cells and EVs from the same specimen) were distinctly higher than correlations between miRNA profiles overall. Differential miRNA expression by sex was similar in cells and EVs.\n\n\nCONCLUSIONS\nmiRNA profiling of both urine EVs and sediment cells can convey biologically important differences between individuals. However, to be useful as urine biomarkers, careful consideration is needed for biofluid fractionation and sex-specific analysis, while the time of voiding appears to be less important."
},
{
"pmid": "19348636",
"title": "Combining multiple results of a reverse-engineering algorithm: application to the DREAM five-gene network challenge.",
"abstract": "The output of reverse-engineering methods for biological networks is often not a single network prediction, but an ensemble of networks that are consistent with the experimentally measured data. In this paper, we consider the problem of combining the information contained within such an ensemble in order to (1) make more accurate network predictions and (2) estimate the reliability of these predictions. We review existing methods, discuss their limitations, and point out possible research directions toward more advanced methods for this purpose. The potential of considering ensembles of networks, rather than individual inferred networks, is demonstrated by showing how an ensemble voting method achieved winning performance on the Five-Gene Network Challenge of the second DREAM conference (Dialogue on Reverse Engineering Assessments and Methods 2007, New York, NY)."
},
{
"pmid": "20501553",
"title": "Revealing differences in gene network inference algorithms on the network level by ensemble methods.",
"abstract": "MOTIVATION\nThe inference of regulatory networks from large-scale expression data holds great promise because of the potentially causal interpretation of these networks. However, due to the difficulty to establish reliable methods based on observational data there is so far only incomplete knowledge about possibilities and limitations of such inference methods in this context.\n\n\nRESULTS\nIn this article, we conduct a statistical analysis investigating differences and similarities of four network inference algorithms, ARACNE, CLR, MRNET and RN, with respect to local network-based measures. We employ ensemble methods allowing to assess the inferability down to the level of individual edges. Our analysis reveals the bias of these inference methods with respect to the inference of various network components and, hence, provides guidance in the interpretation of inferred regulatory networks from expression data. Further, as application we predict the total number of regulatory interactions in human B cells and hypothesize about the role of Myc and its targets regarding molecular information processing."
},
{
"pmid": "27695050",
"title": "Graphlet Based Metrics for the Comparison of Gene Regulatory Networks.",
"abstract": "Understanding the control of gene expression remains one of the main challenges in the post-genomic era. Accordingly, a plethora of methods exists to identify variations in gene expression levels. These variations underlay almost all relevant biological phenomena, including disease and adaptation to environmental conditions. However, computational tools to identify how regulation changes are scarce. Regulation of gene expression is usually depicted in the form of a gene regulatory network (GRN). Structural changes in a GRN over time and conditions represent variations in the regulation of gene expression. Like other biological networks, GRNs are composed of basic building blocks called graphlets. As a consequence, two new metrics based on graphlets are proposed in this work: REConstruction Rate (REC) and REC Graphlet Degree (RGD). REC determines the rate of graphlet similarity between different states of a network and RGD identifies the subset of nodes with the highest topological variation. In other words, RGD discerns how th GRN was rewired. REC and RGD were used to compare the local structure of nodes in condition-specific GRNs obtained from gene expression data of Escherichia coli, forming biofilms and cultured in suspension. According to our results, most of the network local structure remains unaltered in the two compared conditions. Nevertheless, changes reported by RGD necessarily imply that a different cohort of regulators (i.e. transcription factors (TFs)) appear on the scene, shedding light on how the regulation of gene expression occurs when E. coli transits from suspension to biofilm. Consequently, we propose that both metrics REC and RGD should be adopted as a quantitative approach to conduct differential analyses of GRNs. A tool that implements both metrics is available as an on-line web server (http://dlab.cl/loto)."
},
{
"pmid": "23638278",
"title": "Statistics corner: A guide to appropriate use of correlation coefficient in medical research.",
"abstract": "Correlation is a statistical method used to assess a possible linear association between two continuous variables. It is simple both to calculate and to interpret. However, misuse of correlation is so common among researchers that some statisticians have wished that the method had never been devised at all. The aim of this article is to provide a guide to appropriate use of correlation in medical research and to highlight some misuse. Examples of the applications of the correlation coefficient have been provided using data from statistical simulations as well as real data. Rule of thumb for interpreting size of a correlation coefficient has been provided."
},
{
"pmid": "26149713",
"title": "Systems biology and gene networks in neurodevelopmental and neurodegenerative disorders.",
"abstract": "Genetic and genomic approaches have implicated hundreds of genetic loci in neurodevelopmental disorders and neurodegeneration, but mechanistic understanding continues to lag behind the pace of gene discovery. Understanding the role of specific genetic variants in the brain involves dissecting a functional hierarchy that encompasses molecular pathways, diverse cell types, neural circuits and, ultimately, cognition and behaviour. With a focus on transcriptomics, this Review discusses how high-throughput molecular, integrative and network approaches inform disease biology by placing human genetics in a molecular systems and neurobiological context. We provide a framework for interpreting network biology studies and leveraging big genomics data sets in neurobiology."
},
{
"pmid": "29659649",
"title": "Systems analysis of the genetic interaction network of yeast molecular chaperones.",
"abstract": "Molecular chaperones are typically promiscuous interacting proteins that function globally in the cell to maintain protein homeostasis. Recently, we had carried out experiments that elucidated a comprehensive interaction network for the core 67 chaperones and 15 cochaperones in the budding yeast Saccharomyces cerevisiae [Rizzolo et al., Cell Rep., 2017, 20, 2735-2748]. Here, the genetic (i.e. epistatic) interaction network obtained for chaperones was further analyzed, revealing that the global topological parameters of the resulting network have a more central role in mediating interactions in comparison to the rest of the proteins in the cell. Most notably, we observed Hsp10, Hsp70 Ssz1 chaperone, and Hsp90 cochaperone Cdc37 to be the main drivers of the network architecture. Systematic analysis on the physicochemical properties for all chaperone interactors further revealed the presence of preferential domains and folds that are highly interactive with chaperones such as the WD40 repeat domain. Further analysis with established cellular complexes revealed the involvement of R2TP chaperone in quaternary structure formation. Our results thus provide a global overview of the chaperone network properties in yeast, expanding our understanding of their functional diversity and their role in protein homeostasis."
},
{
"pmid": "14597658",
"title": "Cytoscape: a software environment for integrated models of biomolecular interaction networks.",
"abstract": "Cytoscape is an open source software project for integrating biomolecular interaction networks with high-throughput expression data and other molecular states into a unified conceptual framework. Although applicable to any system of molecular components and interactions, Cytoscape is most powerful when used in conjunction with large databases of protein-protein, protein-DNA, and genetic interactions that are increasingly available for humans and model organisms. Cytoscape's software Core provides basic functionality to layout and query the network; to visually integrate the network with expression profiles, phenotypes, and other molecular states; and to link the network to databases of functional annotations. The Core is extensible through a straightforward plug-in architecture, allowing rapid development of additional computational analyses and features. Several case studies of Cytoscape plug-ins are surveyed, including a search for interaction pathways correlating with changes in gene expression, a study of protein complexes involved in cellular recovery to DNA damage, inference of a combined physical/functional interaction network for Halobacterium, and an interface to detailed stochastic/kinetic gene regulatory models."
},
{
"pmid": "20950452",
"title": "Inferring gene regression networks with model trees.",
"abstract": "BACKGROUND\nNovel strategies are required in order to handle the huge amount of data produced by microarray technologies. To infer gene regulatory networks, the first step is to find direct regulatory relationships between genes building the so-called gene co-expression networks. They are typically generated using correlation statistics as pairwise similarity measures. Correlation-based methods are very useful in order to determine whether two genes have a strong global similarity but do not detect local similarities.\n\n\nRESULTS\nWe propose model trees as a method to identify gene interaction networks. While correlation-based methods analyze each pair of genes, in our approach we generate a single regression tree for each gene from the remaining genes. Finally, a graph from all the relationships among output and input genes is built taking into account whether the pair of genes is statistically significant. For this reason we apply a statistical procedure to control the false discovery rate. The performance of our approach, named REGNET, is experimentally tested on two well-known data sets: Saccharomyces Cerevisiae and E.coli data set. First, the biological coherence of the results are tested. Second the E.coli transcriptional network (in the Regulon database) is used as control to compare the results to that of a correlation-based method. This experiment shows that REGNET performs more accurately at detecting true gene associations than the Pearson and Spearman zeroth and first-order correlation-based methods.\n\n\nCONCLUSIONS\nREGNET generates gene association networks from gene expression data, and differs from correlation-based methods in that the relationship between one gene and others is calculated simultaneously. Model trees are very useful techniques to estimate the numerical values for the target genes by linear regression functions. They are very often more precise than linear regression models because they can add just different linear regressions to separate areas of the search space favoring to infer localized similarities over a more global similarity. Furthermore, experimental results show the good performance of REGNET."
},
{
"pmid": "22070249",
"title": "clusterMaker: a multi-algorithm clustering plugin for Cytoscape.",
"abstract": "BACKGROUND\nIn the post-genomic era, the rapid increase in high-throughput data calls for computational tools capable of integrating data of diverse types and facilitating recognition of biologically meaningful patterns within them. For example, protein-protein interaction data sets have been clustered to identify stable complexes, but scientists lack easily accessible tools to facilitate combined analyses of multiple data sets from different types of experiments. Here we present clusterMaker, a Cytoscape plugin that implements several clustering algorithms and provides network, dendrogram, and heat map views of the results. The Cytoscape network is linked to all of the other views, so that a selection in one is immediately reflected in the others. clusterMaker is the first Cytoscape plugin to implement such a wide variety of clustering algorithms and visualizations, including the only implementations of hierarchical clustering, dendrogram plus heat map visualization (tree view), k-means, k-medoid, SCPS, AutoSOME, and native (Java) MCL.\n\n\nRESULTS\nResults are presented in the form of three scenarios of use: analysis of protein expression data using a recently published mouse interactome and a mouse microarray data set of nearly one hundred diverse cell/tissue types; the identification of protein complexes in the yeast Saccharomyces cerevisiae; and the cluster analysis of the vicinal oxygen chelate (VOC) enzyme superfamily. For scenario one, we explore functionally enriched mouse interactomes specific to particular cellular phenotypes and apply fuzzy clustering. For scenario two, we explore the prefoldin complex in detail using both physical and genetic interaction clusters. For scenario three, we explore the possible annotation of a protein as a methylmalonyl-CoA epimerase within the VOC superfamily. Cytoscape session files for all three scenarios are provided in the Additional Files section.\n\n\nCONCLUSIONS\nThe Cytoscape plugin clusterMaker provides a number of clustering algorithms and visualizations that can be used independently or in combination for analysis and visualization of biological data sets, and for confirming or generating hypotheses about biological function. Several of these visualizations and algorithms are only available to Cytoscape users through the clusterMaker plugin. clusterMaker is available via the Cytoscape plugin manager."
},
{
"pmid": "21123224",
"title": "GLay: community structure analysis of biological networks.",
"abstract": "SUMMARY\nGLay provides Cytoscape users an assorted collection of versatile community structure algorithms and graph layout functions for network clustering and structured visualization. High performance is achieved by dynamically linking highly optimized C functions to the Cytoscape JAVA program, which makes GLay especially suitable for decomposition, display and exploratory analysis of large biological networks.\n\n\nAVAILABILITY\nhttp://brainarray.mbni.med.umich.edu/glay/."
},
{
"pmid": "19033363",
"title": "Bioinformatics enrichment tools: paths toward the comprehensive functional analysis of large gene lists.",
"abstract": "Functional analysis of large gene lists, derived in most cases from emerging high-throughput genomic, proteomic and bioinformatics scanning approaches, is still a challenging and daunting task. The gene-annotation enrichment analysis is a promising high-throughput strategy that increases the likelihood for investigators to identify biological processes most pertinent to their study. Approximately 68 bioinformatics enrichment tools that are currently available in the community are collected in this survey. Tools are uniquely categorized into three major classes, according to their underlying enrichment algorithms. The comprehensive collections, unique tool classifications and associated questions/issues will provide a more comprehensive and up-to-date view regarding the advantages, pitfalls and recent trends in a simpler tool-class level rather than by a tool-by-tool approach. Thus, the survey will help tool designers/developers and experienced end users understand the underlying algorithms and pertinent details of particular tool categories/tools, enabling them to make the best choices for their particular research interests."
},
{
"pmid": "19237447",
"title": "ClueGO: a Cytoscape plug-in to decipher functionally grouped gene ontology and pathway annotation networks.",
"abstract": "We have developed ClueGO, an easy to use Cytoscape plug-in that strongly improves biological interpretation of large lists of genes. ClueGO integrates Gene Ontology (GO) terms as well as KEGG/BioCarta pathways and creates a functionally organized GO/pathway term network. It can analyze one or compare two lists of genes and comprehensively visualizes functionally grouped terms. A one-click update option allows ClueGO to automatically download the most recent GO/KEGG release at any time. ClueGO provides an intuitive representation of the analysis results and can be optionally used in conjunction with the GOlorize plug-in."
},
{
"pmid": "23325622",
"title": "CluePedia Cytoscape plugin: pathway insights using integrated experimental and in silico data.",
"abstract": "SUMMARY\nThe CluePedia Cytoscape plugin is a search tool for new markers potentially associated to pathways. CluePedia calculates linear and non-linear statistical dependencies from experimental data. Genes, proteins and miRNAs can be connected based on in silico and/or experimental information and integrated into a ClueGO network of terms/pathways. Interrelations within each pathway can be investigated, and new potential associations may be revealed through gene/protein/miRNA enrichments. A pathway-like visualization can be created using the Cerebral plugin layout. Combining all these features is essential for data interpretation and the generation of new hypotheses. The CluePedia Cytoscape plugin is user-friendly and has an expressive and intuitive visualization.\n\n\nAVAILABILITY\nhttp://www.ici.upmc.fr/cluepedia/ and via the Cytoscape plugin manager. The user manual is available at the CluePedia website."
},
{
"pmid": "19131956",
"title": "Systematic and integrative analysis of large gene lists using DAVID bioinformatics resources.",
"abstract": "DAVID bioinformatics resources consists of an integrated biological knowledgebase and analytic tools aimed at systematically extracting biological meaning from large gene/protein lists. This protocol explains how to use DAVID, a high-throughput and integrated data-mining environment, to analyze gene lists derived from high-throughput genomic experiments. The procedure first requires uploading a gene list containing any number of common gene identifiers followed by analysis using one or more text and pathway-mining tools such as gene functional classification, functional annotation chart or clustering and functional annotation table. By following this protocol, investigators are able to gain an in-depth understanding of the biological themes in lists of genes that are enriched in genome-scale studies."
},
{
"pmid": "22543366",
"title": "DAVID-WS: a stateful web service to facilitate gene/protein list analysis.",
"abstract": "SUMMARY\nThe database for annotation, visualization and integrated discovery (DAVID), which can be freely accessed at http://david.abcc.ncifcrf.gov/, is a web-based online bioinformatics resource that aims to provide tools for the functional interpretation of large lists of genes/proteins. It has been used by researchers from more than 5000 institutes worldwide, with a daily submission rate of ∼1200 gene lists from ∼400 unique researchers, and has been cited by more than 6000 scientific publications. However, the current web interface does not support programmatic access to DAVID, and the uniform resource locator (URL)-based application programming interface (API) has a limit on URL size and is stateless in nature as it uses URL request and response messages to communicate with the server, without keeping any state-related details. DAVID-WS (web service) has been developed to automate user tasks by providing stateful web services to access DAVID programmatically without the need for human interactions.\n\n\nAVAILABILITY\nThe web service and sample clients (written in Java, Perl, Python and Matlab) are made freely available under the DAVID License at http://david.abcc.ncifcrf.gov/content.jsp?file=WS.html."
},
{
"pmid": "24071849",
"title": "The Cancer Genome Atlas Pan-Cancer analysis project.",
"abstract": "The Cancer Genome Atlas (TCGA) Research Network has profiled and analyzed large numbers of human tumors to discover molecular aberrations at the DNA, RNA, protein and epigenetic levels. The resulting rich data provide a major opportunity to develop an integrated picture of commonalities, differences and emergent themes across tumor lineages. The Pan-Cancer initiative compares the first 12 tumor types profiled by TCGA. Analysis of the molecular aberrations and their functional roles across tumor types will teach us how to extend therapies effective in one cancer type to others with a similar genomic profile."
},
{
"pmid": "26561982",
"title": "Signal recognition particle immunoglobulin g detected incidentally associates with autoimmune myopathy.",
"abstract": "INTRODUCTION\nParaneoplastic autoantibody screening of 150,000 patient sera by tissue-based immunofluorescence incidentally revealed 170 with unsuspected signal recognition particle (SRP) immunoglobulin G (IgG), which is a recognized biomarker of autoimmune myopathy. Of the 77 patients with available information, 54 had myopathy. We describe the clinical/laboratory associations.\n\n\nMETHODS\nDistinctive cytoplasm-binding IgG (mouse tissue substrate) prompted western blot, enzyme-linked immunoassay, and immunoprecipitation analyses. Available histories were reviewed.\n\n\nRESULTS\nThe immunostaining pattern resembled rough endoplasmic reticulum, and mimicked Purkinje-cell cytoplasmic antibody type 1 IgG/anti-Yo. Immunoblotting revealed ribonucleoprotein reactivity. Recombinant antigens confirmed the following: SRP54 IgG specificity alone (17); SRP72 IgG specificity alone (3); both (32); or neither (2). Coexisting neural autoantibodies were identified in 28% (low titer). Electromyography revealed myopathy with fibrillation potentials; 78% of biopsies had active necrotizing myopathy with minimal inflammation, and 17% had inflammatory myopathy. Immunotherapy responsiveness was typically slow and incomplete, and relapses were frequent on withdrawal. Histologically confirmed cancers (17%) were primarily breast and hematologic, with some others.\n\n\nCONCLUSIONS\nAutoimmune necrotizing SRP myopathy, both idiopathic and paraneoplastic, is underdiagnosed in neurological practice. Serological screening aids early diagnosis. Cancer surveillance and appropriate immunosuppressant therapy may improve outcome. Muscle Nerve 53: 925-932, 2016."
},
{
"pmid": "15356269",
"title": "Differential regulation of the TRAIL death receptors DR4 and DR5 by the signal recognition particle.",
"abstract": "TRAIL (TNF-related apoptosis-inducing ligand) death receptors DR4 and DR5 facilitate the selective elimination of malignant cells through the induction of apoptosis. From previous studies the regulation of the DR4 and DR5 cell-death pathways appeared similar; nevertheless in this study we screened a library of small interfering RNA (siRNA) for genes, which when silenced, differentially affect DR4- vs. DR5-mediated apoptosis. These experiments revealed that expression of the signal recognition particle (SRP) complex is essential for apoptosis mediated by DR4, but not DR5. Selective diminution of SRP subunits by RNA interference resulted in a dramatic decrease in cell surface DR4 receptors that correlated with inhibition of DR4-dependent cell death. Conversely, SRP silencing had little influence on cell surface DR5 levels or DR5-mediated apoptosis. Although loss of SRP function in bacteria, yeast and protozoan parasites causes lethality or severe growth defects, we observed no overt phenotypes in the human cancer cells studied--even in stable cell lines with diminished expression of SRP components. The lack of severe phenotype after SRP depletion allowed us to delineate, for the first time, a mechanism for the differential regulation of the TRAIL death receptors DR4 and DR5--implicating the SRP complex as an essential component of the DR4 cell-death pathway."
},
{
"pmid": "26957268",
"title": "Identification of key genes involved in HER2-positive breast cancer.",
"abstract": "OBJECTIVE\nAs an invasive cancer, breast cancer is the most common tumour in women and is with high mortality. To study the mechanisms of HER2-positive breast cancer, we analyzed microarray of GSE52194.\n\n\nMATERIALS AND METHODS\nGSE52194 was downloaded from Gene Expression Omnibus including 5 HER2-positive breast cancer samples and 3 normal breast samples. Using cuffdiff software, differentially expressed genes (DEGs) and differentially expressed long non-coding RNAs (DE-lncRNAs) were screened. Functions of the DEGs were analyzed by Gene Ontology (GO) and pathway enrichment analyses. Then, protein-protein interaction (PPI) network of the DEGs was constructed using Cytoscape and modules of the PPI network were screened by CFinder. Moreover, lncRNA-DEG pairs were screened.\n\n\nRESULTS\nTotal 209 lncRNA transcriptions were predicted, and 996 differentially expressed transcriptions were screened. Besides, FOS had interaction relationships with EGR1 and SOD2 separately in module E and F of the PPI network for the DEGs. Moreover, there were many lncRNA-DEG pairs (e.g. TCONS_00003876-EGR1, TCONS_00003876-FOS, lnc-HOXC4-3:1-FOS, lnc-HOXC4-3:1-BCL6B, lnc-TEAD4-1:1-FOS and lnc-TEAD4-1:1-BCL6B), meanwhile, co-expressed DEGs of TCONS_00003876, lnc-HOXC4-3:1 and lnc-TEAD4-1:1 were enriched in p53 signaling pathway, MAPK signaling pathway and cancer-related pathways, respectively.\n\n\nCONCLUSIONS\nANXA1, EGR1, BCL6, SOD2, FOS, TCONS_00003876, lnc-HOXC4-3:1 and lnc-TEAD4-1:1 might play a role in HER2-positive breast cancer."
},
{
"pmid": "22266734",
"title": "Mortality after incident cancer in people with and without type 2 diabetes: impact of metformin on survival.",
"abstract": "OBJECTIVE\nType 2 diabetes is associated with an increased risk of several types of cancer and with reduced survival after cancer diagnosis. We examined the hypotheses that survival after a diagnosis of solid-tumor cancer is reduced in those with diabetes when compared with those without diabetes, and that treatment with metformin influences survival after cancer diagnosis.\n\n\nRESEARCH DESIGN AND METHODS\nData were obtained from >350 U.K. primary care practices in a retrospective cohort study. All individuals with or without diabetes who developed a first tumor after January 1990 were identified and records were followed to December 2009. Diabetes was further stratified by treatment regimen. Cox proportional hazards models were used to compare all-cause mortality from all cancers and from specific cancers.\n\n\nRESULTS\nOf 112,408 eligible individuals, 8,392 (7.5%) had type 2 diabetes. Cancer mortality was increased in those with diabetes, compared with those without (hazard ratio 1.09 [95% CI 1.06-1.13]). Mortality was increased in those with breast (1.32 [1.17-1.49]) and prostate cancer (1.19 [1.08-1.31]) but decreased in lung cancer (0.84 [0.77-0.92]). When analyzed by diabetes therapy, mortality was increased relative to nondiabetes in those on monotherapy with sulfonylureas (1.13 [1.05-1.21]) or insulin (1.13 [1.01-1.27]) but reduced in those on metformin monotherapy (0.85 [0.78-0.93]).\n\n\nCONCLUSIONS\nThis study confirmed that type 2 diabetes was associated with poorer prognosis after incident cancer, but that the association varied according to diabetes therapy and cancer site. Metformin was associated with survival benefit both in comparison with other treatments for diabetes and in comparison with a nondiabetic population."
},
{
"pmid": "19572116",
"title": "The influence of glucose-lowering therapies on cancer risk in type 2 diabetes.",
"abstract": "AIMS/HYPOTHESIS\nThe risk of developing a range of solid tumours is increased in type 2 diabetes, and may be influenced by glucose-lowering therapies. We examined the risk of development of solid tumours in relation to treatment with oral agents, human insulin and insulin analogues.\n\n\nMETHODS\nThis was a retrospective cohort study of people treated in UK general practices. Those included in the analysis developed diabetes >40 years of age, and started treatment with oral agents or insulin after 2000. A total of 62,809 patients were divided into four groups according to whether they received monotherapy with metformin or sulfonylurea, combined therapy (metformin plus sulfonylurea), or insulin. Insulin users were grouped according to treatment with insulin glargine, long-acting human insulin, biphasic analogue and human biphasic insulin. The outcome measures were progression to any solid tumour, or cancer of the breast, colon, pancreas or prostate. Confounding factors were accounted for using Cox proportional hazards models.\n\n\nRESULTS\nMetformin monotherapy carried the lowest risk of cancer. In comparison, the adjusted HR was 1.08 (95% CI 0.96-1.21) for metformin plus sulfonylurea, 1.36 (95% CI 1.19-1.54) for sulfonylurea monotherapy, and 1.42 (95% CI 1.27-1.60) for insulin-based regimens. Adding metformin to insulin reduced progression to cancer (HR 0.54, 95% CI 0.43-0.66). The risk for those on basal human insulin alone vs insulin glargine alone was 1.24 (95% CI 0.90-1.70). Compared with metformin, insulin therapy increased the risk of colorectal (HR 1.69, 95% CI 1.23-2.33) or pancreatic cancer (HR 4.63, 95% CI 2.64-8.10), but did not influence the risk of breast or prostate cancer. Sulfonylureas were associated with a similar pattern of risk as insulin.\n\n\nCONCLUSIONS/INTERPRETATION\nThose on insulin or insulin secretagogues were more likely to develop solid cancers than those on metformin, and combination with metformin abolished most of this excess risk. Metformin use was associated with lower risk of cancer of the colon or pancreas, but did not affect the risk of breast or prostate cancer. Use of insulin analogues was not associated with increased cancer risk as compared with human insulin."
},
{
"pmid": "18775299",
"title": "Cancer cell metabolism: Warburg and beyond.",
"abstract": "Described decades ago, the Warburg effect of aerobic glycolysis is a key metabolic hallmark of cancer, yet its significance remains unclear. In this Essay, we re-examine the Warburg effect and establish a framework for understanding its contribution to the altered metabolism of cancer cells."
},
{
"pmid": "22330683",
"title": "Targeting glucose metabolism for cancer therapy.",
"abstract": "Cellular transformation is associated with the reprogramming of cellular pathways that control proliferation, survival, and metabolism. Among the metabolic changes exhibited by tumor cells is an increase in glucose metabolism and glucose dependence. It has been hypothesized that targeting glucose metabolism may provide a selective mechanism by which to kill cancer cells. In this minireview, we discuss the benefits that high levels of glycolysis provide for tumor cells, as well as several key enzymes required by cancer cells to maintain this high level of glucose metabolism. It is anticipated that understanding which metabolic enzymes are particularly critical for tumor cell proliferation and survival will identify novel therapeutic targets."
},
{
"pmid": "19752085",
"title": "Metformin selectively targets cancer stem cells, and acts together with chemotherapy to block tumor growth and prolong remission.",
"abstract": "The cancer stem cell hypothesis suggests that, unlike most cancer cells within a tumor, cancer stem cells resist chemotherapeutic drugs and can regenerate the various cell types in the tumor, thereby causing relapse of the disease. Thus, drugs that selectively target cancer stem cells offer great promise for cancer treatment, particularly in combination with chemotherapy. Here, we show that low doses of metformin, a standard drug for diabetes, inhibits cellular transformation and selectively kills cancer stem cells in four genetically different types of breast cancer. The combination of metformin and a well-defined chemotherapeutic agent, doxorubicin, kills both cancer stem cells and non-stem cancer cells in culture. Furthermore, this combinatorial therapy reduces tumor mass and prevents relapse much more effectively than either drug alone in a xenograft mouse model. Mice seem to remain tumor-free for at least 2 months after combinatorial therapy with metformin and doxorubicin is ended. These results provide further evidence supporting the cancer stem cell hypothesis, and they provide a rationale and experimental basis for using the combination of metformin and chemotherapeutic drugs to improve treatment of patients with breast (and possibly other) cancers."
},
{
"pmid": "23172663",
"title": "Genome-wide CpG island methylation analyses in non-small cell lung cancer patients.",
"abstract": "DNA methylation is part of the epigenetic gene regulation complex, which is relevant for the pathogenesis of cancer. We performed a genome-wide search for methylated CpG islands in tumors and corresponding non-malignant lung tissue samples of 101 stages I-III non-small cell lung cancer (NSCLC) patients by combining methylated DNA immunoprecipitation and microarray analysis. Overall, we identified 2414 genomic positions differentially methylated between tumor and non-malignant lung tissue samples. Ninety-seven percent of them were found to be tumor-specifically methylated. Annotation of these genomic positions resulted in the identification of 477 tumor-specifically methylated genes of which many are involved in regulation of gene transcription and cell adhesion. Tumor-specific methylation was confirmed by a gene-specific approach. In the majority of tumors, methylation of certain genes was associated with loss of their protein expression determined by immunohistochemistry. Treatment of NSCLC cells with epigenetically active drugs resulted in upregulated expression of many tumor-specifically methylated genes analyzed by gene expression microarrays suggesting that about one-third of these genes are transcriptionally regulated by methylation. Moreover, comparison of methylation results with certain clinicopathological characteristics of the patients suggests that methylation of HOXA2 and HOXA10 may be of prognostic relevance in squamous cell carcinoma (SCC) patients. In conclusion, we identified a large number of tumor-specifically methylated genes in NSCLC patients. Expression of many of them is regulated by methylation. Moreover, HOXA2 and HOXA10 methylation may serve as prognostic parameters in SCC patients. Overall, our findings emphasize the impact of methylation on the pathogenesis of NSCLCs."
},
{
"pmid": "27432794",
"title": "The WASF3-NCKAP1-CYFIP1 Complex Is Essential for Breast Cancer Metastasis.",
"abstract": "Inactivation of the WASF3 gene suppresses invasion and metastasis of breast cancer cells. WASF3 function is regulated through a protein complex that includes the NCKAP1 and CYFIP1 proteins. Here, we report that silencing NCKAP1 destabilizes the WASF3 complex, resulting in a suppression of the invasive capacity of breast, prostate, and colon cancer cells. In an in vivo model of spontaneous metastasis in immunocompromized mice, loss of NCKAP1 also suppresses metastasis. Activation of the WASF protein complex occurs through interaction with RAC1, and inactivation of NCKAP1 prevents the association of RAC1 with the WASF3 complex. Thus, WASF3 depends on NCKAP1 to promote invasion and metastasis. Here, we show that stapled peptides targeting the interface between NCKAP1 and CYFIP1 destabilize the WASF3 complex and suppress RAC1 binding, thereby suppressing invasion. Using a complex-disrupting compound identified in this study termed WANT3, our results offer a mechanistic proof of concept to target this interaction as a novel approach to inhibit breast cancer metastasis. Cancer Res; 76(17); 5133-42. ©2016 AACR."
},
{
"pmid": "27704267",
"title": "MicroRNA-34c-3p promotes cell proliferation and invasion in hepatocellular carcinoma by regulation of NCKAP1 expression.",
"abstract": "PURPOSE\nOur previous miRNA profiling study indicated that microRNA-34c-3p (miR-34c-3p) was overexpressed and associated with survival in HCC. This study is aimed to confirm its clinical significance and explore the function and underlying mechanism of miR-34c-3p in HCC.\n\n\nMETHODS\nWe first evaluated miR-34c-3p expression and its relationship with prognosis in HCC patients. We then established stable HCC cell lines with miR-34c-3p overexpression and knockdown by the lentiviral packaging systems and performed the functional assays in vitro and in vivo, respectively. We next identified the target of miR-34c-3p by using microRNA target databases and dual-luciferase assay. Finally, the correlation between the expression of miR-34c-3p and the target gene was analyzed by immunohistochemistry and qRT-PCR in HCC tissues and hepatoma xenografts.\n\n\nRESULTS\nOverexpressed miR-34c-3p was confirmed in HCC tissues and significantly associated with poor survival of HCC patients. miR-34c-3p expression was also recognized as an independent risk factor for DFS and OS in multivariate analysis. Ectopic expression of miR-34c-3p significantly promotes the proliferation, colony formation, invasion and cell cycle regression of HCC cell lines. Knockdown of miR-34c-3p remarkably blocked hepatoma growth in the xenograft model. miRNA target databases and luciferase reporter assay showed that NCKAP1 was a direct target of miR-34c-3p in HCC cells and the high expression of NCKAP1 in HCC tissues is significantly correlated with low expression of miR-34c-3p and associated with a favorable prognosis of HCC patients.\n\n\nCONCLUSION\nThe current study demonstrates that miR-34c-3p functions as a tumor promoter by targeting NCKAP1 that is associated with prognosis in HCC. miR-34c-3p and NCKAP1 may be new potential molecular targets for HCC therapy."
},
{
"pmid": "27391342",
"title": "Non-myogenic tumors display altered expression of dystrophin (DMD) and a high frequency of genetic alterations.",
"abstract": "DMD gene mutations have been associated with the development of Dystrophinopathies. Interestingly, it has been recently reported that DMD is involved in the development and progression of myogenic tumors, assigning DMD a tumor suppressor activity in these types of cancer. However, there are only few reports that analyze DMD in non-myogenic tumors. Our study was designed to examine DMD expression and genetic alterations in non-myogenic tumors using public repositories. We also evaluated the overall survival of patients with and without DMD mutations. We studied 59 gene expression microarrays (GEO database) and RNAseq (cBioPortal) datasets that included 9817 human samples. We found reduced DMD expression in 15/27 (56%) pairwise comparisons performed (Fold-Change (FC) ≤ 0.70; p-value range = 0.04-1.5x10-20). The analysis of RNAseq studies revealed a median frequency of DMD genetic alterations of 3.4%, higher or similar to other well-known tumor suppressor genes. In addition, we observed significant poorer overall survival for patients with DMD mutations. The analyses of paired tumor/normal tissues showed that the majority of tumor specimens had lower DMD expression compared to their normal adjacent counterpart. Interestingly, statistical significant over-expression of DMD was found in 6/27 studies (FC ≥ 1.4; p-value range = 0.03-3.4x10-15). These results support that DMD expression and genetic alterations are frequent and relevant in non-myogenic tumors. The study and validation of DMD as a new player in tumor development and as a new prognostic factor for tumor progression and survival are warranted."
},
{
"pmid": "9915494",
"title": "Expression profiling using cDNA microarrays.",
"abstract": "cDNA microarrays are capable of profiling gene expression patterns of tens of thousands of genes in a single experiment. DNA targets, in the form of 3' expressed sequence tags (ESTs), are arrayed onto glass slides (or membranes) and probed with fluorescent- or radioactively-labelled cDNAs. Here, we review technical aspects of cDNA microarrays, including the general principles, fabrication of the arrays, target labelling, image analysis and data extraction, management and mining."
},
{
"pmid": "10582567",
"title": "Clustering gene expression patterns.",
"abstract": "Recent advances in biotechnology allow researchers to measure expression levels for thousands of genes simultaneously, across different conditions and over time. Analysis of data produced by such experiments offers potential insight into gene function and regulatory mechanisms. A key step in the analysis of gene expression data is the detection of groups of genes that manifest similar expression patterns. The corresponding algorithmic problem is to cluster multicondition gene expression patterns. In this paper we describe a novel clustering algorithm that was developed for analysis of gene expression data. We define an appropriate stochastic error model on the input, and prove that under the conditions of the model, the algorithm recovers the cluster structure with high probability. The running time of the algorithm on an n-gene dataset is O[n2[log(n)]c]. We also present a practical heuristic based on the same algorithmic ideas. The heuristic was implemented and its performance is demonstrated on simulated data and on real gene expression data, with very promising results."
}
] |
Heliyon | 31938750 | PMC6953713 | 10.1016/j.heliyon.2020.e03172 | ASGOP: An aggregated similarity-based greedy-oriented approach for relational DDBSs design | In the literature on distributed database systems (DDBSs), several methods have sought to achieve a satisfactory reduction in transmission cost (TC) and have proven substantially effective. Data fragmentation, site clustering and data distribution are considered the leading TC-mitigating factors. Site clustering aims at grouping sites appropriately according to certain similarity metrics, while data distribution seeks to allocate the fragmented data to clusters/sites properly. Combining these methods has proven fruitful with respect to TC reduction and network overheads. In this work, therefore, a heuristic clustering-based approach for vertical fragmentation and data allocation is carefully designed. The focus is on proposing an effective solution for improving relational DDBS throughput through an aggregated similarity-based fragmentation procedure, effective site clustering and a greedy algorithm-driven data allocation model. Moreover, data replication is also considered so that TC is further minimized. In the evaluation delineated below, the experimental findings have been observed to be promising. | 2. Related work. 2.1. Data fragmentation. Data fragmentation (vertical, horizontal or mixed) has long been seen to play a key role in DDBS performance enhancement. In fact, it is commonly agreed that the more appropriate the data fragmentation and allocation (including replication) are, the more likely it is that the overall performance of the DDBS will be satisfactory (Nashat and Amer, 2018). In (Nashat and Amer, 2018), a fine-grained taxonomy was drawn up and examined extensively: more than one hundred references (chapters, papers, reports, books, etc.) were investigated in both static and dynamic environments. The key motivation behind this taxonomy was to identify the drawbacks and shortcomings from which most earlier works were observed to suffer. Data fragmentation, data allocation and replication were all studied and then classified according to taxonomy-centric metrics. The aim was to pinpoint these defects so that more effective methods for DDBS performance improvement could be designed. The reduction of TC (including communication costs and response time) was the major goal that a good number of previous works sought to achieve. Maximizing data locality and mitigating remote data access were observed to be crucial issues that always need to be tackled wisely so that TC is decreased. Raouf et al. (2018) developed a cloud-based architecture for DDBS design. Data allocation for the resulting fragments, along with data replication at run time, was also considered so that the DDBMSs were able to work in parallel to process clients' queries. The work also studied the clustering of sites to further increase DDBS throughput by maximizing the locality of the data concerned. Nevertheless, some drawbacks were noted, such as the selection of cluster leaders, which was intuitive and impractical to apply in an efficient environment, since almost all DDBSs today have the same specifications for all sites, specifically in peer-to-peer networks. Meanwhile, Wiese et al.
(2016) studied the data replication problem (DRP) in depth, presenting DRP as an integer linear program under the assumption of overlapping horizontally split fragments. That is, the replication problem was treated as an optimization problem whose aim is to place copies of fragments at as few network sites as possible. Along the same lines, Mahi et al. (2018) proposed a method based on the Particle Swarm Optimization (PSO) algorithm to lessen TC by solving the data allocation problem (DAP) with PSO. The work's performance was observed and graded on 20 different test problems. On the other hand, Sewisy et al. (2017) and Amer (2018) incorporated site clustering and a cost-effective model of data allocation and replication into a single work. The results obtained were highly encouraging. Moreover, the authors in (Sewisy et al., 2017) reported results both with and without site clustering in order to show the impact of site clustering on DDBS performance. Likewise, Abdalla and Artoli (2019) presented an enhanced version of (Sewisy et al., 2017). This work was evaluated against (Sewisy et al., 2017) and shown to behave slightly better in most cases. A small-scale experimental study was conducted to exhibit the effectiveness of the enhanced approach. Different data allocation scenarios were addressed, and data replication was carried out using the replication model proposed in (Wiese et al., 2016). A significant enhancement was recorded in terms of overall DDBS performance through TC mitigation. Cluster and site constraints were maintained to simulate a real-world DDBS and to strengthen the proposed work's effectiveness. Lastly, Luong et al. (2018) proposed a rough k-means clustering technique for vertical fragmentation. Distance and similarity were combined with the upper and lower approximations to improve the proposed algorithm. The average error cost was seen to be high, as both the upper and lower approximations were addressed while updating the new cluster centers. 2.2. Data allocation. To solve the DAP, a good number of approaches have been proposed in the literature and studied thoroughly for both redundant and non-redundant settings. In (Mukherjee, 2011), a cost model for data distribution over sites was presented to lessen communication costs. In (Tonini and Siqueira, 2013), an algorithm was proposed to find a distributed allocation schema that improves query performance based on query history and data-pattern analysis. A large biological database was used as a case study for algorithm evaluation, and promising results were observed. However, static data allocation has been widely considered ineffectual for DDBS performance in an ever-changing environment, so well-structured approaches of a dynamic nature are needed for data allocation in dynamic environments. A holistic data allocation approach was first proposed in (Apers, 1988) to solve the problem of dynamically assigning data over network sites; in fact, this approach has been the core upon which many existing algorithms for dynamic data allocation have been built. For the same purpose, Wolfson et al. (1997) developed a dynamic algorithm (named adaptive data replication, ADR) within a framework for dynamic data allocation. A genetic algorithm-based method was provided in (Rahmani et al., 2009) to solve the data allocation problem in two steps.
First, site clusters were formed based on communication costs; second, the targeted data were scattered over the clusters using the GA (an illustrative cost-driven allocation sketch is given after this entry). In the same vein, Singh (2016) proposed a data allocation framework for non-replicated dynamic DDBSs using the threshold and time constraint algorithm (TTCA). TTCA's performance was experimentally compared with the threshold algorithm on the basis of the total reallocation cost and the number of fragment migrations over the network. The findings illustrated that TTCA is more effective (in terms of performance improvement) than the threshold algorithm, chiefly when the access frequency pattern changes swiftly (the basic threshold trigger is also sketched after this entry). In (Kamali et al., 2011), an algorithm for tackling data allocation in replicated DDBSs was developed. Several aspects were included, such as the replication strategy and the "non-uniform" distances between network sites. The results showed that this algorithm yielded a good solution to data allocation in a DDBS; some flaws, on the other hand, were noted, such as its inability to determine the number of fragment replicas. Similarly, a dynamic approach for fragment allocation was drawn up in (Mukherjee, 2011), taking time constraints, the threshold value and the transmitted volume of data into account. The problem of data allocation in an ever-changing environment was studied in (Li and Wong, 2013). First, the problem was defined and time series models were used to perform short-term load forecasting, so that node-number adjustment and fragment reallocation could be determined in advance. Consequently, node overloading and performance deterioration could be avoided, particularly when fragment migration grows steadily. Load balancing was achieved under the assumption that future workloads can be modelled as a time series. In essence, the algorithm was presented to show that the time series-based approach outperformed the threshold-based one. By the same token, a non-replicated data allocation approach for a dynamic environment was drawn up in (Abdalla et al., 2014). The proposed algorithm, called the "Performance Optimality Enhancement Algorithm" (POEA), was designed to comprehensively integrate concepts used in earlier algorithms. Time and site constraints and the changing patterns of data access were taken into account. Moreover, the shortest-path problem between sites was incorporated into POEA and used when migration decisions were being made; this step led to a significant decrease in data migration. The experimental results provide solid evidence that POEA contributed efficiently to mitigating transmission costs and response time. Finally, to solve the DAP, Lotfi (2019) proposed a hybrid strategy using the differential evolution (DE) algorithm and the variable neighborhood search (VNS) technique. The author sought to raise DE performance through the selection and crossover operators. The proposed approach navigated the search space via DE and performed further navigation using the neighborhood search technique. This approach was experimentally tested against state-of-the-art techniques and shown to be effective. | [] | []
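A minimal Python sketch of the cost-driven, greedy style of fragment allocation that runs through the works surveyed above (and, in spirit, through the greedy allocation model of this paper) is given below. It simply places each fragment at the cluster with the smallest access-weighted transmission cost; the function name, the dictionary-shaped inputs and the additive cost model are illustrative assumptions rather than the exact procedure of any cited work.

def greedy_allocate(fragments, clusters, access_freq, unit_cost):
    # access_freq[c][f]: how often queries issued at cluster c use fragment f
    # unit_cost[c][home]: cost of shipping one data unit from cluster 'home' to cluster c
    allocation = {}
    for f in fragments:
        best_home, best_cost = None, float("inf")
        for home in clusters:
            # total transmission cost if fragment f were stored at 'home'
            cost = sum(access_freq[c][f] * unit_cost[c][home] for c in clusters)
            if cost < best_cost:
                best_home, best_cost = home, cost
        allocation[f] = best_home
    return allocation

In a replicated setting, the same loop can be repeated to add further copies of a fragment for as long as the saving in remote-access cost outweighs the additional update and storage cost.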
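The threshold-style dynamic schemes summarized above (the threshold algorithm and TTCA, for instance) share one basic trigger: a fragment migrates once some remote site has accessed it sufficiently more often than its current owner. A minimal sketch of that trigger follows; the counter handling and the threshold parameter are simplifying assumptions.

def maybe_migrate(owner, access_counts, threshold):
    # access_counts: dict mapping site -> number of accesses to this fragment
    hottest = max(access_counts, key=access_counts.get)
    if hottest != owner and access_counts[hottest] - access_counts[owner] > threshold:
        return hottest  # migrate the fragment to its most frequent accessor
    return owner        # otherwise keep the current placement

Time-constraint variants such as TTCA additionally take elapsed time into account before a migration is carried out.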
Brain Sciences | 31817120 | PMC6956025 | 10.3390/brainsci9120355 | Improved EOG Artifact Removal Using Wavelet Enhanced Independent Component Analysis | Electroencephalography (EEG) signals are frequently contaminated with unwanted electrooculographic (EOG) artifacts. Blinks and eye movements generate large amplitude peaks that corrupt EEG measurements. Independent component analysis (ICA) has been used extensively in manual and automatic methods to remove artifacts. By decomposing the signals into neural and artifactual components, artifact components can be eliminated before signal reconstruction. Unfortunately, removing entire components may result in losing important neural information present in the component and may eventually distort the spectral characteristics of the reconstructed signals. An alternative approach is to correct artifacts within the independent components instead of rejecting the entire component, for which wavelet transform based decomposition methods have been used with good results. An improved, fully automatic wavelet-based component correction method is presented for EOG artifact removal that corrects EOG components selectively, i.e., within EOG activity regions only, leaving other parts of the component untouched. In addition, the method does not rely on reference EOG channels. The results show that the proposed method outperforms other component rejection and wavelet-based EOG removal methods in its accuracy both in the time and the spectral domain. The proposed new method represents an important step towards the development of accurate, reliable and automatic EOG artifact removal methods. | 2. Related Work Eye movements and blinks are transient activities that occur relatively infrequently, but unfortunately generate very high amplitude peaks. These artifacts can be easily identified visually in frontal lobe signal waveforms. Since the spectrum of the EOG artifact overlaps the spectrum of the underlying EEG signal, simple filtering methods are unable to remove artifact effects entirely [7]. Adaptive filtering methods based on autoregressive models and reference EOG signals [8] have been used for removing EOG artifacts, but these methods do not take into consideration that the reference EOG channels are also contaminated with EEG data, which makes it difficult to obtain an accurate estimate of the EOG effect [9]. For these reasons, independent component analysis [10] is the method of choice today for EOG artifact removal. Originally developed for solving the blind source separation (BSS) problem, ICA is a robust method for detecting artifacts by decomposing the EEG signals into their independent source components. Since Makeig et al. [11] suggested the use of ICA for artifact removal, many alternative ICA-based artifact cleaning methods have been proposed. The major differences among the methods are in (i) how artifact components are identified (manual, reference electrode and statistical methods) and (ii) how artifact removal is performed (rejection of the entire artifact component versus artifact correction within the component). Joyce et al. [4] proposed the use of ICA for automatic EOG artifact removal using reference EOG channels for correlation-based artifact component identification followed by full component rejection. The auto-regressive exogenous method (ICA-ARX) [5] also removes ocular artifacts using EOG reference signals; here, input and output signal pairs are required to build the auto-regressive model.
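As a stripped-down illustration of the reference-channel idea behind such adaptive-filtering and ICA-ARX methods, the following Python sketch regresses each EEG channel on a recorded EOG reference by ordinary least squares and subtracts the scaled EOG. The synthetic data and the plain regression rule are assumptions for illustration only; this is not the ICA-ARX model itself.

# Minimal regression-based EOG correction sketch (illustrative only).
import numpy as np

def regress_out_eog(eeg, eog):
    """eeg: (n_samples, n_channels); eog: (n_samples,). Returns corrected EEG."""
    eog = eog - eog.mean()
    cleaned = np.empty_like(eeg)
    for ch in range(eeg.shape[1]):
        x = eeg[:, ch]
        b = np.dot(eog, x - x.mean()) / np.dot(eog, eog)  # least-squares slope
        cleaned[:, ch] = x - b * eog                       # subtract EOG leakage
    return cleaned

rng = np.random.default_rng(2)
eog = np.zeros(1000); eog[400:430] = 80.0                  # synthetic blink
eeg = rng.standard_normal((1000, 3)) + np.outer(eog, [0.5, 0.3, 0.1])
cleaned = regress_out_eog(eeg, eog)
print(np.abs(cleaned[400:430]).max())  # residual in the blink window is small

As noted above, the weakness of this family of methods is that the EOG reference itself contains EEG activity, so the subtraction also removes some neural signal.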
A similar approach is used in the popular FASTER [12] and DETECT [13] Matlab-based artifact removal toolboxes. The widely used ADJUST [14] toolbox, however, does not rely on reference electrodes; it uses spatial and temporal component features to identify EOG components [15]. An example of ICA followed by adaptive filtering based EOG removal is [16]. While ICA-based methods showed encouraging results for EOG artifact removal, it has also been pointed out that ocular sources are not entirely separated from neural sources [4], which makes the full component rejection method a non-ideal solution. Zeroing out the weights of an artifact component before the inverse ICA is performed will also remove all neural data present in that component. To overcome this problem, more sophisticated ICA-based methods were proposed for removing artifacts while retaining the original neural information present in the data. The discrete wavelet transform (DWT) can be used to decompose a measured signal or its ICA-derived independent components into wavelet components using basis functions from wavelet families such as Symlets, Coifs, Haar, etc. [17]. Examples of wavelet-based artifact removal from raw measured data are [15] and [16], in which the wavelet decomposition was combined with statistical approaches to extract artifact features from the decomposed raw EEG signal using the Symlet basis function. Here the assumption, similar to ICA, is that one wavelet component describes the artifact, which, when removed, removes the artifact from the signal. The most successful approach for artifact removal is the combination of wavelet decomposition and ICA [18,19,20,21]. One approach is to apply ICA to the wavelet decomposed signal components (AWICA) [22]. In the AWICA method, artifacts are detected using statistical measures, such as kurtosis or Renyi's entropy [23]. The drawback of this approach, however, is that in higher dimensions, Renyi's entropy incurs high computational cost due to the kernel density needed for the component [24], and the two statistical metrics could not differentiate clearly between EOG and ECG peaks. The same approach was proposed by Kelly et al. [24], where the artifactual coefficients above a threshold were replaced by the median of a set of coefficients outside the artifacts, but they only tested the method on a measured dataset without evaluating the performance on a standard dataset containing EOG artifact annotations. Another approach is to apply the wavelet transform after ICA decomposition to the artifact independent components, as in wavelet-enhanced ICA (wICA) [6]. In the wICA method, the wavelet transform was used in combination with ICA, relying on the fact that wavelet coefficients of the artifact component typically have higher amplitudes than those of the cerebral activity components; by zeroing out the coefficients that are larger than a certain threshold, EOG artifacts can be removed from the signal (a minimal sketch of this wICA-style thresholding is given after this record's reference list). For successful wavelet-based removal, the threshold selection is crucial. An adaptive threshold method based on DWT was introduced to identify and remove EOG artifacts [25] without losing the related EEG information. However, these methods are not as effective if applied to the raw signal rather than to the ICA components. This approach was modified by Nguyen et al. [26], who introduced the wavelet neural network (WNN; clean and contaminated EEG data are used to train the network) and achieved a 9.07 µV root mean square error (RMSE) between the cleaned and the artifact-free data.
This method works without a reference EOG signal that is normally required in the linear regression based methods [8]. Burger and van den Heever [3] improved upon this method but their solution can only remove eye blinks; it does not work for eye movement artifacts. Besides wavelet transformation, other decomposition methods have been recommended, such as ensemble empirical mode decomposition for single channel EEG followed by ICA for artifact removal [27]. | [
"25834104",
"10851802",
"15032997",
"16828877",
"15865141",
"10946390",
"20654646",
"23638169",
"20636297",
"27751622",
"24845273",
"20561810",
"21097374",
"26306657",
"7584893",
"21810266",
"10459681",
"20811092",
"10851218",
"17517386",
"29425248",
"19002707",
"24512692",
"23086501",
"24968340",
"17765604",
"10731767"
] | [
{
"pmid": "25834104",
"title": "EEG artifact removal-state-of-the-art and guidelines.",
"abstract": "This paper presents an extensive review on the artifact removal algorithms used to remove the main sources of interference encountered in the electroencephalogram (EEG), specifically ocular, muscular and cardiac artifacts. We first introduce background knowledge on the characteristics of EEG activity, of the artifacts and of the EEG measurement model. Then, we present algorithms commonly employed in the literature and describe their key features. Lastly, principally on the basis of the results provided by various researchers, but also supported by our own experience, we compare the state-of-the-art methods in terms of reported performance, and provide guidelines on how to choose a suitable artifact removal algorithm for a given scenario. With this review we have concluded that, without prior knowledge of the recorded EEG signal or the contaminants, the safest approach is to correct the measured EEG using independent component analysis-to be precise, an algorithm based on second-order statistics such as second-order blind identification (SOBI). Other effective alternatives include extended information maximization (InfoMax) and an adaptive mixture of independent component analyzers (AMICA), based on higher order statistics. All of these algorithms have proved particularly effective with simulations and, more importantly, with data collected in controlled recording conditions. Moreover, whenever prior knowledge is available, then a constrained form of the chosen method should be used in order to incorporate such additional information. Finally, since which algorithm is the best performing is highly dependent on the type of the EEG signal, the artifacts and the signal to contaminant ratio, we believe that the optimal method for removing artifacts from the EEG consists in combining more than one algorithm to correct the signal using multiple processing stages, even though this is an option largely unexplored by researchers in the area."
},
{
"pmid": "10851802",
"title": "Independent component approach to the analysis of EEG and MEG recordings.",
"abstract": "Multichannel recordings of the electromagnetic fields emerging from neural currents in the brain generate large amounts of data. Suitable feature extraction methods are, therefore, useful to facilitate the representation and interpretation of the data. Recently developed independent component analysis (ICA) has been shown to be an efficient tool for artifact identification and extraction from electroencephalographic (EEG) and magnetoencephalographic (MEG) recordings. In addition, ICA has been applied to the analysis of brain signals evoked by sensory stimuli. This paper reviews our recent results in this field."
},
{
"pmid": "15032997",
"title": "Automatic removal of eye movement and blink artifacts from EEG data using blind component separation.",
"abstract": "Signals from eye movements and blinks can be orders of magnitude larger than brain-generated electrical potentials and are one of the main sources of artifacts in electroencephalographic (EEG) data. Rejecting contaminated trials causes substantial data loss, and restricting eye movements/blinks limits the experimental designs possible and may impact the cognitive processes under investigation. This article presents a method based on blind source separation (BSS) for automatic removal of electroocular artifacts from EEG data. BBS is a signal-processing methodology that includes independent component analysis (ICA). In contrast to previously explored ICA-based methods for artifact removal, this method is automated. Moreover, the BSS algorithm described herein can isolate correlated electroocular components with a high degree of accuracy. Although the focus is on eliminating ocular artifacts in EEG data, the approach can be extended to other sources of EEG contamination such as cardiac signals, environmental noise, and electrode drift, and adapted for use with magnetoencephalographic (MEG) data, a magnetic correlate of EEG."
},
{
"pmid": "16828877",
"title": "Recovering EEG brain signals: artifact suppression with wavelet enhanced independent component analysis.",
"abstract": "Independent component analysis (ICA) has been proven useful for suppression of artifacts in EEG recordings. It involves separation of measured signals into statistically independent components or sources, followed by rejection of those deemed artificial. We show that a \"leak\" of cerebral activity of interest into components marked as artificial means that one is going to lost that activity. To overcome this problem we propose a novel wavelet enhanced ICA method (wICA) that applies a wavelet thresholding not to the observed raw EEG but to the demixed independent components as an intermediate step. It allows recovering the neural activity present in \"artificial\" components. Employing semi-simulated and real EEG recordings we quantify the distortions of the cerebral part of EEGs introduced by the ICA and wICA artifact suppressions in the time and frequency domains. In the context of studying cortical circuitry we also evaluate spectral and partial spectral coherences over ICA/wICA-corrected EEGs. Our results suggest that ICA may lead to an underestimation of the neural power spectrum and to an overestimation of the coherence between different cortical sites. wICA artifact suppression preserves both spectral (amplitude) and coherence (phase) characteristics of the underlying neural activity."
},
{
"pmid": "15865141",
"title": "Removal of eye blinking artifact from the electro-encephalogram, incorporating a new constrained blind source separation algorithm.",
"abstract": "A robust constrained blind source separation (CBSS) algorithm has been developed as an effective means to remove ocular artifacts (OAs) from electro-encephalograms (EEGs). Currently, clinicians reject a data segment if the patient blinked or spoke during the observation interval. The rejected data segment could contain important information masked by the artifact. In the CBSS technique, a reference signal was exploited as a constraint. The constrained problem was then converted to an unconstrained problem by means of non-linear penalty functions weighted by the penalty terms. This led to the modification of the overall cost function, which was then minimised with the natural gradient algorithm. The effectiveness of the algorithm was also examined for the removal of other interfering signals such as electrocardiograms. The CBSS algorithm was tested with ten sets of data containing OAs. The proposed algorithm yielded, on average, a 19% performance improvement over Parra's BSS algorithm for removing OAs."
},
{
"pmid": "10946390",
"title": "Independent component analysis: algorithms and applications.",
"abstract": "A fundamental problem in neural network research, as well as in many other disciplines, is finding a suitable representation of multivariate data, i.e. random vectors. For reasons of computational and conceptual simplicity, the representation is often sought as a linear transformation of the original data. In other words, each component of the representation is a linear combination of the original variables. Well-known linear transformation methods include principal component analysis, factor analysis, and projection pursuit. Independent component analysis (ICA) is a recently developed method in which the goal is to find a linear representation of non-Gaussian data so that the components are statistically independent, or as independent as possible. Such a representation seems to capture the essential structure of the data in many applications, including feature extraction and signal separation. In this paper, we present the basic theory and applications of ICA, and our recent work on the subject."
},
{
"pmid": "20654646",
"title": "FASTER: Fully Automated Statistical Thresholding for EEG artifact Rejection.",
"abstract": "Electroencephalogram (EEG) data are typically contaminated with artifacts (e.g., by eye movements). The effect of artifacts can be attenuated by deleting data with amplitudes over a certain value, for example. Independent component analysis (ICA) separates EEG data into neural activity and artifact; once identified, artifactual components can be deleted from the data. Often, artifact rejection algorithms require supervision (e.g., training using canonical artifacts). Many artifact rejection methods are time consuming when applied to high-density EEG data. We describe FASTER (Fully Automated Statistical Thresholding for EEG artifact Rejection). Parameters were estimated for various aspects of data (e.g., channel variance) in both the EEG time series and in the independent components of the EEG: outliers were detected and removed. FASTER was tested on both simulated EEG (n=47) and real EEG (n=47) data on 128-, 64-, and 32-scalp electrode arrays. FASTER was compared to supervised artifact detection by experts and to a variant of the Statistical Control for Dense Arrays of Sensors (SCADS) method. FASTER had >90% sensitivity and specificity for detection of contaminated channels, eye movement and EMG artifacts, linear trends and white noise. FASTER generally had >60% sensitivity and specificity for detection of contaminated epochs, vs. 0.15% for SCADS. FASTER also aggregates the ERP across subject datasets, and detects outlier datasets. The variance in the ERP baseline, a measure of noise, was significantly lower for FASTER than either the supervised or SCADS methods. ERP amplitude did not differ significantly between FASTER and the supervised approach."
},
{
"pmid": "23638169",
"title": "DETECT: a MATLAB toolbox for event detection and identification in time series, with applications to artifact detection in EEG signals.",
"abstract": "Recent advances in sensor and recording technology have allowed scientists to acquire very large time-series datasets. Researchers often analyze these datasets in the context of events, which are intervals of time where the properties of the signal change relative to a baseline signal. We have developed DETECT, a MATLAB toolbox for detecting event time intervals in long, multi-channel time series. Our primary goal is to produce a toolbox that is simple for researchers to use, allowing them to quickly train a model on multiple classes of events, assess the accuracy of the model, and determine how closely the results agree with their own manual identification of events without requiring extensive programming knowledge or machine learning experience. As an illustration, we discuss application of the DETECT toolbox for detecting signal artifacts found in continuous multi-channel EEG recordings and show the functionality of the tools found in the toolbox. We also discuss the application of DETECT for identifying irregular heartbeat waveforms found in electrocardiogram (ECG) data as an additional illustration."
},
{
"pmid": "20636297",
"title": "ADJUST: An automatic EEG artifact detector based on the joint use of spatial and temporal features.",
"abstract": "A successful method for removing artifacts from electroencephalogram (EEG) recordings is Independent Component Analysis (ICA), but its implementation remains largely user-dependent. Here, we propose a completely automatic algorithm (ADJUST) that identifies artifacted independent components by combining stereotyped artifact-specific spatial and temporal features. Features were optimized to capture blinks, eye movements, and generic discontinuities on a feature selection dataset. Validation on a totally different EEG dataset shows that (1) ADJUST's classification of independent components largely matches a manual one by experts (agreement on 95.2% of the data variance), and (2) Removal of the artifacted components detected by ADJUST leads to neat reconstruction of visual and auditory event-related potentials from heavily artifacted data. These results demonstrate that ADJUST provides a fast, efficient, and automatic way to use ICA for artifact removal."
},
{
"pmid": "27751622",
"title": "Methods for artifact detection and removal from scalp EEG: A review.",
"abstract": "Electroencephalography (EEG) is the most popular brain activity recording technique used in wide range of applications. One of the commonly faced problems in EEG recordings is the presence of artifacts that come from sources other than brain and contaminate the acquired signals significantly. Therefore, much research over the past 15 years has focused on identifying ways for handling such artifacts in the preprocessing stage. However, this is still an active area of research as no single existing artifact detection/removal method is complete or universal. This article presents an extensive review of the existing state-of-the-art artifact detection and removal methods from scalp EEG for all potential EEG-based applications and analyses the pros and cons of each method. First, a general overview of the different artifact types that are found in scalp EEG and their effect on particular applications are presented. In addition, the methods are compared based on their ability to remove certain types of artifacts and their suitability in relevant applications (only functional comparison is provided not performance evaluation of methods). Finally, the future direction and expected challenges of current research is discussed. Therefore, this review is expected to be helpful for interested researchers who will develop and/or apply artifact handling algorithm/technique in future for their applications as well as for those willing to improve the existing algorithms or propose a new solution in this particular area of research."
},
{
"pmid": "24845273",
"title": "Automated removal of EKG artifact from EEG data using independent component analysis and continuous wavelet transformation.",
"abstract": "The electrical potential produced by the cardiac activity sometimes contaminates electroencephalogram (EEG) recordings, resulting in spiky activities that are referred to as electrocardiographic (EKG) artifact. For a variety of reasons it is often desirable to automatically detect and remove these artifacts. Especially, for accurate source localization of epileptic spikes in an EEG recording from a patient with epilepsy, it is of great importance to remove any concurrent artifact. Due to similarities in morphology between the EKG artifacts and epileptic spikes, any automated artifact removal algorithm must have an extremely low false-positive rate in addition to a high detection rate. In this paper, an automated algorithm for removal of EKG artifact is proposed that satisfies such criteria. The proposed method, which uses combines independent component analysis and continuous wavelet transformation, uses both temporal and spatial characteristics of EKG related potentials to identify and remove the artifacts. The method outperforms algorithms that use general statistical features such as entropy and kurtosis for artifact rejection."
},
{
"pmid": "20561810",
"title": "An automated ECG-artifact removal method for trunk muscle surface EMG recordings.",
"abstract": "This study aimed at developing a method for automated electrocardiography (ECG) artifact detection and removal from trunk electromyography signals. Independent Component Analysis (ICA) method was applied to the simulated data set of ECG-corrupted surface electromyography (SEMG) signals. Independent Components (ICs) correspond to ECG artifact were then identified by an automated detection algorithm and subsequently removed. The detection performance of the algorithm was compared to that by visual inspection, while the artifact elimination performance was compared with Butterworth high pass filter at 30 Hz cutoff (BW HPF 30). The automated ECG-artifact detection algorithm successfully recognized the ECG source components in all data sets with a sensitivity of 100% and specificity of 99%. Better performance indicated by a significantly higher correlation coefficient (p<0.001) with the original EMG recordings was found in the SEMG data cleaned by the ICA-based method, than that by BW HPF 30. The automated ECG-artifact removal method for trunk SEMG recordings proposed in this study was demonstrated to produce a very good detection rate and preserved essential EMG components while keeping its distortion to minimum. The automatic nature of our method has solved the problem of visual inspection by standard ICA methods and brings great clinical benefits."
},
{
"pmid": "21097374",
"title": "Fully automated reduction of ocular artifacts in high-dimensional neural data.",
"abstract": "The reduction of artifacts in neural data is a key element in improving analysis of brain recordings and the development of effective brain-computer interfaces. This complex problem becomes even more difficult as the number of channels in the neural recording is increased. Here, new techniques based on wavelet thresholding and independent component analysis (ICA) are developed for use in high-dimensional neural data. The wavelet technique uses a discrete wavelet transform with a Haar basis function to localize artifacts in both time and frequency before removing them with thresholding. Wavelet decomposition level is automatically selected based on the smoothness of artifactual wavelet approximation coefficients. The ICA method separates the signal into independent components, detects artifactual components by measuring the offset between the mean and median of each component, and then removing the correct number of components based on the aforementioned offset and the power of the reconstructed signal. A quantitative method for evaluating these techniques is also presented. Through this evaluation, the novel adaptation of wavelet thresholding is shown to produce superior reduction of ocular artifacts when compared to regression, principal component analysis, and ICA."
},
{
"pmid": "26306657",
"title": "Informed decomposition of electroencephalographic data.",
"abstract": "BACKGROUND\nBlind source separation techniques have become the de facto standard for decomposing electroencephalographic (EEG) data. These methods are poorly suited for incorporating prior information into the decomposition process. While alternative techniques to this problem, such as the use of constrained optimization techniques, have been proposed, these alternative techniques tend to only minimally satisfy the prior constraints. In addition, the experimenter must preset a number of parameters describing both this minimal limit as well as the size of the target subspaces.\n\n\nNEW METHOD\nWe propose an informed decomposition approach that builds upon the constrained optimization approaches for independent components analysis to better model and separate distinct subspaces within EEG data. We use a likelihood function to adaptively determine the optimal model size for each target subspace.\n\n\nRESULTS\nUsing our method we are able to produce ordered independent subspaces that exhibit less residual mixing than those obtained with other methods. The results show an improvement in modeling specific features of the EEG space, while also showing a simultaneous reduction in the number of components needed for each model.\n\n\nCOMPARISON WITH EXISTING METHOD(S)\nWe first compare our approach to common methods in the field of EEG decomposition, such as Infomax, FastICA, PCA, JADE, and SOBI for the task of modeling and removing both EOG and EMG artifacts. We then demonstrate the utility of our approach for the more complex problem of modeling neural activity.\n\n\nCONCLUSIONS\nBy working in a one-size-fits-all fashion current EEG decomposition methods do not adapt to the specifics of each data set and are not well designed to incorporate additional information about the decomposition problem. However, by adding specific information about the problem to the decomposition task, we improve the identification and separation of distinct subspaces within the original data and show better preservation of the remaining data."
},
{
"pmid": "7584893",
"title": "An information-maximization approach to blind separation and blind deconvolution.",
"abstract": "We derive a new self-organizing learning algorithm that maximizes the information transferred in a network of nonlinear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximization has extra properties not found in the linear case (Linsker 1989). The nonlinearities in the transfer function are able to pick up higher-order moments of the input distributions and perform something akin to true redundancy reduction between units in the output representation. This enables the network to separate statistically independent components in the inputs: a higher-order generalization of principal components analysis. We apply the network to the source separation (or cocktail party) problem, successfully separating unknown mixtures of up to 10 speakers. We also show that a variant on the network architecture is able to perform blind deconvolution (cancellation of unknown echoes and reverberation in a speech signal). Finally, we derive dependencies of information transfer on time delays. We suggest that information maximization provides a unifying framework for problems in \"blind\" signal processing."
},
{
"pmid": "21810266",
"title": "Automatic classification of artifactual ICA-components for artifact removal in EEG signals.",
"abstract": "BACKGROUND\nArtifacts contained in EEG recordings hamper both, the visual interpretation by experts as well as the algorithmic processing and analysis (e.g. for Brain-Computer Interfaces (BCI) or for Mental State Monitoring). While hand-optimized selection of source components derived from Independent Component Analysis (ICA) to clean EEG data is widespread, the field could greatly profit from automated solutions based on Machine Learning methods. Existing ICA-based removal strategies depend on explicit recordings of an individual's artifacts or have not been shown to reliably identify muscle artifacts.\n\n\nMETHODS\nWe propose an automatic method for the classification of general artifactual source components. They are estimated by TDSEP, an ICA method that takes temporal correlations into account. The linear classifier is based on an optimized feature subset determined by a Linear Programming Machine (LPM). The subset is composed of features from the frequency-, the spatial- and temporal domain. A subject independent classifier was trained on 640 TDSEP components (reaction time (RT) study, n = 12) that were hand labeled by experts as artifactual or brain sources and tested on 1080 new components of RT data of the same study. Generalization was tested on new data from two studies (auditory Event Related Potential (ERP) paradigm, n = 18; motor imagery BCI paradigm, n = 80) that used data with different channel setups and from new subjects.\n\n\nRESULTS\nBased on six features only, the optimized linear classifier performed on level with the inter-expert disagreement (<10% Mean Squared Error (MSE)) on the RT data. On data of the auditory ERP study, the same pre-calculated classifier generalized well and achieved 15% MSE. On data of the motor imagery paradigm, we demonstrate that the discriminant information used for BCI is preserved when removing up to 60% of the most artifactual source components.\n\n\nCONCLUSIONS\nWe propose a universal and efficient classifier of ICA components for the subject independent removal of artifacts from EEG data. Based on linear methods, it is applicable for different electrode placements and supports the introspection of results. Trained on expert ratings of large data sets, it is not restricted to the detection of eye- and muscle artifacts. Its performance and generalization ability is demonstrated on data of different EEG studies."
},
{
"pmid": "10459681",
"title": "Development of the polysomnographic database on CD-ROM.",
"abstract": "We have developed a polysomnographic database on CD-ROM. The data were obtained from 16 subjects with sleep apnea syndrome. The physiological signals include electroencephalogram, electromyogram, electrooculogram, invasive blood pressure, respiratory wave, oxygen saturation, and cardiac volume as measured by VEST method. The CD-ROM also include programs to analyze polysomnography (PSG) data. The CD-ROM has values: (i) for researchers investigating clinical physiology or non-linear dynamics during sleep apnea syndrome; (ii) for engineers developing a new algorithm for the computerized analysis of PSG data related to sleep apnea syndrome; (iii) for students learning sleep physiology."
},
{
"pmid": "20811092",
"title": "Documenting, modelling and exploiting P300 amplitude changes due to variable target delays in Donchin's speller.",
"abstract": "The P300 is an endogenous event-related potential (ERP) that is naturally elicited by rare and significant external stimuli. P300s are used increasingly frequently in brain-computer interfaces (BCIs) because the users of ERP-based BCIs need no special training. However, P300 waves are hard to detect and, therefore, multiple target stimulus presentations are needed before an interface can make a reliable decision. While significant improvements have been made in the detection of P300s, no particular attention has been paid to the variability in shape and timing of P300 waves in BCIs. In this paper we start filling this gap by documenting, modelling and exploiting a modulation in the amplitude of P300s related to the number of non-targets preceding a target in a Donchin speller. The basic idea in our approach is to use an appropriately weighted average of the responses produced by a classifier during multiple stimulus presentations, instead of the traditional plain average. This makes it possible to weigh more heavily events that are likely to be more informative, thereby increasing the accuracy of classification. The optimal weights are determined through a mathematical model that precisely estimates the accuracy of our speller as well as the expected performance improvement w.r.t. the traditional approach. Tests with two independent datasets show that our approach provides a marked statistically significant improvement in accuracy over the top-performing algorithm presented in the literature to date. The method and the theoretical models we propose are general and can easily be used in other P300-based BCIs with minimal changes."
},
{
"pmid": "10851218",
"title": "PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals.",
"abstract": "The newly inaugurated Research Resource for Complex Physiologic Signals, which was created under the auspices of the National Center for Research Resources of the National Institutes of Health, is intended to stimulate current research and new investigations in the study of cardiovascular and other complex biomedical signals. The resource has 3 interdependent components. PhysioBank is a large and growing archive of well-characterized digital recordings of physiological signals and related data for use by the biomedical research community. It currently includes databases of multiparameter cardiopulmonary, neural, and other biomedical signals from healthy subjects and from patients with a variety of conditions with major public health implications, including life-threatening arrhythmias, congestive heart failure, sleep apnea, neurological disorders, and aging. PhysioToolkit is a library of open-source software for physiological signal processing and analysis, the detection of physiologically significant events using both classic techniques and novel methods based on statistical physics and nonlinear dynamics, the interactive display and characterization of signals, the creation of new databases, the simulation of physiological and other signals, the quantitative evaluation and comparison of analysis methods, and the analysis of nonstationary processes. PhysioNet is an on-line forum for the dissemination and exchange of recorded biomedical signals and open-source software for analyzing them. It provides facilities for the cooperative analysis of data and the evaluation of proposed new algorithms. In addition to providing free electronic access to PhysioBank data and PhysioToolkit software via the World Wide Web (http://www.physionet. org), PhysioNet offers services and training via on-line tutorials to assist users with varying levels of expertise."
},
{
"pmid": "17517386",
"title": "An automatic analysis method for detecting and eliminating ECG artifacts in EEG.",
"abstract": "An automated method for detecting and eliminating electrocardiograph (ECG) artifacts from electroencephalography (EEG) without an additional synchronous ECG channel is proposed in this paper. Considering the properties of wavelet filters and the relationship between wavelet basis and characteristics of ECG artifacts, the concepts for selecting a suitable wavelet basis and scales used in the process are developed. The analysis via the selected basis is without suffering time shift for decomposition and detection/elimination procedures after wavelet transformation. The detection rates, above 97.5% for MIT/BIH and NTUH recordings, show a pretty good performance in ECG artifact detection and elimination."
},
{
"pmid": "29425248",
"title": "Electrooculography-based continuous eye-writing recognition system for efficient assistive communication systems.",
"abstract": "Human-computer interface systems whose input is based on eye movements can serve as a means of communication for patients with locked-in syndrome. Eye-writing is one such system; users can input characters by moving their eyes to follow the lines of the strokes corresponding to characters. Although this input method makes it easy for patients to get started because of their familiarity with handwriting, existing eye-writing systems suffer from slow input rates because they require a pause between input characters to simplify the automatic recognition process. In this paper, we propose a continuous eye-writing recognition system that achieves a rapid input rate because it accepts characters eye-written continuously, with no pauses. For recognition purposes, the proposed system first detects eye movements using electrooculography (EOG), and then a hidden Markov model (HMM) is applied to model the EOG signals and recognize the eye-written characters. Additionally, this paper investigates an EOG adaptation that uses a deep neural network (DNN)-based HMM. Experiments with six participants showed an average input speed of 27.9 character/min using Japanese Katakana as the input target characters. A Katakana character-recognition error rate of only 5.0% was achieved using 13.8 minutes of adaptation data."
},
{
"pmid": "19002707",
"title": "Helmet-based physiological signal monitoring system.",
"abstract": "A helmet-based system that was able to monitor the drowsiness of a soldier was developed. The helmet system monitored the electrocardiogram, electrooculogram and electroencephalogram (alpha waves) without constraints. Six dry electrodes were mounted at five locations on the helmet: both temporal sides, forehead region and upper and lower jaw strips. The electrodes were connected to an amplifier that transferred signals to a laptop computer via Bluetooth wireless communication. The system was validated by comparing the signal quality with conventional recording methods. Data were acquired from three healthy male volunteers for 12 min twice a day whilst they were sitting in a chair wearing the sensor-installed helmet. Experimental results showed that physiological signals for the helmet user were measured with acceptable quality without any intrusions on physical activities. The helmet system discriminated between the alert and drowsiness states by detecting blinking and heart rate variability (HRV) parameters extracted from ECG. Blinking duration and eye reopening time were increased during the sleepiness state compared to the alert state. Also, positive peak values of the sleepiness state were much higher, and the negative peaks were much lower than that of the alert state. The LF/HF ratio also decreased during drowsiness. This study shows the feasibility for using this helmet system: the subjects' health status and mental states could be monitored without constraints whilst they were working."
},
{
"pmid": "24512692",
"title": "Artifact characterization and removal for in vivo neural recording.",
"abstract": "BACKGROUND\nIn vivo neural recordings are often corrupted by different artifacts, especially in a less-constrained recording environment. Due to limited understanding of the artifacts appeared in the in vivo neural data, it is more challenging to identify artifacts from neural signal components compared with other applications. The objective of this work is to analyze artifact characteristics and to develop an algorithm for automatic artifact detection and removal without distorting the signals of interest.\n\n\nNEW METHOD\nThe proposed algorithm for artifact detection and removal is based on the stationary wavelet transform with selected frequency bands of neural signals. The selection of frequency bands is based on the spectrum characteristics of in vivo neural data. Further, to make the proposed algorithm robust under different recording conditions, a modified universal-threshold value is proposed.\n\n\nRESULTS\nExtensive simulations have been performed to evaluate the performance of the proposed algorithm in terms of both amount of artifact removal and amount of distortion to neural signals. The quantitative results reveal that the algorithm is quite robust for different artifact types and artifact-to-signal ratio.\n\n\nCOMPARISON WITH EXISTING METHODS\nBoth real and synthesized data have been used for testing the proposed algorithm in comparison with other artifact removal algorithms (e.g., ICA, wICA, wCCA, EMD-ICA, and EMD-CCA) found in the literature. Comparative testing results suggest that the proposed algorithm performs better than the available algorithms.\n\n\nCONCLUSION\nOur work is expected to be useful for future research on in vivo neural signal processing and eventually to develop a real-time neural interface for advanced neuroscience and behavioral experiments."
},
{
"pmid": "23086501",
"title": "The use of ensemble empirical mode decomposition with canonical correlation analysis as a novel artifact removal technique.",
"abstract": "Biosignal measurement and processing is increasingly being deployed in ambulatory situations particularly in connected health applications. Such an environment dramatically increases the likelihood of artifacts which can occlude features of interest and reduce the quality of information available in the signal. If multichannel recordings are available for a given signal source, then there are currently a considerable range of methods which can suppress or in some cases remove the distorting effect of such artifacts. There are, however, considerably fewer techniques available if only a single-channel measurement is available and yet single-channel measurements are important where minimal instrumentation complexity is required. This paper describes a novel artifact removal technique for use in such a context. The technique known as ensemble empirical mode decomposition with canonical correlation analysis (EEMD-CCA) is capable of operating on single-channel measurements. The EEMD technique is first used to decompose the single-channel signal into a multidimensional signal. The CCA technique is then employed to isolate the artifact components from the underlying signal using second-order statistics. The new technique is tested against the currently available wavelet denoising and EEMD-ICA techniques using both electroencephalography and functional near-infrared spectroscopy data and is shown to produce significantly improved results."
},
{
"pmid": "24968340",
"title": "Unsupervised eye blink artifact denoising of EEG data with modified multiscale sample entropy, Kurtosis, and wavelet-ICA.",
"abstract": "Brain activities commonly recorded using the electroencephalogram (EEG) are contaminated with ocular artifacts. These activities can be suppressed using a robust independent component analysis (ICA) tool, but its efficiency relies on manual intervention to accurately identify the independent artifactual components. In this paper, we present a new unsupervised, robust, and computationally fast statistical algorithm that uses modified multiscale sample entropy (mMSE) and Kurtosis to automatically identify the independent eye blink artifactual components, and subsequently denoise these components using biorthogonal wavelet decomposition. A 95% two-sided confidence interval of the mean is used to determine the threshold for Kurtosis and mMSE to identify the blink related components in the ICA decomposed data. The algorithm preserves the persistent neural activity in the independent components and removes only the artifactual activity. Results have shown improved performance in the reconstructed EEG signals using the proposed unsupervised algorithm in terms of mutual information, correlation coefficient, and spectral coherence in comparison with conventional zeroing-ICA and wavelet enhanced ICA artifact removal techniques. The algorithm achieves an average sensitivity of 90% and an average specificity of 98%, with average execution time for the datasets ( N = 7) of 0.06 s ( SD = 0.021) compared to the conventional wICA requiring 0.1078 s ( SD = 0.004). The proposed algorithm neither requires manual identification for artifactual components nor additional electrooculographic channel. The algorithm was tested for 12 channels, but might be useful for dense EEG systems."
},
{
"pmid": "17765604",
"title": "Reliability of quantitative EEG features.",
"abstract": "OBJECTIVE\nTo investigate the reliability of several well-known quantitative EEG (qEEG) features in the elderly in the resting, eyes closed condition and study the effects of epoch length and channel derivations on reliability.\n\n\nMETHODS\nFifteen healthy adults, over 50 years of age, underwent 10 EEG recordings over a 2-month period. Various qEEG features derived from power spectral, coherence, entropy and complexity analysis of the EEG were computed. Reliability was quantified using an intraclass correlation coefficient.\n\n\nRESULTS\nThe highest reliability was obtained with the average montage, reliability increased with epoch length up to 40s, longer epochs gave only marginal improvement. The reliability of the qEEG features was highest for power spectral parameters, followed by regularity measures based on entropy and complexity, coherence being least reliable.\n\n\nCONCLUSIONS\nMontage and epoch length had considerable effects on reliability. Several apparently unrelated regularity measures had similar stability. Reliability of coherence measures was strongly dependent on channel location and frequency bands.\n\n\nSIGNIFICANCE\nThe reliability of regularity measures has until now received limited attention. Low reliability of coherence measures in general may limit their usefulness in the clinical setting."
},
{
"pmid": "10731767",
"title": "Removing electroencephalographic artifacts by blind source separation.",
"abstract": "Eye movements, eye blinks, cardiac signals, muscle noise, and line noise present serious problems for electroencephalographic (EEG) interpretation and analysis when rejecting contaminated EEG segments results in an unacceptable data loss. Many methods have been proposed to remove artifacts from EEG recordings, especially those arising from eye movements and blinks. Often regression in the time or frequency domain is performed on parallel EEG and electrooculographic (EOG) recordings to derive parameters characterizing the appearance and spread of EOG artifacts in the EEG channels. Because EEG and ocular activity mix bidirectionally, regressing out eye artifacts inevitably involves subtracting relevant EEG signals from each record as well. Regression methods become even more problematic when a good regressing channel is not available for each artifact source, as in the case of muscle artifacts. Use of principal component analysis (PCA) has been proposed to remove eye artifacts from multichannel EEG. However, PCA cannot completely separate eye artifacts from brain signals, especially when they have comparable amplitudes. Here, we propose a new and generally applicable method for removing a wide variety of artifacts from EEG records based on blind source separation by independent component analysis (ICA). Our results on EEG data collected from normal and autistic subjects show that ICA can effectively detect, separate, and remove contamination from a wide variety of artifactual sources in EEG records with results comparing favorably with those obtained using regression and PCA methods. ICA can also be used to analyze blink-related brain activity."
}
] |
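To complement the wICA-style correction described in the record above (ICA decomposition followed by zeroing large wavelet coefficients of each component), the following Python sketch uses scikit-learn's FastICA and PyWavelets. The sym4 wavelet, the universal-threshold rule and the synthetic blink are assumptions for illustration; the record's actual method additionally restricts the correction to detected EOG activity regions and differs in its thresholding details.

# Minimal wICA-style sketch (illustrative, not the paper's exact method):
# ICA-decompose multichannel EEG, zero wavelet coefficients of each component
# whose magnitude exceeds a universal threshold, then rebuild the channels.
import numpy as np
import pywt
from sklearn.decomposition import FastICA

def wica_clean(eeg, wavelet="sym4", level=5, random_state=0):
    """eeg: array of shape (n_samples, n_channels). Returns cleaned EEG."""
    ica = FastICA(random_state=random_state, max_iter=1000)
    sources = ica.fit_transform(eeg)              # (n_samples, n_components)
    cleaned = np.empty_like(sources)
    n = sources.shape[0]
    for k in range(sources.shape[1]):
        coeffs = pywt.wavedec(sources[:, k], wavelet, level=level)
        sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise scale via MAD
        thr = sigma * np.sqrt(2.0 * np.log(n))           # universal threshold
        # zero the large coefficients, assumed to carry the EOG artifact
        coeffs = [np.where(np.abs(c) > thr, 0.0, c) for c in coeffs]
        cleaned[:, k] = pywt.waverec(coeffs, wavelet)[:n]
    return ica.inverse_transform(cleaned)         # back to channel space

# Toy usage: 4-channel synthetic EEG with a blink-like spike on channel 0.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((2000, 4))
eeg[1000:1040, 0] += 40.0                          # artificial "blink"
print(wica_clean(eeg).shape)                       # (2000, 4)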
Journal of Big Data | 31998599 | PMC6956914 | 10.1186/s40537-017-0106-3 | A computing platform for pairs-trading online implementation via a blended Kalman-HMM filtering approach | This paper addresses the problem of designing an efficient platform for pairs-trading implementation in real time. Capturing the stylised features of a spread process, i.e., the evolution of the differential between the returns from a pair of stocks, which exhibits heavy-tailed mean-reverting behaviour, is also dealt with. Likewise, the optimal recovery of time-varying parameters in a return-spread model is tackled. It is important to solve such issues in an integrated manner to carry out the execution of trading strategies in a dynamic market environment. The Kalman and hidden Markov model (HMM) multi-regime dynamic filtering approaches are fused together to provide a powerful method for pairs-trading actualisation. Practitioners' considerations are taken into account in the way the new filtering method is automated. The synthesis of the HMM's expectation–maximisation algorithm and the Kalman filtering procedure gives rise to a set of self-updating optimal parameter estimates. The method put forward in this paper is a hybridisation of signal-processing algorithms. It highlights the critical role and beneficial utility of data fusion methods. Its appropriateness and novelty support the advancement of accurate predictive analytics involving big financial data sets. The algorithm's performance is tested on the historical return spread between Coca-Cola and Pepsi Inc.'s equities. Through a back-testing trade, a hypothetical trader might earn a non-zero profit under the assumption of no transaction costs and bid-ask spreads. The method's success is illustrated by a trading simulation. The findings from this work show that there is high potential to gain when transaction fees are low, and an investor is able to benefit from the proposed interplay of the two filtering methods. | Background and related work A motivation of this work, within the context of big data (i.e., large financial data sets in real time), resembles that of Cerchiello et al. [8] and Ouahilal et al. [9], which is the development and evaluation of trading models and systems to support investors. In this paper, the idea of an automated algorithm proposed by Elliott and Krishnamurthy [3] is extended under a model with non-normal noise. An HMM is used to drive the dynamics of the model parameters; although the HMM in this discussion evolves in discrete time, it is also possible to modulate model parameters with an HMM in continuous time [10] with time-varying jump intensities. The HMM embedding in turn enables the capturing of the non-normality of returns [11], which is common in the spread portfolio. Lévy-type processes are noted to yield an excellent statistical fit to the non-normal asset-returns process, but they are difficult to interpret from a financial perspective. The novelty of the proposed approach lies in the modification and dynamic interplay of the filtering algorithms due to Elliott and Krishnamurthy [3] and Erlwein and Mamon [12], customised to support the implementation of trading strategies (a simple illustrative sketch of Kalman filtering for a mean-reverting spread is given after this record's reference list).
A mixture of normal models is employed and this features a couple of apparent advantages: (a) the modelling framework and methodology for the non-normal noise case depends only on a combination of already known and simple techniques for the normal case and (b) an easy economic interpretation could be put forward for the sudden changes in the behaviour of the process. From the practitioners’ point of view, the most desirable characteristic of the HMM-modulated model is its tractability for applications in the financial industry.Although pairs trading is deemed relatively safe for market or sector arbitrageurs, nothing is really safe from the price swings within a sector. Such price swings could obliterate a short-term profit, and in some cases, bankrupt a financially stable hedger; see for example, Goldstein [15]. Typically, this risk is dealt with by unwinding a short position and attempting to absorb minimal losses. Such a strategy is taken by aggressive high-frequency algorithmic traders as their positions are usually neutral at the end of each trading day. Alternatively, as described in Khandani and Lo [16], a trader needs to monitor closely a constantly changing reverting level of the portfolio. This is because exit from an unprofitable trade is necessary when reversion to the mean does not materialise. This clearly implies that spread modelling and generating of forecast error bounds are critical as they provide the foundation of decision-making on when trade exits must occur.The aim of this work complements that of [17], which uses a stochastic-control approach to pairs trading but with a numerical validation that relied on simulated data. In the case of this work, a method is developed that optimally and dynamically processes financial market signals thereby training model parameters so that they adapt well to market changes. This will precipitate a decrease in the size of the traders’ positions and consequently, will decrease potential risks as well. Akin to such an effect is the drawback that the hedger’s position, comprising of a short and a long leg within the spread, will also decrease the potential profit. This paper includes results from a numerical implementation that offer insights for a better understanding of how pairs trading works along with its financial implications. The integrated Kalman-HMM filtering algorithms could be interfaced with today’s computing technologies to support the successful pairs-trading implementation in the industry, which undoubtedly relies on the accurate modelling of the spread series. | [
"16709283"
] | [
{
"pmid": "16709283",
"title": "Selecting among three-mode principal component models of different types and complexities: a numerical convex hull based method.",
"abstract": "Several three-mode principal component models can be considered for the modelling of three-way, three-mode data, including the Candecomp/Parafac, Tucker3, Tucker2, and Tucker1 models. The following question then may be raised: given a specific data set, which of these models should be selected, and at what complexity (i.e. with how many components)? We address this question by proposing a numerical model selection heuristic based on a convex hull. Simulation results show that this heuristic performs almost perfectly, except for Tucker3 data arrays with at least one small mode and a relatively large amount of error."
}
] |
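As a concrete, stripped-down view of the Kalman half of the blended Kalman-HMM filter described in the record above, the following Python sketch filters a noisy observation of a mean-reverting (AR(1)-type) spread and flags large deviations of the filtered spread as candidate trade signals. The state equation, the fixed parameters (mu, phi, q, r) and the two-standard-deviation band are illustrative assumptions; the paper's actual method lets an HMM modulate such parameters and estimates them with the EM algorithm.

# Scalar Kalman filter for a noisy, mean-reverting spread (illustrative only).
import numpy as np

def kalman_spread(y, mu=0.0, phi=0.9, q=0.05, r=0.2):
    """y: observed spread series. Returns filtered state estimates.
    State model: x_t = mu + phi*(x_{t-1} - mu) + w_t,  w_t ~ N(0, q)
    Observation: y_t = x_t + v_t,                      v_t ~ N(0, r)"""
    n = len(y)
    x_est = np.zeros(n)
    x, p = y[0], 1.0                      # initial state estimate and variance
    for t in range(n):
        x_pred = mu + phi * (x - mu)      # predict
        p_pred = phi * phi * p + q
        k = p_pred / (p_pred + r)         # Kalman gain
        x = x_pred + k * (y[t] - x_pred)  # update with the new observation
        p = (1.0 - k) * p_pred
        x_est[t] = x
    return x_est

# Toy usage: simulate a mean-reverting spread, filter it, flag wide deviations.
rng = np.random.default_rng(1)
true = np.zeros(500)
for t in range(1, 500):
    true[t] = 0.9 * true[t - 1] + rng.normal(0, np.sqrt(0.05))
obs = true + rng.normal(0, np.sqrt(0.2), size=500)
filt = kalman_spread(obs)
band = 2 * filt.std()
signals = np.where(np.abs(filt) > band)[0]   # candidate entry points
print(len(signals), "candidate trade signals")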
Frontiers in Neuroscience | 31998065 | PMC6962136 | 10.3389/fnins.2019.01395 | A Multi-Task Representation Learning Architecture for Enhanced Graph Classification | Composed of nodes and edges, graph structured data are organized in a non-Euclidean geometric space and are ubiquitous, especially in chemical compounds, proteins, etc. They usually contain rich structure information, and how to effectively extract their inherent features is of great significance for the determination of function or traits in medicine and biology. Recently, there has been a growing interest in learning graph-level representations for graph classification. Existing graph classification strategies based on graph neural networks broadly follow a single-task learning framework and manage to learn graph-level representations through aggregating node-level representations. However, they lack the efficient utilization of the labels of nodes in a graph. In this paper, we propose a novel multi-task representation learning architecture coupled with the task of supervised node classification for enhanced graph classification. Specifically, the node classification task enforces node-level representations to take full advantage of the node labels available in the graph, and the graph classification task allows for learning graph-level representations in an end-to-end manner. Experimental results on multiple benchmark datasets demonstrate that the proposed architecture performs significantly better than various single-task graph neural network methods for graph classification. | 2. Related Work Representation learning (Bengio et al., 2013) has been widely utilized in various fields such as computer vision (Du and Wang, 2015; Butepage et al., 2017) and natural language processing (Janner et al., 2018). With the rapid development of biology, chemistry, and medical science, the microscopic structures of molecular compounds such as proteins and genes have received more attention. This kind of graph-structured data has attracted the interest of researchers in graph classification, and various methods have been presented to learn graph representations. Recently, a wide variety of GNN models have been proposed, including approaches inspired by convolutional neural networks (Defferrard et al., 2016; Kipf and Welling, 2016; Lei et al., 2017), recursive neural networks (Scarselli et al., 2008) and recurrent neural networks (Li et al., 2016). These methods have been applied to various tasks, such as graph classification (Dai et al., 2016; Zhang et al., 2018) and node classification (Kipf and Welling, 2016; Hamilton et al., 2017a). Instead of using hand-crafted features suited for specific tasks, deep learning techniques enable models to automatically learn features and representations for each node. In the context of graph classification, which is our main task, the major challenge is going from node embeddings to a representation of the entire graph. Most methods (Duvenaud et al., 2015; Li et al., 2016; Gilmer et al., 2017) have the limitation that they simply pool all the node embeddings in a single layer and do not learn hierarchical representations, so they are unable to capture the natural structures of large graphs. Some recent approaches have focused on alleviating this problem by adopting novel aggregation strategies. A recent study (Xu et al., 2019) developed theoretical foundations for reasoning about the expressive power of GNNs and presented the Graph Isomorphism Network (GIN) under the neighborhood aggregation framework.
They proved that GNNs are at most as powerful as the Weisfeiler-Lehman (WL) test in distinguishing graph structures, and showed that the discriminative power of GIN equals that of the WL test. They developed a "deep multisets" theory, which parameterizes universal multiset functions with neural networks; a multiset is a generalization of a set that allows its elements to have multiple instances. Besides, multi-layer perceptrons (MLPs) are utilized in the model so that different graph structures can be discriminated through the aggregation, combination and READOUT strategy. GIN updates node representations as
$$h_v^{(l)} = \mathrm{MLP}^{(l)}\Big((1+\epsilon^{(l)}) \cdot h_v^{(l-1)} + \sum_{u \in \mathcal{N}(v)} h_u^{(l-1)}\Big). \quad (1)$$
They applied the sum aggregator, which adds the features of all neighbors of the current node, and set the combination method to $(1+\epsilon^{(l)})$ in the $l$-th layer, so that all nodes can be effectively integrated and mapped to the next layer. As a theoretical framework, GIN outperforms popular GNN variants, while other researchers focus on coarsening the input graph, inspired by the pooling operation in convolutional neural networks.
DIFFPOOL (Ying et al., 2018) is a differentiable graph pooling module that can be adapted to various GNN architectures in a hierarchical and end-to-end fashion. DIFFPOOL learns a cluster assignment for nodes at each layer, which then forms the coarsened input for the next layer, and it is able to extract the complex hierarchical structure of graphs. Given the input adjacency matrix and node embedding matrix, the DIFFPOOL layer coarsens the input graph and generates a coarsened adjacency matrix as well as a new embedding matrix for the nodes or clusters of the coarsened graph. In particular, they applied the two following equations:
$$X^{(l+1)} = {S^{(l)}}^{\top} Z^{(l)} \in \mathbb{R}^{n_{l+1} \times d}, \quad (2)$$
$$A^{(l+1)} = {S^{(l)}}^{\top} A^{(l)} S^{(l)} \in \mathbb{R}^{n_{l+1} \times n_{l+1}}, \quad (3)$$
where $A^{(l)}$ represents the adjacency matrix at layer $l$, and $Z^{(l)}$ and $X^{(l+1)}$ denote the input node embedding matrix and the new cluster embedding matrix, respectively. $S^{(l)}$ is the probabilistic assignment matrix that assigns each node at layer $l$ to a specific cluster in the next coarsened layer $l+1$. Each row of $S^{(l)}$ corresponds to a node or cluster at layer $l$, and each column corresponds to a target cluster at layer $l+1$. The assignment matrix is generated by the pooling GNN from the input cluster features $X^{(l)}$ and the cluster adjacency matrix $A^{(l)}$:
$$S^{(l)} = \mathrm{softmax}\big(\mathrm{GNN}_{l,\mathrm{pool}}(A^{(l)}, X^{(l)})\big), \quad (4)$$
where the softmax function is applied in a row-wise fashion. The output dimension of $\mathrm{GNN}_{l,\mathrm{pool}}$ is a pre-defined hyperparameter of the model, which corresponds to the maximum number of clusters in each layer. Besides, the embedding GNN is a standard GNN module applied to $A^{(l)}$ and $X^{(l)}$:
$$Z^{(l)} = \mathrm{GNN}_{l,\mathrm{embed}}(A^{(l)}, X^{(l)}). \quad (5)$$
The coarsened adjacency matrix $A^{(l+1)}$ from Equation (3) and the pooled cluster features $X^{(l+1)}$ from Equation (2) are then passed through a standard GNN to obtain new embeddings for the cluster nodes. GIN and DIFFPOOL learn to discriminate and capture the meaningful structure of graphs through aggregation and pooling, respectively, and both are powerful for the graph classification task.
In many real-world applications, such as network analysis and molecule classification, the input data is observed with only a fraction of labeled graphs and labeled nodes. Thus it is desirable for the model to predict the labels of graphs and nodes simultaneously in a multi-task learning setting.
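To make the mechanisms above concrete, here is a minimal PyTorch-style sketch, not the authors' released implementation: a GIN-style update corresponding to Equation (1), a DiffPool-style coarsening corresponding to Equations (2)-(5) using dense adjacency matrices, and a joint graph-level plus node-level cross-entropy objective in the spirit of the multi-task setting just mentioned. Class names, layer sizes and the trade-off weight alpha are illustrative assumptions.

# Minimal sketch of a GIN update (Eq. 1), a DiffPool-style coarsening (Eqs. 2-5)
# and a joint graph/node objective. Dense adjacency matrices are used for clarity.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GINLayer(nn.Module):
    """h_v <- MLP((1 + eps) * h_v + sum of neighbor features), cf. Equation (1)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))  # learnable epsilon of this layer
        self.mlp = nn.Sequential(nn.Linear(in_dim, out_dim), nn.ReLU(),
                                 nn.Linear(out_dim, out_dim))

    def forward(self, A, H):
        # A: (n, n) adjacency without self-loops, H: (n, d) node features.
        # A @ H sums the features of each node's neighbors.
        return self.mlp((1.0 + self.eps) * H + A @ H)


class DiffPoolLayer(nn.Module):
    """Coarsens (A, X) into (A', X') via a soft cluster assignment S, cf. Equations (2)-(4)."""

    def __init__(self, in_dim, hidden_dim, n_clusters):
        super().__init__()
        self.gnn_embed = GINLayer(in_dim, hidden_dim)   # Equation (5): Z = GNN_embed(A, X)
        self.gnn_pool = GINLayer(in_dim, n_clusters)    # Equation (4), before the row-wise softmax

    def forward(self, A, X):
        Z = self.gnn_embed(A, X)
        S = torch.softmax(self.gnn_pool(A, X), dim=-1)  # (n, n_clusters), rows sum to 1
        X_new = S.t() @ Z                               # Equation (2)
        A_new = S.t() @ A @ S                           # Equation (3)
        return A_new, X_new


class MultiTaskGraphModel(nn.Module):
    """A shared GIN encoder feeding a node-classification head and a graph-classification head."""

    def __init__(self, in_dim, hidden_dim, n_node_classes, n_graph_classes, n_clusters=8):
        super().__init__()
        self.encoder = GINLayer(in_dim, hidden_dim)
        self.node_head = nn.Linear(hidden_dim, n_node_classes)
        self.pool = DiffPoolLayer(hidden_dim, hidden_dim, n_clusters)
        self.graph_head = nn.Linear(hidden_dim, n_graph_classes)

    def forward(self, A, X):
        H = self.encoder(A, X)                          # shared node-level representations
        node_logits = self.node_head(H)                 # supervised node classification task
        A_c, X_c = self.pool(A, H)                      # hierarchical coarsening of the graph
        graph_logits = self.graph_head(X_c.sum(dim=0))  # simple sum READOUT over the clusters
        return node_logits, graph_logits


def multitask_loss(node_logits, node_labels, graph_logits, graph_label, alpha=0.5):
    # Weighted sum of the two cross-entropy terms; alpha trades off the auxiliary node task.
    loss_node = F.cross_entropy(node_logits, node_labels)
    loss_graph = F.cross_entropy(graph_logits.unsqueeze(0), graph_label.view(1))
    return loss_graph + alpha * loss_node


# Toy usage: a random undirected graph with 10 nodes and 16-dimensional features.
A = (torch.rand(10, 10) > 0.7).float()
A = ((A + A.t()) > 0).float()
A.fill_diagonal_(0)
X = torch.randn(10, 16)
model = MultiTaskGraphModel(in_dim=16, hidden_dim=32, n_node_classes=3, n_graph_classes=2)
node_logits, graph_logits = model(A, X)
loss = multitask_loss(node_logits, torch.randint(0, 3, (10,)), graph_logits, torch.tensor(1))

In practice one would use sparse adjacency structures and mini-batches of graphs, but the dense form keeps the correspondence to Equations (1)-(5) explicit.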
Multi-task learning (MTL) refers to the paradigm of learning several related tasks together, and it has been broadly used in natural language processing (Chen et al., 2018; Schulz et al., 2018; Sanh et al., 2019), computer vision (Choi et al., 2018; Kendall et al., 2018; Liu et al., 2019) and genomics (Yang et al., 2018). To be specific, SaEF-AKT (Huang et al., 2019) introduces a general similarity measure and an adaptive knowledge transfer mechanism to assist knowledge transfer among tasks. EMT (evolutionary multitasking) via autoencoding (Feng et al., 2018) allows the incorporation of multiple search mechanisms with different biases into the EMT paradigm. MTL is inspired by human learning, in which people transfer knowledge learned from previous problems to facilitate learning a new task. Similarly, in multi-task machine learning, the knowledge contained in one problem can be leveraged by related problems. A main assumption of MTL is that there is an optimal shared parameter space for all problems, which is regularized by a specific loss, by manually defined relationships, or by other automatic methods that estimate the latent structure of relationships among problems. Because these shared processes give rise to strong dependencies among tasks, the MTL approach is able to explore and leverage the commonalities among related tasks during learning. | [
"23787338",
"15961493",
"1995902",
"12850146",
"29994415",
"19068426",
"14681450",
"12835260",
"29844324"
] | [
{
"pmid": "23787338",
"title": "Representation learning: a review and new perspectives.",
"abstract": "The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning."
},
{
"pmid": "15961493",
"title": "Protein function prediction via graph kernels.",
"abstract": "MOTIVATION\nComputational approaches to protein function prediction infer protein function by finding proteins with similar sequence, structure, surface clefts, chemical properties, amino acid motifs, interaction partners or phylogenetic profiles. We present a new approach that combines sequential, structural and chemical information into one graph model of proteins. We predict functional class membership of enzymes and non-enzymes using graph kernels and support vector machine classification on these protein graphs.\n\n\nRESULTS\nOur graph model, derivable from protein sequence and structure only, is competitive with vector models that require additional protein information, such as the size of surface pockets. If we include this extra information into our graph model, our classifier yields significantly higher accuracy levels than the vector models. Hyperkernels allow us to select and to optimally combine the most relevant node attributes in our protein graphs. We have laid the foundation for a protein function prediction system that integrates protein information from various sources efficiently and effectively.\n\n\nAVAILABILITY\nMore information available via www.dbs.ifi.lmu.de/Mitarbeiter/borgwardt.html."
},
{
"pmid": "1995902",
"title": "Structure-activity relationship of mutagenic aromatic and heteroaromatic nitro compounds. Correlation with molecular orbital energies and hydrophobicity.",
"abstract": "A review of the literature yielded data on over 200 aromatic and heteroaromatic nitro compounds tested for mutagenicity in the Ames test using S. typhimurium TA98. From the data, a quantitative structure-activity relationship (QSAR) has been derived for 188 congeners. The main determinants of mutagenicity are the hydrophobicity (modeled by octanol/water partition coefficients) and the energies of the lowest unoccupied molecular orbitals calculated using the AM1 method. It is also shown that chemicals possessing three or more fused rings possess much greater mutagenic potency than compounds with one or two fused rings. Since the QSAR is based on a very wide range in structural variation, aromatic rings from benzene to coronene are included as well as many different types of heterocycles, it is a significant step toward a predictive toxicology of value in the design of less mutagenic bioactive compounds."
},
{
"pmid": "12850146",
"title": "Distinguishing enzyme structures from non-enzymes without alignments.",
"abstract": "The ability to predict protein function from structure is becoming increasingly important as the number of structures resolved is growing more rapidly than our capacity to study function. Current methods for predicting protein function are mostly reliant on identifying a similar protein of known function. For proteins that are highly dissimilar or are only similar to proteins also lacking functional annotations, these methods fail. Here, we show that protein function can be predicted as enzymatic or not without resorting to alignments. We describe 1178 high-resolution proteins in a structurally non-redundant subset of the Protein Data Bank using simple features such as secondary-structure content, amino acid propensities, surface properties and ligands. The subset is split into two functional groupings, enzymes and non-enzymes. We use the support vector machine-learning algorithm to develop models that are capable of assigning the protein class. Validation of the method shows that the function can be predicted to an accuracy of 77% using 52 features to describe each protein. An adaptive search of possible subsets of features produces a simplified model based on 36 features that predicts at an accuracy of 80%. We compare the method to sequence-based methods that also avoid calculating alignments and predict a recently released set of unrelated proteins. The most useful features for distinguishing enzymes from non-enzymes are secondary-structure content, amino acid frequencies, number of disulphide bonds and size of the largest cleft. This method is applicable to any structure as it does not require the identification of sequence or structural similarity to a protein of known function."
},
{
"pmid": "29994415",
"title": "Evolutionary Multitasking via Explicit Autoencoding.",
"abstract": "Evolutionary multitasking (EMT) is an emerging research topic in the field of evolutionary computation. In contrast to the traditional single-task evolutionary search, EMT conducts evolutionary search on multiple tasks simultaneously. It aims to improve convergence characteristics across multiple optimization problems at once by seamlessly transferring knowledge among them. Due to the efficacy of EMT, it has attracted lots of research attentions and several EMT algorithms have been proposed in the literature. However, existing EMT algorithms are usually based on a common mode of knowledge transfer in the form of implicit genetic transfer through chromosomal crossover. This mode cannot make use of multiple biases embedded in different evolutionary search operators, which could give better search performance when properly harnessed. Keeping this in mind, this paper proposes an EMT algorithm with explicit genetic transfer across tasks, namely EMT via autoencoding, which allows the incorporation of multiple search mechanisms with different biases in the EMT paradigm. To confirm the efficacy of the proposed EMT algorithm with explicit autoencoding, comprehensive empirical studies have been conducted on both the single- and multi-objective multitask optimization problems."
},
{
"pmid": "19068426",
"title": "The graph neural network model.",
"abstract": "Many underlying relationships among data in several areas of science and engineering, e.g., computer vision, molecular chemistry, molecular biology, pattern recognition, and data mining, can be represented in terms of graphs. In this paper, we propose a new neural network model, called graph neural network (GNN) model, that extends existing neural network methods for processing the data represented in graph domains. This GNN model, which can directly process most of the practically useful types of graphs, e.g., acyclic, cyclic, directed, and undirected, implements a function tau(G,n) is an element of IR(m) that maps a graph G and one of its nodes n into an m-dimensional Euclidean space. A supervised learning algorithm is derived to estimate the parameters of the proposed GNN model. The computational cost of the proposed algorithm is also considered. Some experimental results are shown to validate the proposed learning algorithm, and to demonstrate its generalization capabilities."
},
{
"pmid": "14681450",
"title": "BRENDA, the enzyme database: updates and major new developments.",
"abstract": "BRENDA (BRaunschweig ENzyme DAtabase) represents a comprehensive collection of enzyme and metabolic information, based on primary literature. The database contains data from at least 83,000 different enzymes from 9800 different organisms, classified in approximately 4200 EC numbers. BRENDA includes biochemical and molecular information on classification and nomenclature, reaction and specificity, functional parameters, occurrence, enzyme structure, application, engineering, stability, disease, isolation and preparation, links and literature references. The data are extracted and evaluated from approximately 46,000 references, which are linked to PubMed as long as the reference is cited in PubMed. In the past year BRENDA has undergone major changes including a large increase in updating speed with >50% of all data updated in 2002 or in the first half of 2003, the development of a new EC-tree browser, a taxonomy-tree browser, a chemical substructure search engine for ligand structure, the development of controlled vocabulary, an ontology for some information fields and a thesaurus for ligand names. The database is accessible free of charge to the academic community at http://www.brenda. uni-koeln.de."
},
{
"pmid": "12835260",
"title": "Statistical evaluation of the Predictive Toxicology Challenge 2000-2001.",
"abstract": "MOTIVATION\nThe development of in silico models to predict chemical carcinogenesis from molecular structure would help greatly to prevent environmentally caused cancers. The Predictive Toxicology Challenge (PTC) competition was organized to test the state-of-the-art in applying machine learning to form such predictive models.\n\n\nRESULTS\nFourteen machine learning groups generated 111 models. The use of Receiver Operating Characteristic (ROC) space allowed the models to be uniformly compared regardless of the error cost function. We developed a statistical method to test if a model performs significantly better than random in ROC space. Using this test as criteria five models performed better than random guessing at a significance level p of 0.05 (not corrected for multiple testing). Statistically the best predictor was the Viniti model for female mice, with p value below 0.002. The toxicologically most interesting models were Leuven2 for male mice, and Kwansei for female rats. These models performed well in the statistical analysis and they are in the middle of ROC space, i.e. distant from extreme cost assumptions. These predictive models were also independently judged by domain experts to be among the three most interesting, and are believed to include a small but significant amount of empirically learned toxicological knowledge.\n\n\nAVAILABILITY\nPTC details and data can be found at: http://www.predictive-toxicology.org/ptc/."
},
{
"pmid": "29844324",
"title": "Linking drug target and pathway activation for effective therapy using multi-task learning.",
"abstract": "Despite the abundance of large-scale molecular and drug-response data, the insights gained about the mechanisms underlying treatment efficacy in cancer has been in general limited. Machine learning algorithms applied to those datasets most often are used to provide predictions without interpretation, or reveal single drug-gene association and fail to derive robust insights. We propose to use Macau, a bayesian multitask multi-relational algorithm to generalize from individual drugs and genes and explore the interactions between the drug targets and signaling pathways' activation. A typical insight would be: \"Activation of pathway Y will confer sensitivity to any drug targeting protein X\". We applied our methodology to the Genomics of Drug Sensitivity in Cancer (GDSC) screening, using gene expression of 990 cancer cell lines, activity scores of 11 signaling pathways derived from the tool PROGENy as cell line input and 228 nominal targets for 265 drugs as drug input. These interactions can guide a tissue-specific combination treatment strategy, for example suggesting to modulate a certain pathway to maximize the drug response for a given tissue. We confirmed in literature drug combination strategies derived from our result for brain, skin and stomach tissues. Such an analysis of interactions across tissues might help target discovery, drug repurposing and patient stratification strategies."
}
] |
Scientific Reports | 31941918 | PMC6962320 | 10.1038/s41598-019-56989-5 | Classification of Interstitial Lung Abnormality Patterns with an Ensemble of Deep Convolutional Neural Networks | Subtle interstitial changes in the lung parenchyma of smokers, known as Interstitial Lung Abnormalities (ILA), have been associated with clinical outcomes, including mortality, even in the absence of Interstitial Lung Disease (ILD). Although several methods have been proposed for the automatic identification of more advanced ILD patterns, few have tackled ILA, which likely precedes the development of ILD in some cases. In this context, we propose a novel methodology for automated identification and classification of ILA patterns in computed tomography (CT) images. The proposed method is an ensemble of deep convolutional neural networks (CNNs) that detects more discriminative features by incorporating two-, two-and-a-half- and three-dimensional architectures, thereby enabling more accurate classification. This technique is implemented by first training each individual CNN and then combining their output responses to form the overall ensemble output. To train and test the system, we used 37424 radiographic tissue samples corresponding to eight different parenchymal feature classes from 208 CT scans. The resulting ensemble performance, including an average sensitivity of 91.41% and an average specificity of 98.18%, suggests it is potentially a viable method to identify radiographic patterns that precede the development of ILD. | Related work
There have been several approaches to computer-aided analysis and automated identification and classification of ILD using CT images. For example, in patients with IPF, densitometric measures, such as the skewness, kurtosis and mean of the histogram of lung density, have been found to be associated with outcomes such as pulmonary function and transplant-free survival11,12. Hand-crafted texture features such as gray level co-occurrence matrices (GLCM), run length matrices (RLM) and fractal analysis have also been widely used13–15 for classification of patterns of interstitial change, as have local binary patterns (LBP)16 and local histogram-based measures10,17. For example, Park et al. proposed a CAD scheme to detect, but not classify, early ILD in low-dose CT images of 30 patients based on hand-crafted features such as RLM and histogram features18.
More recent approaches have proposed the use of features learned directly from the data instead of hand-crafted ones. Some of these methods are based on unsupervised learning algorithms such as restricted Boltzmann machines (RBM)19 or k-means and k-SVD to construct a set of learned features20,21.
Multiple deep learning approaches have been used to prognosticate and characterize disease. Convolutional neural networks (CNNs) have been trained for a number of purposes, including characterization of disease severity and prediction of clinical outcomes22, pulmonary artery-vein separation23, biomarker regression24, pulmonary fissure detection25 and emphysema classification26,27 using CT data from the COPDGene study, a large multi-center study with over 10,000 subjects.
For ILD characterization, a CNN-based method has been proposed for the automatic classification of patients with fibrotic lung disease28, as well as a method for classifying the image-level label of ILD types29.
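As an illustration of the densitometric measures mentioned above (mean, skewness and kurtosis of the lung density histogram), the following is a minimal sketch rather than code from any of the cited studies. It assumes a CT volume given as a NumPy array of Hounsfield units together with a boolean lung mask of the same shape; the function name, the toy volume and the -950 HU low-attenuation fraction are illustrative additions.

# Minimal sketch of first-order densitometric features over a segmented lung region.
import numpy as np
from scipy.stats import skew, kurtosis


def lung_density_histogram_features(ct_hu, lung_mask):
    """Return simple first-order densitometric features over the segmented lung."""
    voxels = ct_hu[lung_mask].astype(np.float64)           # attenuation values inside the lung
    return {
        "mean_hu": float(np.mean(voxels)),                 # mean lung attenuation
        "skewness": float(skew(voxels)),                   # asymmetry of the density histogram
        "kurtosis": float(kurtosis(voxels)),               # excess (Fisher) kurtosis, SciPy default
        "laa950_fraction": float(np.mean(voxels < -950)),  # fraction of voxels below -950 HU
    }


# Toy example: a 64x64x64 volume with a cubic "lung" region of air-like attenuation values.
rng = np.random.default_rng(0)
ct_hu = np.full((64, 64, 64), 50.0)
lung_mask = np.zeros(ct_hu.shape, dtype=bool)
lung_mask[8:56, 8:56, 8:56] = True
ct_hu[lung_mask] = rng.normal(-850.0, 60.0, size=int(lung_mask.sum()))
print(lung_density_histogram_features(ct_hu, lung_mask))

Such global first-order features are inexpensive to compute and are the kind of measures that the texture-based and CNN-based approaches discussed here aim to refine with local, pattern-specific information.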
There has also been an attempt to classify whole lung slices in a holistic manner using a pre-trained and fine-tuned AlexNet architecture30. However, a holistic classification of a whole slice with a single label does not account for multiple tissue subtypes in the same slice, and thus only provides a rough quantification of the disease. Regarding ILD radiographic subtype classification, other recent notable approaches include the use of a variety of CNNs. These include CNNs that classify 2D patches as interstitial patterns, with or without pre-training on a variety of texture datasets31–33, as well as the use of well-known architectures such as AlexNet and GoogLeNet pre-trained on ImageNet34.
One potential approach for improving the performance of all of these techniques is to combine multiple classifiers into an ensemble. While this has not been attempted for ILD classification, others have used this approach for the identification of other lung structures such as nodules. These efforts have included the combination of multiple CNNs with the same configuration, each taking a different nodule view as input35, and the combination of MLP, KNN and SVM classifiers applied to hand-crafted features for nodule diagnosis36.
In this work we propose the first deep learning-based method to identify and classify radiographic patterns of ILA, which likely represents early or subtle ILD in some cases; this implies the characterization of 8 different parenchymal feature types. We propose a methodology based on an ensemble of specific 2D, 2.5D and 3D CNN architectures, including multi-context and multi-stage architectures, to tackle subtle parenchymal patterns. We demonstrate its superiority over previous methods designed for late-stage interstitial diseases, revealing the need for specific designs and research to tackle ILA properly. Additionally, this work precisely defines how to construct and optimize an ensemble of different CNN architectures, which has rarely been addressed. | [
"21388308",
"19542480",
"19781963",
"22366047",
"23692170",
"21506741",
"20935110",
"24836310",
"27989445",
"21412102",
"25728361",
"21603289",
"19864701",
"21311944",
"20129855",
"21263171",
"28892454",
"29993996",
"30106711",
"29623248",
"26955021",
"29043528",
"26886976",
"26955024",
"20214461",
"24003531",
"19056717",
"26441412",
"28113302"
] | [
{
"pmid": "21388308",
"title": "Lung volumes and emphysema in smokers with interstitial lung abnormalities.",
"abstract": "BACKGROUND\nCigarette smoking is associated with emphysema and radiographic interstitial lung abnormalities. The degree to which interstitial lung abnormalities are associated with reduced total lung capacity and the extent of emphysema is not known.\n\n\nMETHODS\nWe looked for interstitial lung abnormalities in 2416 (96%) of 2508 high-resolution computed tomographic (HRCT) scans of the lung obtained from a cohort of smokers. We used linear and logistic regression to evaluate the associations between interstitial lung abnormalities and HRCT measurements of total lung capacity and emphysema.\n\n\nRESULTS\nInterstitial lung abnormalities were present in 194 (8%) of the 2416 HRCT scans evaluated. In statistical models adjusting for relevant covariates, interstitial lung abnormalities were associated with reduced total lung capacity (-0.444 liters; 95% confidence interval [CI], -0.596 to -0.292; P<0.001) and a lower percentage of emphysema defined by lung-attenuation thresholds of -950 Hounsfield units (-3%; 95% CI, -4 to -2; P<0.001) and -910 Hounsfield units (-10%; 95% CI, -12 to -8; P<0.001). As compared with participants without interstitial lung abnormalities, those with abnormalities were more likely to have a restrictive lung deficit (total lung capacity <80% of the predicted value; odds ratio, 2.3; 95% CI, 1.4 to 3.7; P<0.001) and were less likely to meet the diagnostic criteria for chronic obstructive pulmonary disease (COPD) (odds ratio, 0.53; 95% CI, 0.37 to 0.76; P<0.001). The effect of interstitial lung abnormalities on total lung capacity and emphysema was dependent on COPD status (P<0.02 for the interactions). Interstitial lung abnormalities were positively associated with both greater exposure to tobacco smoke and current smoking.\n\n\nCONCLUSIONS\nIn smokers, interstitial lung abnormalities--which were present on about 1 of every 12 HRCT scans--were associated with reduced total lung capacity and a lesser amount of emphysema. (Funded by the National Institutes of Health and the Parker B. Francis Foundation; ClinicalTrials.gov number, NCT00608764.)."
},
{
"pmid": "19542480",
"title": "Cigarette smoking is associated with subclinical parenchymal lung disease: the Multi-Ethnic Study of Atherosclerosis (MESA)-lung study.",
"abstract": "RATIONALE\nCigarette smoking is a risk factor for diffuse parenchymal lung disease. Risk factors for subclinical parenchymal lung disease have not been described.\n\n\nOBJECTIVES\nTo determine if cigarette smoking is associated with subclinical parenchymal lung disease, as measured by spirometric restriction and regions of high attenuation on computed tomography (CT) imaging.\n\n\nMETHODS\nWe examined 2,563 adults without airflow obstruction or clinical cardiovascular disease in the Multi-Ethnic Study of Atherosclerosis, a population-based cohort sampled from six communities in the United States. Cumulative and current cigarette smoking were assessed by pack-years and urine cotinine, respectively. Spirometric restriction was defined as a forced vital capacity less than the lower limit of normal. High attenuation areas on the lung fields of cardiac CT scans were defined as regions having an attenuation between -600 and -250 Hounsfield units, reflecting ground-glass and reticular abnormalities. Generalized additive models were used to adjust for age, gender, race/ethnicity, smoking status, anthropometrics, center, and CT scan parameters.\n\n\nMEASUREMENTS AND MAIN RESULTS\nThe prevalence of spirometric restriction was 10.0% (95% confidence interval [CI], 8.9-11.2%) and increased relatively by 8% (95% CI, 3-12%) for each 10 cigarette pack-years in multivariate analysis. The median volume of high attenuation areas was 119 cm(3) (interquartile range, 100-143 cm(3)). The volume of high attenuation areas increased by 1.6 cm(3) (95% CI, 0.9-2.4 cm(3)) for each 10 cigarette pack-years in multivariate analysis.\n\n\nCONCLUSIONS\nSmoking may cause subclinical parenchymal lung disease detectable by spirometry and CT imaging, even among a generally healthy cohort."
},
{
"pmid": "19781963",
"title": "Identification of early interstitial lung disease in smokers from the COPDGene Study.",
"abstract": "RATIONALE AND OBJECTIVES\nThe aim of this study is to compare two subjective methods for the identification of changes suggestive of early interstitial lung disease (ILD) on chest computed tomographic (CT) scans.\n\n\nMATERIALS AND METHODS\nThe CT scans of the first 100 subjects enrolled in the COPDGene Study from a single institution were examined using a sequential reader and a group consensus interpretation scheme. CT scans were evaluated for the presence of parenchymal changes consistent with ILD using the following scoring system: 0 = normal, 1 = equivocal for the presence of ILD, 2 = highly suspicious for ILD, and 3 = classic ILD changes. A statistical comparison of patients with early ILD to normal subjects was performed.\n\n\nRESULTS\nThere was a high degree of agreement between methods (kappa = 0.84; 95% confidence interval, 0.73-0.94; P < .0001 for the sequential and consensus methods). The sequential reading method had both high positive (1.0) and negative (0.97) predictive values for a consensus read despite a 58% reduction in the number of chest CT evaluations. Regardless of interpretation method, the prevalence of chest CT changes consistent with early ILD in this subset of smokers from COPDGene varied between 5% and 10%. Subjects with early ILD tended to have greater tobacco smoke exposure than subjects without early ILD (P = .053).\n\n\nCONCLUSIONS\nA sequential CT interpretation scheme is an efficient method for the visual interpretation of CT data. Further investigation is required to independently confirm our findings and further characterize early ILD in smokers."
},
{
"pmid": "22366047",
"title": "Subclinical interstitial lung disease: why you should care.",
"abstract": "The widespread use of high-resolution computed tomography in clinical and research settings has increased the detection of interstitial lung abnormalities (ILA) in asymptomatic and undiagnosed individuals. We reported that in smokers, ILA were present in about 1 of every 12 high-resolution computed tomographic scans; however, the long-term significance of these subclinical changes remains unclear. Studies in families affected with pulmonary fibrosis, smokers with chronic obstructive pulmonary disease, and patients with inflammatory lung disease have shown that asymptomatic and undiagnosed individuals with ILA have reductions in lung volume, functional limitations, increased pulmonary symptoms, histopathologic changes, and molecular profiles similar to those observed in patients with clinically significant interstitial lung disease (ILD). These findings suggest that, in select at-risk populations, ILA may represent early stages of pulmonary fibrosis or subclinical ILD. The growing interest surrounding this topic is motivated by our poor understanding of the inciting events and natural history of ILD, coupled with a lack of effective therapies. In this perspective, we outline past and current research focused on validating radiologic, physiological, and molecular methods to detect subclinical ILD. We discuss the limitations of the available cross-sectional studies and the need for future longitudinal studies to determine the prognostic and therapeutic implications of subclinical ILD in populations at risk of developing clinically significant ILD."
},
{
"pmid": "23692170",
"title": "MUC5B promoter polymorphism and interstitial lung abnormalities.",
"abstract": "BACKGROUND\nA common promoter polymorphism (rs35705950) in MUC5B, the gene encoding mucin 5B, is associated with idiopathic pulmonary fibrosis. It is not known whether this polymorphism is associated with interstitial lung disease in the general population.\n\n\nMETHODS\nWe performed a blinded assessment of interstitial lung abnormalities detected in 2633 participants in the Framingham Heart Study by means of volumetric chest computed tomography (CT). We evaluated the relationship between the abnormalities and the genotype at the rs35705950 locus.\n\n\nRESULTS\nOf the 2633 chest CT scans that were evaluated, interstitial lung abnormalities were present in 177 (7%). Participants with such abnormalities were more likely to have shortness of breath and chronic cough and reduced measures of total lung and diffusion capacity, as compared with participants without such abnormalities. After adjustment for covariates, for each copy of the minor rs35705950 allele, the odds of interstitial lung abnormalities were 2.8 times greater (95% confidence interval [CI], 2.0 to 3.9; P<0.001), and the odds of definite CT evidence of pulmonary fibrosis were 6.3 times greater (95% CI, 3.1 to 12.7; P<0.001). Although the evidence of an association between the MUC5B genotype and interstitial lung abnormalities was greater among participants who were older than 50 years of age, a history of cigarette smoking did not appear to influence the association.\n\n\nCONCLUSIONS\nThe MUC5B promoter polymorphism was found to be associated with interstitial lung disease in the general population. Although this association was more apparent in older persons, it did not appear to be influenced by cigarette smoking. (Funded by the National Institutes of Health and others; ClinicalTrials.gov number, NCT00005121.)."
},
{
"pmid": "21506741",
"title": "A common MUC5B promoter polymorphism and pulmonary fibrosis.",
"abstract": "BACKGROUND\nThe mutations that have been implicated in pulmonary fibrosis account for only a small proportion of the population risk.\n\n\nMETHODS\nUsing a genomewide linkage scan, we detected linkage between idiopathic interstitial pneumonia and a 3.4-Mb region of chromosome 11p15 in 82 families. We then evaluated genetic variation in this region in gel-forming mucin genes expressed in the lung among 83 subjects with familial interstitial pneumonia, 492 subjects with idiopathic pulmonary fibrosis, and 322 controls. MUC5B expression was assessed in lung tissue.\n\n\nRESULTS\nLinkage and fine mapping were used to identify a region of interest on the p-terminus of chromosome 11 that included gel-forming mucin genes. The minor-allele of the single-nucleotide polymorphism (SNP) rs35705950, located 3 kb upstream of the MUC5B transcription start site, was present at a frequency of 34% among subjects with familial interstitial pneumonia, 38% among subjects with idiopathic pulmonary fibrosis, and 9% among controls (allelic association with familial interstitial pneumonia, P=1.2×10(-15); allelic association with idiopathic pulmonary fibrosis, P=2.5×10(-37)). The odds ratios for disease among subjects who were heterozygous and those who were homozygous for the minor allele of this SNP were 6.8 (95% confidence interval [CI], 3.9 to 12.0) and 20.8 (95% CI, 3.8 to 113.7), respectively, for familial interstitial pneumonia and 9.0 (95% CI, 6.2 to 13.1) and 21.8 (95% CI, 5.1 to 93.5), respectively, for idiopathic pulmonary fibrosis. MUC5B expression in the lung was 14.1 times as high in subjects who had idiopathic pulmonary fibrosis as in those who did not (P<0.001). The variant allele of rs35705950 was associated with up-regulation in MUC5B expression in the lung in unaffected subjects (expression was 37.4 times as high as in unaffected subjects homozygous for the wild-type allele, P<0.001). MUC5B protein was expressed in lesions of idiopathic pulmonary fibrosis.\n\n\nCONCLUSIONS\nA common polymorphism in the promoter of MUC5B is associated with familial interstitial pneumonia and idiopathic pulmonary fibrosis. Our findings suggest that dysregulated MUC5B expression in the lung may be involved in the pathogenesis of pulmonary fibrosis. (Funded by the National Heart, Lung, and Blood Institute and others.)."
},
{
"pmid": "20935110",
"title": "Clinical course and prediction of survival in idiopathic pulmonary fibrosis.",
"abstract": "Idiopathic pulmonary fibrosis (IPF) is a progressive, life-threatening, interstitial lung disease of unknown etiology. The median survival of patients with IPF is only 2 to 3 years, yet some patients live much longer. Respiratory failure resulting from disease progression is the most frequent cause of death. To date we have limited information as to predictors of mortality in patients with IPF, and research in this area has failed to yield prediction models that can be reliably used in clinical practice to predict individual risk of mortality. The goal of this concise clinical review is to examine and summarize the current data on the clinical course, individual predictors of survival, and proposed clinical prediction models in IPF. Finally, we will discuss challenges and future directions related to predicting survival in IPF."
},
{
"pmid": "24836310",
"title": "Efficacy and safety of nintedanib in idiopathic pulmonary fibrosis.",
"abstract": "BACKGROUND\nNintedanib (formerly known as BIBF 1120) is an intracellular inhibitor that targets multiple tyrosine kinases. A phase 2 trial suggested that treatment with 150 mg of nintedanib twice daily reduced lung-function decline and acute exacerbations in patients with idiopathic pulmonary fibrosis.\n\n\nMETHODS\nWe conducted two replicate 52-week, randomized, double-blind, phase 3 trials (INPULSIS-1 and INPULSIS-2) to evaluate the efficacy and safety of 150 mg of nintedanib twice daily as compared with placebo in patients with idiopathic pulmonary fibrosis. The primary end point was the annual rate of decline in forced vital capacity (FVC). Key secondary end points were the time to the first acute exacerbation and the change from baseline in the total score on the St. George's Respiratory Questionnaire, both assessed over a 52-week period.\n\n\nRESULTS\nA total of 1066 patients were randomly assigned in a 3:2 ratio to receive nintedanib or placebo. The adjusted annual rate of change in FVC was -114.7 ml with nintedanib versus -239.9 ml with placebo (difference, 125.3 ml; 95% confidence interval [CI], 77.7 to 172.8; P<0.001) in INPULSIS-1 and -113.6 ml with nintedanib versus -207.3 ml with placebo (difference, 93.7 ml; 95% CI, 44.8 to 142.7; P<0.001) in INPULSIS-2. In INPULSIS-1, there was no significant difference between the nintedanib and placebo groups in the time to the first acute exacerbation (hazard ratio with nintedanib, 1.15; 95% CI, 0.54 to 2.42; P=0.67); in INPULSIS-2, there was a significant benefit with nintedanib versus placebo (hazard ratio, 0.38; 95% CI, 0.19 to 0.77; P=0.005). The most frequent adverse event in the nintedanib groups was diarrhea, with rates of 61.5% and 18.6% in the nintedanib and placebo groups, respectively, in INPULSIS-1 and 63.2% and 18.3% in the two groups, respectively, in INPULSIS-2.\n\n\nCONCLUSIONS\nIn patients with idiopathic pulmonary fibrosis, nintedanib reduced the decline in FVC, which is consistent with a slowing of disease progression; nintedanib was frequently associated with diarrhea, which led to discontinuation of the study medication in less than 5% of patients. (Funded by Boehringer Ingelheim; INPULSIS-1 and INPULSIS-2 ClinicalTrials.gov numbers, NCT01335464 and NCT01335477.)."
},
{
"pmid": "27989445",
"title": "The Objective Identification and Quantification of Interstitial Lung Abnormalities in Smokers.",
"abstract": "RATIONALE AND OBJECTIVES\nPrevious investigation suggests that visually detected interstitial changes in the lung parenchyma of smokers are highly clinically relevant and predict outcomes, including death. Visual subjective analysis to detect these changes is time-consuming, insensitive to subtle changes, and requires training to enhance reproducibility. Objective detection of such changes could provide a method of disease identification without these limitations. The goal of this study was to develop and test a fully automated image processing tool to objectively identify radiographic features associated with interstitial abnormalities in the computed tomography scans of a large cohort of smokers.\n\n\nMATERIALS AND METHODS\nAn automated tool that uses local histogram analysis combined with distance from the pleural surface was used to detect radiographic features consistent with interstitial lung abnormalities in computed tomography scans from 2257 individuals from the Genetic Epidemiology of COPD study, a longitudinal observational study of smokers. The sensitivity and specificity of this tool was determined based on its ability to detect the visually identified presence of these abnormalities.\n\n\nRESULTS\nThe tool had a sensitivity of 87.8% and a specificity of 57.5% for the detection of interstitial lung abnormalities, with a c-statistic of 0.82, and was 100% sensitive and 56.7% specific for the detection of the visual subtype of interstitial abnormalities called fibrotic parenchymal abnormalities, with a c-statistic of 0.89.\n\n\nCONCLUSIONS\nIn smokers, a fully automated image processing tool is able to identify those individuals who have interstitial lung abnormalities with moderate sensitivity and specificity."
},
{
"pmid": "21412102",
"title": "Quantitative computed tomographic indexes in diffuse interstitial lung disease: correlation with physiologic tests and computed tomography visual scores.",
"abstract": "PURPOSE\nTo assess the correlation among quantitative indexes of computed tomography (CT), spirometric pulmonary function tests (PFTs), and visual scores (VSs) of CT in patients with diffuse interstitial lung disease (DILD) and to prove the estimated value of CT quantification for the prediction of the possibility of pulmonary function impairment.\n\n\nMETHODS AND MATERIALS\nA total of 157 patients (male to female ratio, 96:61; mean age, 63 ± 11 years) with DILD were enrolled in this study. All patients underwent volume thin-section CT in the supine position at full inspiration. During the same period, 23 people (male to female ratio, 10:13; mean age, 55 ± 13 years) with no history of DILD and with normal PFTs and CT findings were used as a control group. Quantitative indexes were obtained using a commercial CAD system (Brilliance Workspace v3.0; Philips Medical Systems). Quantitative indexes included total lung volume (TLV), mean lung attenuation, variation of lung attenuation, emphysema volume (<-950 Hounsfield units [HU]), functioning lung volume (-700 HU > pixel > -950 HU), and interstitial lung disease volume (>-700 HU). Visual scores were measured semiquantitatively and included the overall extent of pulmonary parenchymal abnormality as well as the extent of consolidation, ground glass opacity, reticulation, and honeycomb opacities. Quantitative indexes were correlated with PFT and VSs using the Pearson correlation test.\n\n\nRESULTS\nQuantitative indexes, PFT results, and VSs differed significantly between the DILD group and the control group, except for emphysematous parameters (P < 0.05).Pulmonary function test results showed significant correlation with quantitative indexes in the DILD group. Functioning lung volume showed positive correlation with forced vital capacity and forced expiratory volume in 1 second (r = 0.80 and 0.73, P < 0.001). Total lung capacity showed positive correlation with TLV (r = 0.83, P < 0.001).Visual scores were correlated with the ratio of a specific volume to TLV (indicated as ®). Interstitial lung disease volume® showed positive correlation (r = 0.53, P < 0.001), and FLV® showed a negative correlation with the overall extent of ILD (r = -0.52, P < 0.001). variation of lung attenuation showed a positive correlation with honeycombing extent (r = 0.37, P < 0.001), and mean lung attenuation showed a positive correlation with reticulation extent (r = 0.42, P < 0.001).\n\n\nCONCLUSIONS\nQuantitative indexes measured by a commercial workstation showed good correlation not only with the extent of DILD estimated by visual inspection but also with PFT results. Quantitative indexes can be used as an objective tool for quantitative evaluation of disease extent and for follow-up of the progression or improvement of a DILD."
},
{
"pmid": "25728361",
"title": "Quantitative CT evaluation in patients with combined pulmonary fibrosis and emphysema: correlation with pulmonary function.",
"abstract": "RATIONALE AND OBJECTIVES\nThe purpose of this study was to evaluate the correlations between objective quantitative computed tomography (CT) measurements of the extent of emphysematous and fibrotic lesions and the results of pulmonary function tests (PFTs) in patients with combined pulmonary fibrosis and emphysema (CPFE).\n\n\nMATERIALS AND METHODS\nThis study involved 43 CPFE patients who underwent CT and PFTs. The extent of emphysematous lesions was obtained by calculating the percentage of low attenuation area (%LAA) values lower than -950 Hounsfield units (HU). Fibrotic lesions were defined as high attenuation area (HAA) using thresholds with pixels between 0 and -700 HU, and the extent of fibrosis was obtained by calculating the percentage of HAA (%HAA). The correlations of %LAA and %HAA with PFTs were evaluated by the Spearman rank correlation coefficients and multiple linear regression analysis.\n\n\nRESULTS\nA significant negative correlation was found between %HAA and diffusing capacity of the lung for carbon monoxide (DLco) %predicted (ρ = -0.747; P < .001), whereas no significant correlation was found between %LAA and DLco %predicted. On multiple linear regression analysis, although the %HAA and %LAA were independent contributors to DLco %predicted, the predictive power of %HAA was superior to that of %LAA.\n\n\nCONCLUSIONS\nIn CPFE, the extent of fibrosis has a more significant impact on DLco than emphysema."
},
{
"pmid": "21603289",
"title": "Comparison of usual interstitial pneumonia and nonspecific interstitial pneumonia: quantification of disease severity and discrimination between two diseases on HRCT using a texture-based automated system.",
"abstract": "OBJECTIVE\nTo evaluate the usefulness of an automated system for quantification and discrimination of usual interstitial pneumonia (UIP) and nonspecific interstitial pneumonia (NSIP).\n\n\nMATERIALS AND METHODS\nAn automated system to quantify six regional high-resolution CT (HRCT) patterns: normal, NL; ground-glass opacity, GGO; reticular opacity, RO; honeycombing, HC; emphysema, EMPH; and consolidation, CONS, was developed using texture and shape features. Fifty-four patients with pathologically proven UIP (n = 26) and pathologically proven NSIP (n = 28) were included as part of this study. Inter-observer agreement in measuring the extent of each HRCT pattern between the system and two thoracic radiologists were assessed in 26 randomly selected subsets using an interclass correlation coefficient (ICC). A linear regression analysis was used to assess the contribution of each disease pattern to the pulmonary function test parameters. The discriminating capacity of the system between UIP and NSIP was evaluated using a binomial logistic regression.\n\n\nRESULTS\nThe overall ICC showed acceptable agreement among the system and the two radiologists (r = 0.895 for the abnormal lung volume fraction, 0.706 for the fibrosis fraction, 0.895 for NL, 0.625 for GGO, 0.626 for RO, 0.893 for HC, 0.800 for EMPH, and 0.430 for CONS). The volumes of NL, GGO, RO, and EMPH contribute to forced expiratory volume during one second (FEV₁) (r = 0.72, β values, 0.84, 0.34, 0.34 and 0.24, respectively) and forced vital capacity (FVC) (r = 0.76, β values, 0.82, 0.28, 0.21 and 0.34, respectively). For diffusing capacity (DL(co)), the volumes of NL and HC were independent contributors in opposite directions (r = 0.65, β values, 0.64, -0.21, respectively). The automated system can help discriminate between UIP and NSIP with an accuracy of 82%.\n\n\nCONCLUSION\nThe automated quantification system of regional HRCT patterns can be useful in the assessment of disease severity and may provide reliable agreement with the radiologists' results. In addition, this system may be useful in differentiating between UIP and NSIP."
},
{
"pmid": "19864701",
"title": "Computerized detection of diffuse lung disease in MDCT: the usefulness of statistical texture features.",
"abstract": "Accurate detection of diffuse lung disease is an important step for computerized diagnosis and quantification of this disease. It is also a difficult clinical task for radiologists. We developed a computerized scheme to assist radiologists in the detection of diffuse lung disease in multi-detector computed tomography (CT). Two radiologists selected 31 normal and 37 abnormal CT scans with ground glass opacity, reticular, honeycombing and nodular disease patterns based on clinical reports. The abnormal cases in our database must contain at least an abnormal area with a severity of moderate or severe level that was subjectively rated by the radiologists. Because statistical texture features may lack the power to distinguish a nodular pattern from a normal pattern, the abnormal cases that contain only a nodular pattern were excluded. The areas that included specific abnormal patterns in the selected CT images were then delineated as reference standards by an expert chest radiologist. The lungs were first segmented in each slice by use of a thresholding technique, and then divided into contiguous volumes of interest (VOIs) with a 64 x 64 x 64 matrix size. For each VOI, we determined and employed statistical texture features, such as run-length and co-occurrence matrix features, to distinguish abnormal from normal lung parenchyma. In particular, we developed new run-length texture features with clear physical meanings to considerably improve the accuracy of our detection scheme. A quadratic classifier was employed for distinguishing between normal and abnormal VOIs by the use of a leave-one-case-out validation scheme. A rule-based criterion was employed to further determine whether a case was normal or abnormal. We investigated the impact of new and conventional texture features, VOI size and the dimensionality for regions of interest on detecting diffuse lung disease. When we employed new texture features for 3D VOIs of 64 x 64 x 64 voxels, our system achieved the highest performance level: a sensitivity of 86% and a specificity of 90% for the detection of abnormal VOIs, and a sensitivity of 89% and a specificity of 90% for the detection of abnormal cases. Our computerized scheme would be useful for assisting radiologists in the diagnosis of diffuse lung disease."
},
{
"pmid": "21311944",
"title": "Regional context-sensitive support vector machine classifier to improve automated identification of regional patterns of diffuse interstitial lung disease.",
"abstract": "We propose the use of a context-sensitive support vector machine (csSVM) to enhance the performance of a conventional support vector machine (SVM) for identifying diffuse interstitial lung disease (DILD) in high-resolution computerized tomography (HRCT) images. Nine hundred rectangular regions of interest (ROIs), each 20 × 20 pixels in size and consisting of 150 ROIs representing six regional disease patterns (normal, ground-glass opacity, reticular opacity, honeycombing, emphysema, and consolidation), were marked by two experienced radiologists using consensus HRCT images of various DILD. Twenty-one textual and shape features were evaluated to characterize the ROIs. The csSVM classified an ROI by simultaneously using the decision value of each class and information from the neighboring ROIs, such as neighboring region feature distances and class differences. Sequential forward-selection was used to select the relevant features. To validate our results, we used 900 ROIs with fivefold cross-validation and 84 whole lung images categorized by a radiologist. The accuracy of the proposed method for ROI and whole lung classification (89.88 ± 0.02%, and 60.30 ± 13.95%, respectively) was significantly higher than that provided by the conventional SVM classifier (87.39 ± 0.02%, and 57.69 ± 13.31%, respectively; paired t test, p < 0.01, and p < 0.01, respectively). We conclude that our csSVM provides better overall quantification of DILD."
},
{
"pmid": "20129855",
"title": "Quantitative analysis of pulmonary emphysema using local binary patterns.",
"abstract": "We aim at improving quantitative measures of emphysema in computed tomography (CT) images of the lungs. Current standard measures, such as the relative area of emphysema (RA), rely on a single intensity threshold on individual pixels, thus ignoring any interrelations between pixels. Texture analysis allows for a much richer representation that also takes the local structure around pixels into account. This paper presents a texture classification-based system for emphysema quantification in CT images. Measures of emphysema severity are obtained by fusing pixel posterior probabilities output by a classifier. Local binary patterns (LBP) are used as texture features, and joint LBP and intensity histograms are used for characterizing regions of interest (ROIs). Classification is then performed using a k nearest neighbor classifier with a histogram dissimilarity measure as distance. A 95.2% classification accuracy was achieved on a set of 168 manually annotated ROIs, comprising the three classes: normal tissue, centrilobular emphysema, and paraseptal emphysema. The measured emphysema severity was in good agreement with a pulmonary function test (PFT) achieving correlation coefficients of up to |r| = 0.79 in 39 subjects. The results were compared to RA and to a Gaussian filter bank, and the texture-based measures correlated significantly better with PFT than did RA."
},
{
"pmid": "21263171",
"title": "Computer-aided detection of early interstitial lung diseases using low-dose CT images.",
"abstract": "This study aims to develop a new computer-aided detection (CAD) scheme to detect early interstitial lung disease (ILD) using low-dose computed tomography (CT) examinations. The CAD scheme classifies each pixel depicted on the segmented lung areas into positive or negative groups for ILD using a mesh-grid-based region growth method and a multi-feature-based artificial neural network (ANN). A genetic algorithm was applied to select optimal image features and the ANN structure. In testing each CT examination, only pixels selected by the mesh-grid region growth method were analyzed and classified by the ANN to improve computational efficiency. All unselected pixels were classified as negative for ILD. After classifying all pixels into the positive and negative groups, CAD computed a detection score based on the ratio of the number of positive pixels to all pixels in the segmented lung areas, which indicates the likelihood of the test case being positive for ILD. When applying to an independent testing dataset of 15 positive and 15 negative cases, the CAD scheme yielded the area under receiver operating characteristic curve (AUC = 0.884 ± 0.064) and 80.0% sensitivity at 85.7% specificity. The results demonstrated the feasibility of applying the CAD scheme to automatically detect early ILD using low-dose CT examinations."
},
{
"pmid": "28892454",
"title": "Disease Staging and Prognosis in Smokers Using Deep Learning in Chest Computed Tomography.",
"abstract": "RATIONALE\nDeep learning is a powerful tool that may allow for improved outcome prediction.\n\n\nOBJECTIVES\nTo determine if deep learning, specifically convolutional neural network (CNN) analysis, could detect and stage chronic obstructive pulmonary disease (COPD) and predict acute respiratory disease (ARD) events and mortality in smokers.\n\n\nMETHODS\nA CNN was trained using computed tomography scans from 7,983 COPDGene participants and evaluated using 1,000 nonoverlapping COPDGene participants and 1,672 ECLIPSE participants. Logistic regression (C statistic and the Hosmer-Lemeshow test) was used to assess COPD diagnosis and ARD prediction. Cox regression (C index and the Greenwood-Nam-D'Agnostino test) was used to assess mortality.\n\n\nMEASUREMENTS AND MAIN RESULTS\nIn COPDGene, the C statistic for the detection of COPD was 0.856. A total of 51.1% of participants in COPDGene were accurately staged and 74.95% were within one stage. In ECLIPSE, 29.4% were accurately staged and 74.6% were within one stage. In COPDGene and ECLIPSE, the C statistics for ARD events were 0.64 and 0.55, respectively, and the Hosmer-Lemeshow P values were 0.502 and 0.380, respectively, suggesting no evidence of poor calibration. In COPDGene and ECLIPSE, CNN predicted mortality with fair discrimination (C indices, 0.72 and 0.60, respectively), and without evidence of poor calibration (Greenwood-Nam-D'Agnostino P values, 0.307 and 0.331, respectively).\n\n\nCONCLUSIONS\nA deep-learning approach that uses only computed tomography imaging data can identify those smokers who have COPD and predict who are most likely to have ARD events and those with the highest mortality. At a population level CNN analysis may be a powerful tool for risk assessment."
},
{
"pmid": "29993996",
"title": "Pulmonary Artery-Vein Classification in CT Images Using Deep Learning.",
"abstract": "Recent studies show that pulmonary vascular diseases may specifically affect arteries or veins through different physiologic mechanisms. To detect changes in the two vascular trees, physicians manually analyze the chest computed tomography (CT) image of the patients in search of abnormalities. This process is time consuming, difficult to standardize, and thus not feasible for large clinical studies or useful in real-world clinical decision making. Therefore, automatic separation of arteries and veins in CT images is becoming of great interest, as it may help physicians to accurately diagnose pathological conditions. In this paper, we present a novel, fully automatic approach to classify vessels from chest CT images into arteries and veins. The algorithm follows three main steps: first, a scale-space particles segmentation to isolate vessels; then a 3-D convolutional neural network (CNN) to obtain a first classification of vessels; finally, graph-cuts' optimization to refine the results. To justify the usage of the proposed CNN architecture, we compared different 2-D and 3-D CNNs that may use local information from bronchus- and vessel-enhanced images provided to the network with different strategies. We also compared the proposed CNN approach with a random forests (RFs) classifier. The methodology was trained and evaluated on the superior and inferior lobes of the right lung of 18 clinical cases with noncontrast chest CT scans, in comparison with manual classification. The proposed algorithm achieves an overall accuracy of 94%, which is higher than the accuracy obtained using other CNN architectures and RF. Our method was also validated with contrast-enhanced CT scans of patients with chronic thromboembolic pulmonary hypertension to demonstrate that our model generalizes well to contrast-enhanced modalities. The proposed method outperforms state-of-the-art methods, paving the way for future use of 3-D CNN for artery/vein classification in CT images."
},
{
"pmid": "30106711",
"title": "FissureNet: A Deep Learning Approach For Pulmonary Fissure Detection in CT Images.",
"abstract": "Pulmonary fissure detection in computed tomography (CT) is a critical component for automatic lobar segmentation. The majority of fissure detection methods use feature descriptors that are hand-crafted, low-level, and have local spatial extent. The design of such feature detectors is typically targeted toward normal fissure anatomy, yielding low sensitivity to weak, and abnormal fissures that are common in clinical data sets. Furthermore, local features commonly suffer from low specificity, as the complex textures in the lung can be indistinguishable from the fissure when the global context is not considered. We propose a supervised discriminative learning framework for simultaneous feature extraction and classification. The proposed framework, called FissureNet, is a coarse-to-fine cascade of two convolutional neural networks. The coarse-to-fine strategy alleviates the challenges associated with training a network to segment a thin structure that represents a small fraction of the image voxels. FissureNet was evaluated on a cohort of 3706 subjects with inspiration and expiration 3DCT scans from the COPDGene clinical trial and a cohort of 20 subjects with 4DCT scans from a lung cancer clinical trial. On both data sets, FissureNet showed superior performance compared with a deep learning approach using the U-Net architecture and a Hessian-based fissure detection method in terms of area under the precision-recall curve (PR-AUC). The overall PR-AUC for FissureNet, U-Net, and Hessian on the COPDGene (lung cancer) data set was 0.980 (0.966), 0.963 (0.937), and 0.158 (0.182), respectively. On a subset of 30 COPDGene scans, FissureNet was compared with a recently proposed advanced fissure detection method called derivative of sticks (DoS) and showed superior performance with a PR-AUC of 0.991 compared with 0.668 for DoS."
},
{
"pmid": "29623248",
"title": "Holistic classification of CT attenuation patterns for interstitial lung diseases via deep convolutional neural networks.",
"abstract": "Interstitial lung diseases (ILD) involve several abnormal imaging patterns observed in computed tomography (CT) images. Accurate classification of these patterns plays a significant role in precise clinical decision making of the extent and nature of the diseases. Therefore, it is important for developing automated pulmonary computer-aided detection systems. Conventionally, this task relies on experts' manual identification of regions of interest (ROIs) as a prerequisite to diagnose potential diseases. This protocol is time consuming and inhibits fully automatic assessment. In this paper, we present a new method to classify ILD imaging patterns on CT images. The main difference is that the proposed algorithm uses the entire image as a holistic input. By circumventing the prerequisite of manual input ROIs, our problem set-up is significantly more difficult than previous work but can better address the clinical workflow. Qualitative and quantitative results using a publicly available ILD database demonstrate state-of-the-art classification accuracy under the patch-based classification and shows the potential of predicting the ILD type using holistic image."
},
{
"pmid": "26955021",
"title": "Lung Pattern Classification for Interstitial Lung Diseases Using a Deep Convolutional Neural Network.",
"abstract": "Automated tissue characterization is one of the most crucial components of a computer aided diagnosis (CAD) system for interstitial lung diseases (ILDs). Although much research has been conducted in this field, the problem remains challenging. Deep learning techniques have recently achieved impressive results in a variety of computer vision problems, raising expectations that they might be applied in other domains, such as medical image analysis. In this paper, we propose and evaluate a convolutional neural network (CNN), designed for the classification of ILD patterns. The proposed network consists of 5 convolutional layers with 2 × 2 kernels and LeakyReLU activations, followed by average pooling with size equal to the size of the final feature maps and three dense layers. The last dense layer has 7 outputs, equivalent to the classes considered: healthy, ground glass opacity (GGO), micronodules, consolidation, reticulation, honeycombing and a combination of GGO/reticulation. To train and evaluate the CNN, we used a dataset of 14696 image patches, derived by 120 CT scans from different scanners and hospitals. To the best of our knowledge, this is the first deep CNN designed for the specific problem. A comparative analysis proved the effectiveness of the proposed CNN against previous methods in a challenging dataset. The classification performance ( ~ 85.5%) demonstrated the potential of CNNs in analyzing lung patterns. Future work includes, extending the CNN to three-dimensional data provided by CT volume scans and integrating the proposed method into a CAD system that aims to provide differential diagnosis for ILDs as a supportive tool for radiologists."
},
{
"pmid": "29043528",
"title": "Comparison of Shallow and Deep Learning Methods on Classifying the Regional Pattern of Diffuse Lung Disease.",
"abstract": "This study aimed to compare shallow and deep learning of classifying the patterns of interstitial lung diseases (ILDs). Using high-resolution computed tomography images, two experienced radiologists marked 1200 regions of interest (ROIs), in which 600 ROIs were each acquired using a GE or Siemens scanner and each group of 600 ROIs consisted of 100 ROIs for subregions that included normal and five regional pulmonary disease patterns (ground-glass opacity, consolidation, reticular opacity, emphysema, and honeycombing). We employed the convolution neural network (CNN) with six learnable layers that consisted of four convolution layers and two fully connected layers. The classification results were compared with the results classified by a shallow learning of a support vector machine (SVM). The CNN classifier showed significantly better performance for accuracy compared with that of the SVM classifier by 6-9%. As the convolution layer increases, the classification accuracy of the CNN showed better performance from 81.27 to 95.12%. Especially in the cases showing pathological ambiguity such as between normal and emphysema cases or between honeycombing and reticular opacity cases, the increment of the convolution layer greatly drops the misclassification rate between each case. Conclusively, the CNN classifier showed significantly greater accuracy than the SVM classifier, and the results implied structural characteristics that are inherent to the specific ILD patterns."
},
{
"pmid": "26886976",
"title": "Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning.",
"abstract": "Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully employ CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained from natural image dataset to medical image tasks. In this paper, we exploit three important, but previously understudied factors of employing deep convolutional neural networks to computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters, and vary in numbers of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve the state-of-the-art performance on the mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks."
},
{
"pmid": "26955024",
"title": "Pulmonary Nodule Detection in CT Images: False Positive Reduction Using Multi-View Convolutional Networks.",
"abstract": "We propose a novel Computer-Aided Detection (CAD) system for pulmonary nodules using multi-view convolutional networks (ConvNets), for which discriminative features are automatically learnt from the training data. The network is fed with nodule candidates obtained by combining three candidate detectors specifically designed for solid, subsolid, and large nodules. For each candidate, a set of 2-D patches from differently oriented planes is extracted. The proposed architecture comprises multiple streams of 2-D ConvNets, for which the outputs are combined using a dedicated fusion method to get the final classification. Data augmentation and dropout are applied to avoid overfitting. On 888 scans of the publicly available LIDC-IDRI dataset, our method reaches high detection sensitivities of 85.4% and 90.1% at 1 and 4 false positives per scan, respectively. An additional evaluation on independent datasets from the ANODE09 challenge and DLCST is performed. We showed that the proposed multi-view ConvNets is highly suited to be used for false positive reduction of a CAD system."
},
{
"pmid": "20214461",
"title": "Genetic epidemiology of COPD (COPDGene) study design.",
"abstract": "BACKGROUND\nCOPDGene is a multicenter observational study designed to identify genetic factors associated with COPD. It will also characterize chest CT phenotypes in COPD subjects, including assessment of emphysema, gas trapping, and airway wall thickening. Finally, subtypes of COPD based on these phenotypes will be used in a comprehensive genome-wide study to identify COPD susceptibility genes.\n\n\nMETHODS/RESULTS\nCOPDGene will enroll 10,000 smokers with and without COPD across the GOLD stages. Both Non-Hispanic white and African-American subjects are included in the cohort. Inspiratory and expiratory chest CT scans will be obtained on all participants. In addition to the cross-sectional enrollment process, these subjects will be followed regularly for longitudinal studies. A genome-wide association study (GWAS) will be done on an initial group of 4000 subjects to identify genetic variants associated with case-control status and several quantitative phenotypes related to COPD. The initial findings will be verified in an additional 2000 COPD cases and 2000 smoking control subjects, and further validation association studies will be carried out.\n\n\nCONCLUSIONS\nCOPDGene will provide important new information about genetic factors in COPD, and will characterize the disease process using high resolution CT scans. Understanding genetic factors and CT phenotypes that define COPD will potentially permit earlier diagnosis of this disease and may lead to the development of treatments to modify progression."
},
{
"pmid": "24003531",
"title": "Whole-lung volume and density in spirometrically-gated inspiratory and expiratory CT in systemic sclerosis: correlation with static volumes at pulmonary function tests.",
"abstract": "BACKGROUND\nSpiral low-dose computed tomography (LDCT) permits to measure whole-lung volume and density in a single breath-hold.\n\n\nOBJECTIVE\nTo evaluate the agreement between static lung volumes measured with LDCT and pulmonary function test (PFT) and the correlation between the LDCT volumes and lung density in restrictive lung disease.\n\n\nDESIGN\nPatients with Systemic Sclerosis (SSc) with (n = 24) and without (n = 16) pulmonary involvement on sequential thin-section CT and patients with chronic obstructive pulmonary disease (COPD)(n = 29) underwent spirometrically-gated LDCT at 90% and 10% of vital capacity to measure inspiratory and expiratory lung volumes and mean lung attenuation (MLA). Total lung capacity and residual volume were measured the same day of CT.\n\n\nRESULTS\nInspiratory [95% limits of agreement (95% LoA)--43.8% and 39.2%] and expiratory (95% LoA -45.8% and 37.1%) lung volumes measured on LDCT and PFT showed poor agreement in SSc patients with pulmonary involvement, whereas they were in substantial agreement (inspiratory 95% LoA -14.1% and 16.1%; expiratory 95% LoA -13.5% and 23%) in SSc patients without pulmonary involvement and in inspiratory scans only (95% LoA -23.1% and 20.9%) of COPD patients. Inspiratory and expiratory LDCT volumes, MLA and their deltas differentiated both SSc patients with or without pulmonary involvement from COPD patients. LDCT lung volumes and density were not correlated in SSc patients with pulmonary involvement, whereas they did correlate in SSc without pulmonary involvement and in COPD patients.\n\n\nCONCLUSIONS\nIn restrictive lung disease due to SSc there is poor agreement between static lung volumes measured using LDCT and PFT and the relationship between volume and density values on CT is altered."
},
{
"pmid": "19056717",
"title": "Volume correction in computed tomography densitometry for follow-up studies on pulmonary emphysema.",
"abstract": "Lung densitometry in drug evaluation trials can be confounded by changes in inspiration levels between computed tomography (CT) scans, limiting its sensitivity to detect changes over time. Therefore our aim was to explore whether the sensitivity of lung densitometry could be improved by correcting the measurements for changes in lung volume, based on the estimated relation between density (as measured with the 15th percentile point) and lung volume. We compared four correction methods, using CT data of 143 patients from five European countries. Patients were scanned, generally twice per visit, at baseline and after 2.5 years. The methods included one physiological model and three linear mixed-effects models using a volume-density relation: (1) estimated over the entire population with one scan per visit (model A) and two scans per visit (model B); and (2) estimated for each patient individually (model C). Both log-transformed and original volume and density values were evaluated and the differences in goodness-of-fit between methods were tested. Model C fitted best (P < 0.0001, P < 0.0001, and P = 0.064), when two scans were available. The most consistent progression estimation was obtained between sites, when both volume and density were log-transformed. Sensitivity was improved using repeated CT scans by applying volume correction to individual patient data. Volume correction reduces the variability in progression estimation by a factor of two, and is therefore recommended."
},
{
"pmid": "26441412",
"title": "Improving Computer-Aided Detection Using Convolutional Neural Networks and Random View Aggregation.",
"abstract": "Automated computer-aided detection (CADe) has been an important tool in clinical practice and research. State-of-the-art methods often show high sensitivities at the cost of high false-positives (FP) per patient rates. We design a two-tiered coarse-to-fine cascade framework that first operates a candidate generation system at sensitivities ∼ 100% of but at high FP levels. By leveraging existing CADe systems, coordinates of regions or volumes of interest (ROI or VOI) are generated and function as input for a second tier, which is our focus in this study. In this second stage, we generate 2D (two-dimensional) or 2.5D views via sampling through scale transformations, random translations and rotations. These random views are used to train deep convolutional neural network (ConvNet) classifiers. In testing, the ConvNets assign class (e.g., lesion, pathology) probabilities for a new set of random views that are then averaged to compute a final per-candidate classification probability. This second tier behaves as a highly selective process to reject difficult false positives while preserving high sensitivities. The methods are evaluated on three data sets: 59 patients for sclerotic metastasis detection, 176 patients for lymph node detection, and 1,186 patients for colonic polyp detection. Experimental results show the ability of ConvNets to generalize well to different medical imaging CADe applications and scale elegantly to various data sets. Our proposed methods improve performance markedly in all cases. Sensitivities improved from 57% to 70%, 43% to 77%, and 58% to 75% at 3 FPs per patient for sclerotic metastases, lymph nodes and colonic polyps, respectively."
},
{
"pmid": "28113302",
"title": "Multilevel Contextual 3-D CNNs for False Positive Reduction in Pulmonary Nodule Detection.",
"abstract": "OBJECTIVE\nFalse positive reduction is one of the most crucial components in an automated pulmonary nodule detection system, which plays an important role in lung cancer diagnosis and early treatment. The objective of this paper is to effectively address the challenges in this task and therefore to accurately discriminate the true nodules from a large number of candidates.\n\n\nMETHODS\nWe propose a novel method employing three-dimensional (3-D) convolutional neural networks (CNNs) for false positive reduction in automated pulmonary nodule detection from volumetric computed tomography (CT) scans. Compared with its 2-D counterparts, the 3-D CNNs can encode richer spatial information and extract more representative features via their hierarchical architecture trained with 3-D samples. More importantly, we further propose a simple yet effective strategy to encode multilevel contextual information to meet the challenges coming with the large variations and hard mimics of pulmonary nodules.\n\n\nRESULTS\nThe proposed framework has been extensively validated in the LUNA16 challenge held in conjunction with ISBI 2016, where we achieved the highest competition performance metric (CPM) score in the false positive reduction track.\n\n\nCONCLUSION\nExperimental results demonstrated the importance and effectiveness of integrating multilevel contextual information into 3-D CNN framework for automated pulmonary nodule detection in volumetric CT data.\n\n\nSIGNIFICANCE\nWhile our method is tailored for pulmonary nodule detection, the proposed framework is general and can be easily extended to many other 3-D object detection tasks from volumetric medical images, where the targeting objects have large variations and are accompanied by a number of hard mimics."
}
] |
Diagnostics | 31581453 | PMC6963281 | 10.3390/diagnostics9040135 | A Pervasive Healthcare System for COPD Patients | Chronic obstructive pulmonary disease (COPD) is one of the most severe public health problems worldwide. Pervasive computing technology creates a new opportunity to redesign the traditional pattern of medical system. While many pervasive healthcare systems are currently found in the literature, there is little published research on the effectiveness of these paradigms in the medical context. This paper designs and validates a rule-based ontology framework for COPD patients. Unlike conventional systems, this work presents a new vision of telemedicine and remote care solutions that will promote individual self-management and autonomy for COPD patients through an advanced decision-making technique. Rules accuracy estimates were 89% for monitoring vital signs, and environmental factors, and 87% for nutrition facts, and physical activities. | 2. Related WorksFor almost two decades now, the use of medical ontologies has no longer been limited to defining medical terminologies such as the systematized nomenclature of medicine—clinical terms (SNOMED CT) or the unified medical language system (UMLS), but has also become one of the most powerful solutions for tackling serious health problems and supporting the management of large amounts of complex data. Ontologies have also been used in hundreds of research projects concerned with medical issues such as diagnosis, self-management, and treatment [8,9,10,11,12].The ontological approach proved its effectiveness in the remote healthcare arena; for instance, Lasierra [13] and Rubio et al. [14] have presented robust examples of ontology usage in the telemonitoring domain for generic and specific chronic diseases. Lasierra proposed an autonomic computing ontology for integrated management at home using medical sensors. Rubio provides a formal representation of knowledge to describe the effect of technological context variations on clinical data quality and its impact on a patient’s treatment. Another example can be found in [15]: Benyahia et al. developed a generic ontology for monitoring patients diagnosed with chronic diseases. The proposed architecture aims to detect any anomalies or dangerous situations by collecting physiological and lifestyle data. Hristoskova et al. [16] presented an ontology-based ambient intelligence framework that supports real-time physiological monitoring of patients suffering from congestive heart failure. Ryu et al. [17] proposed a ubiquitous healthcare context model using an ontology; the model extracts the contextual information for implementing the healthcare service, taking into consideration the medical references and environments. Jong et al. [18] has designed an interactive healthcare system with wearable sensors that provides personalized services with formal ontology-driven specifications. In the same setting, an ontology-based context-aware framework for customized care has been presented by Ko et al. [19] as a form of wearable biomedical technology. An interesting projection of ontology in this domain can be found in [20], in which the authors built a context-aware mobile service aiming at supporting mobile caregivers and sharing information to improve the quality of life of people living with chronic diseases.In addition to this obvious interest in ontology, most healthcare projects related to computer-assisted medical decision-making are often modeled using rule-based approaches. 
Semantic web rule language (SWRL) has emerged over existing W3C web ontology language (OWL) axioms to promote the expressiveness of the semantic web. The combination of OWL and SWRL specifications provides further inference capabilities beyond the inductive classification of description logics, with 78 built-in functions categorized across the comparisons, mathematics, Boolean values, strings, date, time and duration, URIs, and lists [21]. In the medical environment, there are several uses of rules; for example, if‒then rules can be used for chaining or mapping ontologies properties to achieve knowledge integration. By applying rules, the pattern of behaviors of all entities can be expressed, which would produce new facts and tailored services. Some examples of the incorporation of rules in healthcare ontologies as an essential component of decision support applications can be found in [22,23,24,25]. These rules are written in specific terms to infer useful information and then provide personalized care services to chronic patients according to their situations. For example, [22] established a set of predefined rules to trigger alarms when critical threshold levels are exceeded, while [23,24,25] used ontology-based rules for ubiquitous computing that allow for monitoring health anytime and anywhere. Furthermore, a few research projects have studied the use of SWRL to aid in diagnosis. These include [26] and [27], which provided rule-based ontologies to diagnose heart diseases and diabetes, respectively.The use of ontology in COPD is only restricted to certain aspects of patients’ lives [8,11]. For example, the authors of [12] developed an ontology inspired by the autonomic computing paradigm that provides configurable services to support home-based care. The authors of [14] proposed a predictive model to extract relevant attributes and enable the early detection of deteriorations, but the proposed ontology aims at describing the basic structure of the application. Although a significant amount of research has been done to assess the importance of telehealth in COPD, the concept of integrated care services is still in its infancy. The use of semantic mapping between the physiological parameters, environmental factors, symptoms, physical activity, and patient-specific data to construct a telemonitoring system for COPD using ontologies was not found in the literature. This work will be the first building block for creating a comprehensive primary e-healthcare delivery system, capable of organizing various daily life scenarios for COPD patients in a healthy and safe environment. | [
"24433744",
"29848578",
"26962013",
"28210321",
"20510592",
"22269224",
"26474836",
"23122633",
"20185360",
"23567539",
"27170903",
"21075729",
"20843247",
"27897995",
"18367496",
"28567499",
"4042547",
"15330778",
"17062658",
"27774480",
"19593155",
"23554858",
"21737561",
"26876040",
"10362051",
"29346418",
"21262460",
"25953970",
"2764404",
"28243206",
"24111929",
"28003742",
"21890573",
"19252992",
"22505744",
"15370758",
"28244799",
"23343360",
"19533541"
] | [
{
"pmid": "24433744",
"title": "A home telehealth program for patients with severe COPD: the PROMETE study.",
"abstract": "BACKGROUND\nAcute exacerbations of chronic obstructive pulmonary disease (AECOP) are key events in the natural history of the disease. Patients with more AECOPD have worse prognosis. There is a need of innovative models of care for patients with severe COPD and frequent AECOPD, and Telehealth (TH) is part of these programs.\n\n\nMETHODS\nIn a cluster assignment, controlled trial study design, we recruited 60 patients, 30 in home telehealth (HT) and 30 in conventional care (CC). All participants had a prior diagnosis of COPD with a post-bronchodilator forced expiratory volume (FEV1)% predicted <50%, age ≥ 50 years, were on long-term home oxygen therapy, and non-smokers. Patients in the HT group measured their vital signs on a daily bases, and data were transmitted automatically to a Clinical Monitoring Center for followed-up, and who escalated clinical alerts to a Pneumologist.\n\n\nRESULTS\nAfter 7-month of monitoring and follow-up, there was a significant reduction in ER visits (20 in HT vs. 57 in CC), hospitalizations (12 vs. 33), length of hospital stay in (105 vs. 276 days), and even need for non-invasive mechanical ventilation (0 vs. 8), all p < 0.05. Time to the first severe AECOPD increased from 77 days in CC to 141 days in HT (K-M p < 0.05). There was no study withdrawals associated with technology. All patients showed a high level of satisfaction with the HT program.\n\n\nCONCLUSIONS\nWe conclude that HT in elderly, severe COPD patients with multiple comorbidities is safe and efficacious in reducing healthcare resources utilization."
},
{
"pmid": "26962013",
"title": "Randomised crossover trial of telemonitoring in chronic respiratory patients (TeleCRAFT trial).",
"abstract": "DESIGN\nRandomised crossover trial with 6 months of standard best practice clinical care (control group) and 6 months with the addition of telemonitoring.\n\n\nPARTICIPANTS\n68 patients with chronic lung disease (38 with COPD; 30 with chronic respiratory failure due to other causes), who had a hospital admission for an exacerbation within 6 months of randomisation and either used long-term oxygen therapy or had an arterial oxygen saturation (SpO2) of <90% on air during the previous admission. Individuals received telemonitoring (second-generation system) via broadband link to a hospital-based care team.\n\n\nOUTCOME MEASURES\nPrimary outcome measure was time to first hospital admission for an acute exacerbation. Secondary outcome measures were hospital admissions, general practitioner (GP) consultations and home visits by nurses, quality of life measured by EuroQol-5D and hospital anxiety and depression (HAD) scale, and self-efficacy score (Stanford).\n\n\nRESULTS\nMedian (IQR) number of days to first admission showed no difference between the two groups—77 (114) telemonitoring, 77.5 (61) control ( p=0.189). Hospital admission rate at 6 months increased (0.63 telemonitoring vs 0.32 control p=0.026). Home visits increased during telemonitoring; GP consultations were unchanged. Self-efficacy fell, while HAD depression score improved marginally during telemonitoring.\n\n\nCONCLUSIONS\nTelemonitoring added to standard care did not alter time to next acute hospital admission, increased hospital admissions and home visits overall, and did not improve quality of life in chronic respiratory patients.\n\n\nTRIAL REGISTRATION NUMBER\nNCT02180919 (ClinicalTrials.gov)."
},
{
"pmid": "28210321",
"title": "Telemedicine in chronic obstructive pulmonary disease.",
"abstract": "Telemedicine is a medical application of advanced technology to disease management. This modality may provide benefits also to patients with chronic obstructive pulmonary disease (COPD). Different devices and systems are used. The legal problems associated with telemedicine are still controversial. Economic advantages for healthcare systems, though potentially high, are still poorly investigated. A European Respiratory Society Task Force has defined indications, follow-up, equipment, facilities, legal and economic issues of tele-monitoring of COPD patients including those undergoing home mechanical ventilation.\n\n\nKEY POINTS\nThe costs of care assistance in chronic disease patients are dramatically increasing.Telemedicine may be a very useful application of information and communication technologies in high-quality healthcare services.Many remote health monitoring systems are available, ensuring safety, feasibility, effectiveness, sustainability and flexibility to face different patients' needs.The legal problems associated with telemedicine are still controversial.National and European Union governments should develop guidelines and ethical, legal, regulatory, technical, administrative standards for remote medicine.The economic advantages, if any, of this new approach must be compared to a \"gold standard\" of homecare that is very variable among different European countries and within each European country.The efficacy of respiratory disease telemedicine projects is promising (i.e. to tailor therapeutic intervention; to avoid useless hospital and emergency department admissions, and reduce general practitioner and specialist visits; and to involve the patients and their families).Different programmes based on specific and local situations, and on specific diseases and levels of severity with a high level of flexibility should be utilised.A European Respiratory Society Task Force produced a statement on commonly accepted clinical criteria for indications, follow-up, equipment, facilities, legal and economic issues also of telemonitoring of ventilator-dependent chronic obstructive pulmonary disease patients.Much more research is needed before considering telemonitoring a real improvement in the management of these patients.\n\n\nEDUCATIONAL AIMS\nTo clarify definitions of aspects of telemedicineTo describe different tools of telemedicineTo provide information on the main clinical resultsTo define recommendations and limitations."
},
{
"pmid": "20510592",
"title": "A four stage approach for ontology-based health information system design.",
"abstract": "OBJECTIVE\nTo describe and illustrate a four stage methodological approach to capture user knowledge in a biomedical domain area, use that knowledge to design an ontology, and then implement and evaluate the ontology as a health information system (HIS).\n\n\nMETHODS AND MATERIALS\nA hybrid participatory design-grounded theory (GT-PD) method was used to obtain data and code them for ontology development. Prototyping was used to implement the ontology as a computer-based tool. Usability testing evaluated the computer-based tool.\n\n\nRESULTS\nAn empirically derived domain ontology and set of three problem-solving approaches were developed as a formalized model of the concepts and categories from the GT coding. The ontology and problem-solving approaches were used to design and implement a HIS that tested favorably in usability testing.\n\n\nCONCLUSIONS\nThe four stage approach illustrated in this paper is useful for designing and implementing an ontology as the basis for a HIS. The approach extends existing ontology development methodologies by providing an empirical basis for theory incorporated into ontology design."
},
{
"pmid": "22269224",
"title": "An ontology-based personalization of health-care knowledge to support clinical decisions for chronically ill patients.",
"abstract": "Chronically ill patients are complex health care cases that require the coordinated interaction of multiple professionals. A correct intervention of these sort of patients entails the accurate analysis of the conditions of each concrete patient and the adaptation of evidence-based standard intervention plans to these conditions. There are some other clinical circumstances such as wrong diagnoses, unobserved comorbidities, missing information, unobserved related diseases or prevention, whose detection depends on the capacities of deduction of the professionals involved. In this paper, we introduce an ontology for the care of chronically ill patients and implement two personalization processes and a decision support tool. The first personalization process adapts the contents of the ontology to the particularities observed in the health-care record of a given concrete patient, automatically providing a personalized ontology containing only the clinical information that is relevant for health-care professionals to manage that patient. The second personalization process uses the personalized ontology of a patient to automatically transform intervention plans describing health-care general treatments into individual intervention plans. For comorbid patients, this process concludes with the semi-automatic integration of several individual plans into a single personalized plan. Finally, the ontology is also used as the knowledge base of a decision support tool that helps health-care professionals to detect anomalous circumstances such as wrong diagnoses, unobserved comorbidities, missing information, unobserved related diseases, or preventive actions. Seven health-care centers participating in the K4CARE project, together with the group SAGESA and the Local Health System in the town of Pollenza have served as the validation platform for these two processes and tool. Health-care professionals participating in the evaluation agree about the average quality 84% (5.9/7.0) and utility 90% (6.3/7.0) of the tools and also about the correct reasoning of the decision support tool, according to clinical standards."
},
{
"pmid": "26474836",
"title": "Integrating HL7 RIM and ontology for unified knowledge and data representation in clinical decision support systems.",
"abstract": "BACKGROUND AND OBJECTIVES\nThe broad adoption of clinical decision support systems within clinical practice has been hampered mainly by the difficulty in expressing domain knowledge and patient data in a unified formalism. This paper presents a semantic-based approach to the unified representation of healthcare domain knowledge and patient data for practical clinical decision making applications.\n\n\nMETHODS\nA four-phase knowledge engineering cycle is implemented to develop a semantic healthcare knowledge base based on an HL7 reference information model, including an ontology to model domain knowledge and patient data and an expression repository to encode clinical decision making rules and queries. A semantic clinical decision support system is designed to provide patient-specific healthcare recommendations based on the knowledge base and patient data.\n\n\nRESULTS\nThe proposed solution is evaluated in the case study of type 2 diabetes mellitus inpatient management. The knowledge base is successfully instantiated with relevant domain knowledge and testing patient data. Ontology-level evaluation confirms model validity. Application-level evaluation of diagnostic accuracy reaches a sensitivity of 97.5%, a specificity of 100%, and a precision of 98%; an acceptance rate of 97.3% is given by domain experts for the recommended care plan orders.\n\n\nCONCLUSIONS\nThe proposed solution has been successfully validated in the case study as providing clinical decision support at a high accuracy and acceptance rate. The evaluation results demonstrate the technical feasibility and application prospect of our approach."
},
{
"pmid": "23122633",
"title": "Towards an ontology for data quality in integrated chronic disease management: a realist review of the literature.",
"abstract": "PURPOSE\nEffective use of routine data to support integrated chronic disease management (CDM) and population health is dependent on underlying data quality (DQ) and, for cross system use of data, semantic interoperability. An ontological approach to DQ is a potential solution but research in this area is limited and fragmented.\n\n\nOBJECTIVE\nIdentify mechanisms, including ontologies, to manage DQ in integrated CDM and whether improved DQ will better measure health outcomes.\n\n\nMETHODS\nA realist review of English language studies (January 2001-March 2011) which addressed data quality, used ontology-based approaches and is relevant to CDM.\n\n\nRESULTS\nWe screened 245 papers, excluded 26 duplicates, 135 on abstract review and 31 on full-text review; leaving 61 papers for critical appraisal. Of the 33 papers that examined ontologies in chronic disease management, 13 defined data quality and 15 used ontologies for DQ. Most saw DQ as a multidimensional construct, the most used dimensions being completeness, accuracy, correctness, consistency and timeliness. The majority of studies reported tool design and development (80%), implementation (23%), and descriptive evaluations (15%). Ontological approaches were used to address semantic interoperability, decision support, flexibility of information management and integration/linkage, and complexity of information models.\n\n\nCONCLUSION\nDQ lacks a consensus conceptual framework and definition. DQ and ontological research is relatively immature with little rigorous evaluation studies published. Ontology-based applications could support automated processes to address DQ and semantic interoperability in repositories of routinely collected data to deliver integrated CDM. We advocate moving to ontology-based design of information systems to enable more reliable use of routine data to measure health mechanisms and impacts."
},
{
"pmid": "20185360",
"title": "Using ontologies for structuring organizational knowledge in Home Care assistance.",
"abstract": "PURPOSE\nInformation Technologies and Knowledge-based Systems can significantly improve the management of complex distributed health systems, where supporting multidisciplinarity is crucial and communication and synchronization between the different professionals and tasks becomes essential. This work proposes the use of the ontological paradigm to describe the organizational knowledge of such complex healthcare institutions as a basis to support their management. The ontology engineering process is detailed, as well as the way to maintain the ontology updated in front of changes. The paper also analyzes how such an ontology can be exploited in a real healthcare application and the role of the ontology in the customization of the system. The particular case of senior Home Care assistance is addressed, as this is a highly distributed field as well as a strategic goal in an ageing Europe.\n\n\nMATERIALS AND METHODS\nThe proposed ontology design is based on a Home Care medical model defined by an European consortium of Home Care professionals, framed in the scope of the K4Care European project (FP6). Due to the complexity of the model and the knowledge gap existing between the - textual - medical model and the strict formalization of an ontology, an ontology engineering methodology (On-To-Knowledge) has been followed.\n\n\nRESULTS\nAfter applying the On-To-Knowledge steps, the following results were obtained: the feasibility study concluded that the ontological paradigm and the expressiveness of modern ontology languages were enough to describe the required medical knowledge; after the kick-off and refinement stages, a complete and non-ambiguous definition of the Home Care model, including its main components and interrelations, was obtained; the formalization stage expressed HC medical entities in the form of ontological classes, which are interrelated by means of hierarchies, properties and semantically rich class restrictions; the evaluation, carried out by exploiting the ontology into a knowledge-driven e-health application running on a real scenario, showed that the ontology design and its exploitation brought several benefits with regards to flexibility, adaptability and work efficiency from the end-user point of view; for the maintenance stage, two software tools are presented, aimed to address the incorporation and modification of healthcare units and the personalization of ontological profiles.\n\n\nCONCLUSIONS\nThe paper shows that the ontological paradigm and the expressiveness of modern ontology languages can be exploited not only to represent terminology in a non-ambiguous way, but also to formalize the interrelations and organizational structures involved in a real and distributed healthcare environment. This kind of ontologies facilitates the adaptation in front of changes in the healthcare organization or Care Units, supports the creation of profile-based interaction models in a transparent and seamless way, and increases the reusability and generality of the developed software components. As a conclusion of the exploitation of the developed ontology in a real medical scenario, we can say that an ontology formalizing organizational interrelations is a key component for building effective distributed knowledge-driven e-health systems."
},
{
"pmid": "23567539",
"title": "A three stage ontology-driven solution to provide personalized care to chronic patients at home.",
"abstract": "PURPOSE\nThe goal of this work is to contribute to personalized clinical management in home-based telemonitoring scenarios by developing an ontology-driven solution that enables a wide range of remote chronic patients to be monitored at home.\n\n\nMETHODS\nThrough three stages, the challenges of integration and management were met through the ontology development and evaluation. The first stage dealt with the ontology design and implementation. The second stage dealt with the ontology application study in order to specifically address personalization issues. For both stages, interviews and working sessions were planned with clinicians. Clinical guidelines and MDs (medical device) interoperability were taken into account as well during these stages. Finally the third stage dealt with a software prototype implementation.\n\n\nRESULTS\nAn ontology was developed as an outcome of the first stage. The structure, based on the autonomic computing paradigm, provides a clear and simple manner to automate and integrate the data management procedure. During the second stage, the application of the ontology was studied to monitor patients with different and multiple morbidities. After this task, the ontology design was successfully adjusted to provide useful personalized medical care. In the third and final stage, a proof-of-concept on the software required to remote monitor patients by means of the ontology-based solution was developed and evaluated.\n\n\nCONCLUSIONS\nOur proposed ontology provides an understandable and simple solution to address integration and personalized care challenges in home-based telemonitoring scenarios. Furthermore, our three-stage approach contributes to enhance the understanding, re-usability and transferability of our solution."
},
{
"pmid": "27170903",
"title": "An Ontology for Telemedicine Systems Resiliency to Technological Context Variations in Pervasive Healthcare.",
"abstract": "Clinical data are crucial for any medical case to study and understand a patient's condition and to give the patient the best possible treatment. Pervasive healthcare systems apply information and communication technology to enable the usage of ubiquitous clinical data by authorized medical persons. However, quality of clinical data in these applications is, to a large extent, determined by the technological context of the patient. A technological context is characterized by potential technological disruptions that affect optimal functioning of technological resources. The clinical data based on input from these technological resources can therefore have quality degradations. If these degradations are not noticed, the use of this clinical data can lead to wrong treatment decisions, which potentially puts the patient's safety at risk. This paper presents an ontology that specifies the relation among technological context, quality of clinical data, and patient treatment. The presented ontology provides a formal way to represent the knowledge to specify the effect of technological context variations in the clinical data quality and the impact of the clinical data quality on a patient's treatment. Accordingly, this ontology is the foundation for a quality of data framework that enables the development of telemedicine systems that are capable of adapting the treatment when the quality of the clinical data degrades, and thus guaranteeing patients' safety even when technological context varies."
},
{
"pmid": "21075729",
"title": "An ontology-based system for context-aware and configurable services to support home-based continuous care.",
"abstract": "Continuous care models for chronic diseases pose several technology-oriented challenges for home-based care, where assistance services rely on a close collaboration among different stakeholders, such as health operators, patient relatives, and social community members. This paper describes an ontology-based context model and a related context management system providing a configurable and extensible service-oriented framework to ease the development of applications for monitoring and handling patient chronic conditions. The system has been developed in a prototypal version, and integrated with a service platform for supporting operators of home-based care networks in cooperating and sharing patient-related information and coordinating mutual interventions for handling critical and alarm situations. Finally, we discuss experimentation results and possible further research directions."
},
{
"pmid": "20843247",
"title": "Susceptibility to exacerbation in chronic obstructive pulmonary disease.",
"abstract": "BACKGROUND\nAlthough we know that exacerbations are key events in chronic obstructive pulmonary disease (COPD), our understanding of their frequency, determinants, and effects is incomplete. In a large observational cohort, we tested the hypothesis that there is a frequent-exacerbation phenotype of COPD that is independent of disease severity.\n\n\nMETHODS\nWe analyzed the frequency and associations of exacerbation in 2138 patients enrolled in the Evaluation of COPD Longitudinally to Identify Predictive Surrogate Endpoints (ECLIPSE) study. Exacerbations were defined as events that led a care provider to prescribe antibiotics or corticosteroids (or both) or that led to hospitalization (severe exacerbations). Exacerbation frequency was observed over a period of 3 years.\n\n\nRESULTS\nExacerbations became more frequent (and more severe) as the severity of COPD increased; exacerbation rates in the first year of follow-up were 0.85 per person for patients with stage 2 COPD (with stage defined in accordance with Global Initiative for Chronic Obstructive Lung Disease [GOLD] stages), 1.34 for patients with stage 3, and 2.00 for patients with stage 4. Overall, 22% of patients with stage 2 disease, 33% with stage 3, and 47% with stage 4 had frequent exacerbations (two or more in the first year of follow-up). The single best predictor of exacerbations, across all GOLD stages, was a history of exacerbations. The frequent-exacerbation phenotype appeared to be relatively stable over a period of 3 years and could be predicted on the basis of the patient's recall of previous treated events. In addition to its association with more severe disease and prior exacerbations, the phenotype was independently associated with a history of gastroesophageal reflux or heartburn, poorer quality of life, and elevated white-cell count.\n\n\nCONCLUSIONS\nAlthough exacerbations become more frequent and more severe as COPD progresses, the rate at which they occur appears to reflect an independent susceptibility phenotype. This has implications for the targeting of exacerbation-prevention strategies across the spectrum of disease severity. (Funded by GlaxoSmithKline; ClinicalTrials.gov number, NCT00292552.)"
},
{
"pmid": "27897995",
"title": "Monitoring of Physiological Parameters to Predict Exacerbations of Chronic Obstructive Pulmonary Disease (COPD): A Systematic Review.",
"abstract": "INTRODUCTION\nThe value of monitoring physiological parameters to predict chronic obstructive pulmonary disease (COPD) exacerbations is controversial. A few studies have suggested benefit from domiciliary monitoring of vital signs, and/or lung function but there is no existing systematic review.\n\n\nOBJECTIVES\nTo conduct a systematic review of the effectiveness of monitoring physiological parameters to predict COPD exacerbation.\n\n\nMETHODS\nAn electronic systematic search compliant with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was conducted. The search was updated to April 6, 2016. Five databases were examined: Medical Literature Analysis and Retrieval System Online, or MEDLARS Online (Medline), Excerpta Medica dataBASE (Embase), Allied and Complementary Medicine Database (AMED), Cumulative Index of Nursing and Allied Health Literature (CINAHL) and the Cochrane clinical trials database.\n\n\nRESULTS\nSixteen articles met the pre-specified inclusion criteria. Fifteen of these articules reported positive results in predicting COPD exacerbation via monitoring of physiological parameters. Nine studies showed a reduction in peripheral oxygen saturation (SpO₂%) prior to exacerbation onset. Three studies for peak flow, and two studies for respiratory rate reported a significant variation prior to or at exacerbation onset. A particular challenge is accounting for baseline heterogeneity in parameters between patients.\n\n\nCONCLUSION\nThere is currently insufficient information on how physiological parameters vary prior to exacerbation to support routine domiciliary monitoring for the prediction of exacerbations in COPD. However, the method remains promising."
},
{
"pmid": "18367496",
"title": "Home warmth and health status of COPD patients.",
"abstract": "BACKGROUND\nHome Energy Efficiency guidelines recommend domestic indoor temperatures of 21 degrees C for at least 9 h per day in living areas. Is health status of patients with Chronic Obstructive Pulmonary Disease (COPD) associated with maintaining this level of warmth in their homes?\n\n\nMETHODS\nIn a cross-sectional observational study of patients, living in their own homes, living room (LR) and bedroom (BR) temperatures were measured at 30 min intervals over 1 week using electronic dataloggers. Health status was measured with the St George's Respiratory Questionnaire (SGRQ) and EuroQol: EQ VAS. Outdoor temperatures were provided by Met Office.\n\n\nRESULTS\nOne hundred and forty eight patients consented to temperature monitoring. Patients' mean age was 69 (SD 8.5) years, 67 (45%) male, mean percentage of predicted Forced Expiratory Volume in one second (FEV(1)) 41.7 (SD 17.4). Fifty-eight (39%) were current smokers. Independent of age, lung function, smoking and outdoor temperatures, poorer respiratory health status was significantly associated (P = 0.01) with fewer days with 9 h of warmth at 21 degrees C in the LR. A sub analysis showed that patients who smoked experienced more health effects than non-smokers (P < 0.01).\n\n\nCONCLUSION\nMaintaining the warmth guideline of 21 degrees C in living areas for at least 9 h per day was associated with better health status for COPD patients. Patients who were continuing smokers were more vulnerable to reduction in warmth."
},
{
"pmid": "28567499",
"title": "Synergistic effects of temperature and humidity on the symptoms of COPD patients.",
"abstract": "This panel study investigates how temperature, humidity, and their interaction affect chronic obstructive pulmonary disease (COPD) patients' self-reported symptoms. One hundred and six COPD patients from Shanghai, China, were enrolled, and age, smoking status, St. George Respiratory Questionnaire (SGRQ) score, and lung function index were recorded at baseline. The participants were asked to record their indoor temperature, humidity, and symptoms on diary cards between January 2011 and June 2012. Altogether, 82 patients finished the study. There was a significant interactive effect between temperature and humidity (p < 0.0001) on COPD patients. When the indoor humidity was low, moderate, and high, the indoor temperature ORs were 0.969 (95% CI 0.922 to 1.017), 0.977 (0.962 to 0.999), and 0.920 (95% CI 0.908 to 0.933), respectively. Low temperature was a risk factor for COPD patients, and high humidity enhanced its risk on COPD. The indoor temperature should be kept at least on average at 18.2 °C, while the humidity should be less than 70%. This study demonstrates that temperature and humidity were associated with COPD patients' symptoms, and high humidity would enhance the risk of COPD due to low temperature."
},
{
"pmid": "4042547",
"title": "Effects of age on body temperature and blood pressure in cold environments.",
"abstract": "Mean deep body temperature fell by 0.4 +/- 0.1 (SD) degrees C in five sedentary, clothed 63-70 year old men and by 0.1 +/- 0.1 degrees C in four young adults after 2 h exposure in still air at 6 degrees C (P less than 0.001). The mean increase in systolic and diastolic pressure was significantly greater (P less than 0.002) in the older subjects (24 +/- 4 mmHg systolic, 13 +/- 4 mmHg diastolic) than in the young (14 +/- 6 mmHg systolic, 7 +/- 3 mmHg diastolic) after 2 h at 6 degrees C. A small rise in blood pressure occurred in the older men at 12 degrees C, but there was no increase in either group at 15 degrees C. The association of variables is particularly marked between systolic blood pressure and core temperature changes at 6 degrees C. There were no appreciable cold-adaptive changes in blood pressure or thermoregulatory responses after 7-10 days repeated exposure to 6 degrees C for 4 h each day. Blood pressure elevation in the cold was slower but more marked in the older men. These changes in blood pressure may provide a possible basis for delineating low domestic limiting temperature conditions."
},
{
"pmid": "15330778",
"title": "Summary of human responses to ventilation.",
"abstract": "UNLABELLED\nIt is known that ventilation is necessary to remove indoor-generated pollutants from indoor air or dilute their concentration to acceptable levels. But as the limit values of all pollutants are not known the exact determination of required ventilation rates based on pollutant concentrations is seldom possible. The selection of ventilation rates has to be based also on epidemiological research, laboratory and field experiments and experience. The existing literature indicates that ventilation has a significant impact on several important human outcomes including: (1) communicable respiratory illnesses; (2) sick building syndrome symptoms; (3) task performance and productivity, and (4) perceived air quality (PAQ) among occupants or sensory panels (5) respiratory allergies and asthma. In many studies, prevalence of sick building syndrome symptoms has also been associated with characteristics of HVAC-systems. Often the prevalence of SBS symptoms is higher in air-conditioned buildings than in naturally ventilated buildings. The evidence suggests that better hygiene, commissioning, operation and maintenance of air handling systems may be particularly important for reducing the negative effects of HVAC systems. Ventilation may also have harmful effects on indoor air quality and climate if not properly designed, installed, maintained and operated. Ventilation may bring indoors harmful substances or deteriorate indoor environment. Ventilation interacts also with the building envelope and may deteriorate the structures of the building. Ventilation changes the pressure differences across the structures of building and may cause or prevent infiltration of pollutants from structures or adjacent spaces. Ventilation is also in many cases used to control the thermal environment or humidity in buildings. The paper summarises the current knowledge on positive and negative effects of ventilation on health and other human responses. The focus is on office-type working environment and residential buildings.\n\n\nPRACTICAL IMPLICATIONS\nThe review shows that ventilation has various positive impacts on health and productivity of building occupants. Ventilation reduces the prevalence of airborne infectious diseases and thus the number of sick leave days. In office environment a ventilation rate up to 20-25 L/s per person seem to decrease the prevalence of SBS-symptoms. Air conditioning systems may increase the prevalence of SBS-symptoms relative to natural ventilation if not clean. In residential buildings the air change rate in cold climates should not be below app. 0.5 ach. Ventilation systems may cause pressure differences over the building envelope and bring harmful pollutants indoors."
},
{
"pmid": "17062658",
"title": "The effect of breathing an ambient low-density, hyperoxic gas on the perceived effort of breathing and maximal performance of exercise in well-trained athletes.",
"abstract": "BACKGROUND\nThe role of the perception of breathing effort in the regulation of performance of maximal exercise remains unclear.\n\n\nAIMS\nTo determine whether the perceived effort of ventilation is altered through substituting a less dense gas for normal ambient air and whether this substitution affects performance of maximal incremental exercise in trained athletes.\n\n\nMETHODS\nEight highly trained cyclists (mean SD) maximal oxygen consumption (VO(2)max) = 69.9 (7.9) (mlO(2)/kg/min) performed two randomised maximal tests in a hyperbaric chamber breathing ambient air composed of either 35% O(2)/65% N(2) (nitrox) or 35% O(2)/65% He (heliox). A ramp protocol was used in which power output was incremented at 0.5 W/s. The trials were separated by at least 48 h. The perceived effort of breathing was obtained via Borg Category Ratio Scales at 3-min intervals and at fatigue. Oxygen consumption (VO(2)) and minute ventilation (V(E)) were monitored continuously.\n\n\nRESULTS\nBreathing heliox did not change the sensation of dyspnoea: there were no differences between trials for the Borg scales at any time point. Exercise performance was not different between the nitrox and heliox trials (peak power output = 451 (58) and 453 (56) W), nor was VO(2)max (4.96 (0.61) and 4.88 (0.65) l/min) or maximal V(E) (157 (24) and 163 (22) l/min). Between-trial variability in peak power output was less than either VO(2)max or maximal V(E).\n\n\nCONCLUSION\nBreathing a less dense gas does not improve maximal performance of exercise or reduce the perception of breathing effort in highly trained athletes, although an attenuated submaximal tidal volume and V(E) with a concomitant reduction in VO(2) suggests an improved gas exchange and reduced O(2) cost of ventilation when breathing heliox."
},
{
"pmid": "27774480",
"title": "Frequently asked questions in hypoxia research.",
"abstract": "\"What is the O2 concentration in a normoxic cell culture incubator?\" This and other frequently asked questions in hypoxia research will be answered in this review. Our intention is to give a simple introduction to the physics of gases that would be helpful for newcomers to the field of hypoxia research. We will provide background knowledge about questions often asked, but without straightforward answers. What is O2 concentration, and what is O2 partial pressure? What is normoxia, and what is hypoxia? How much O2 is experienced by a cell residing in a culture dish in vitro vs in a tissue in vivo? By the way, the O2 concentration in a normoxic incubator is 18.6%, rather than 20.9% or 20%, as commonly stated in research publications. And this is strictly only valid for incubators at sea level."
},
{
"pmid": "19593155",
"title": "Extreme high temperatures and hospital admissions for respiratory and cardiovascular diseases.",
"abstract": "BACKGROUND\nAlthough the association of high temperatures with mortality is well-documented, the association with morbidity has seldom been examined. We assessed the potential impact of hot weather on hospital admissions due to cardiovascular and respiratory diseases in New York City. We also explored whether the weather-disease relationship varies with socio-demographic variables.\n\n\nMETHOD\nWe investigated effects of temperature and humidity on health by linking the daily cardiovascular and respiratory hospitalization counts with meteorologic conditions during summer, 1991-2004. We used daily mean temperature, mean apparent temperature, and 3-day moving average of apparent temperature as the exposure indicators. Threshold effects for health risks of meteorologic conditions were assessed by log-linear threshold models, after controlling for ozone, day of week, holidays, and long-term trend. Stratified analyses were used to evaluate temperature-demographic interactions.\n\n\nRESULTS\nFor all 3 exposure indicators, each degree C above the threshold of the temperature-health effect curve (29 degrees C-36 degrees C) was associated with a 2.7%-3.1% increase in same-day hospitalizations due to respiratory diseases, and an increase of 1.4%-3.6% in lagged hospitalizations due to cardiovascular diseases. These increases for respiratory admissions were greater for Hispanic persons (6.1%/ degrees C) and the elderly (4.7%/ degrees C). At high temperatures, admission rates increased for chronic airway obstruction, asthma, ischemic heart disease, and cardiac dysrhythmias, but decreased for hypertension and heart failure.\n\n\nCONCLUSIONS\nExtreme high temperature appears to increase hospital admissions for cardiovascular and respiratory disorders in New York City. Elderly and Hispanic residents may be particularly vulnerable to the temperature effects on respiratory illnesses."
},
{
"pmid": "23554858",
"title": "The effect of cold temperature on increased exacerbation of chronic obstructive pulmonary disease: a nationwide study.",
"abstract": "BACKGROUND\nSeasonal variations in the acute exacerbation of chronic obstructive pulmonary disease (COPD) have been reported. However, the influence of air temperature and other meteorological factors on COPD exacerbation remains unclear.\n\n\nMETHODS\nNational Health Insurance registry data from January 1, 1999 to December 1, 2009 and meteorological variables from the Taiwan Central Weather Bureau for the same period were analyzed. A case-crossover study design was used to investigate the association between COPD exacerbation and meteorological variables.\n\n\nRESULTS\nA total of 16,254 cases who suffered from COPD exacerbation were enrolled. We found that a 1°C decrease in air temperature was associated with a 0.8% increase in the exacerbation rate on event-days (95% confidence interval (CI), 1.015-1.138, p = 0.015). With a 5°C decrease in mean temperature, the cold temperature (28-day average temperature) had a long-term effect on the exacerbation of COPD (odds ratio (OR), 1.106, 95% CI 1.063-1.152, p<0.001). In addition, elderly patients and those who did not receive inhaled medication tended to suffer an exacerbation when the mean temperature dropped 5°C. Higher barometric pressure, more hours of sunshine, and lower humidity were associated with an increase in COPD exacerbation.\n\n\nCONCLUSIONS\nThis study demonstrated the effect of cold temperatures on the COPD exacerbation rate. Elderly patients and those without inhaled medicine before the exacerbation event were affected significantly by lower mean temperatures. A more comprehensive program to prevent cold stress in COPD patients may lead to a reduction in the exacerbations rate of COPD."
},
{
"pmid": "21737561",
"title": "Seasonality and determinants of moderate and severe COPD exacerbations in the TORCH study.",
"abstract": "We investigated the impact of season relative to other determinants of chronic obstructive pulmonary disease (COPD) exacerbation frequency in a long-term international study of patients with forced expiratory volume in 1 s (FEV(1)) <60% predicted. COPD exacerbations were defined by worsening symptoms requiring systemic corticosteroids and/or antibiotics (moderate) or hospital admission (severe). Seasonality effect was calculated as the proportion of patients experiencing an exacerbation each month. Exacerbations in the northern and southern regions showed an almost two-fold increase in the winter months. No seasonal pattern occurred in the tropics. Overall, 38% of exacerbations were treated with antibiotics only, 19% with systemic corticosteroids only and 43% with both, while 20% required hospital admission irrespective of the season. Exacerbation frequency was associated with older age, lower body mass index, lower FEV(1) % pred and history of prior exacerbations. Females and patients with worse baseline breathlessness, assessed using the Medical Research Council (MRC) dyspnoea scale, exacerbated more often (rate ratio (RR) for male versus female 0.7, 95% CI 0.7-0.8 (p<0.001); RR for MRC dyspnoea score 3 versus 1 and 2 combined 1.1, 95% CI 1.1-1.2 (p<0.001)). The effect of season was independent of these risk factors. COPD exacerbations and hospitalisations were more frequent in winter."
},
{
"pmid": "26876040",
"title": "Outdoor Temperature, Heart Rate and Blood Pressure in Chinese Adults: Effect Modification by Individual Characteristics.",
"abstract": "We collected data from Kailuan cohort study from 2006 to 2011 to examine whether short-term effects of ambient temperature on heart rate (HR) and blood pressure (BP) are non-linear or linear, and their potential modifying factors. The HR, BP and individual information, including basic characteristics, life style, socio-economic characteristics and other characteristics, were collected for each participant. Daily mean temperature and relative humidity were collected. A regression model was used to evaluate associations of temperature with HR and BP, with a non-linear function for temperature. We also stratified the analyses in different groups divided by individual characteristics. 47,591 residents were recruited. The relationships of temperature with HR and BP were \"V\" shaped with thresholds ranging from 22 °C to 28 °C. Both cold and hot effects were observed on HR and BP. The differences of effect estimates were observed among the strata of individual characteristics. The effect estimate of temperature was higher among older people. The cold effect estimate was higher among people with lower Body Mass Index. However, the differences of effect estimates among other groups were inconsistent. These findings suggest both cold and hot temperatures may have short-term impacts on HR and BP. The individual characteristics could modify these relationships."
},
{
"pmid": "10362051",
"title": "Effect of temperature on lung function and symptoms in chronic obstructive pulmonary disease.",
"abstract": "The present study investigated whether falls in environmental temperature increase morbidity from chronic obstructive pulmonary disease (COPD). Daily lung function and symptom data were collected over 12 months from 76 COPD patients living in East London and related to outdoor and bedroom temperature. Questionnaires were administered which asked primarily about the nature of night-time heating. A fall in outdoor or bedroom temperature was associated with increased frequency of exacerbation, and decline in lung function, irrespective of whether periods of exacerbation were excluded. Forced expiratory volume in one second (FEV1) and forced vital capacity (FVC) fell markedly by a median of 45 mL (95% percentile range: -113-229 mL) and 74 mL (-454-991 mL), respectively, between the warmest and coolest week of the study. The questionnaire revealed that 10% had bedrooms <13 degrees C for 25% of the year, possibly because only 21% heated their bedrooms and 48% kept their windows open in November. Temperature-related reduction in lung function, and increase in exacerbations may contribute to the high level of cold-related morbidity from chronic obstructive pulmonary disease."
},
{
"pmid": "29346418",
"title": "The relationship of lung function with ambient temperature.",
"abstract": "BACKGROUND\nLung function is complex trait with both genetic and environmental factors contributing to variation. It is unknown how geographic factors such as climate affect population respiratory health.\n\n\nOBJECTIVE\nTo determine whether ambient air temperature is associated with lung function (FEV1) in the general population.\n\n\nDESIGN/SETTING\nAssociations between spirometry data from two National Health and Nutrition Examination Survey (NHANES) periods representative of the U.S. non-institutionalized population and mean annual ambient temperature were assessed using survey-weighted multivariate regression.\n\n\nPARTICIPANTS/MEASUREMENTS\nThe NHANES III (1988-94) cohort included 14,088 individuals (55.6% female) and the NHANES 2007-12 cohort included 14,036 individuals (52.3% female), with mean ages of 37.4±23.4 and 34.4±21.8 years old and FEV1 percent predicted values of 99.8±15.8% and 99.2±14.5%, respectively.\n\n\nRESULTS\nAfter adjustment for confounders, warmer ambient temperatures were associated with lower lung function in both cohorts (NHANES III p = 0.020; NHANES 2007-2012 p = 0.014). The effect was similar in both cohorts with a 0.71% and 0.59% predicted FEV1 decrease for every 10°F increase in mean temperature in the NHANES III and NHANES 2007-2012 cohorts, respectively. This corresponds to ~2 percent predicted difference in FEV1 between the warmest and coldest regions in the continental United States.\n\n\nCONCLUSIONS\nIn the general U.S. population, residing in regions with warmer ambient air temperatures was associated with lower lung function with an effect size similar to that of traffic pollution. Rising temperatures associated with climate change could have effects on pulmonary function in the general population."
},
{
"pmid": "21262460",
"title": "Out of thin air: sensory detection of oxygen and carbon dioxide.",
"abstract": "Oxygen (O₂) and carbon dioxide (CO₂) levels vary in different environments and locally fluctuate during respiration and photosynthesis. Recent studies in diverse animals have identified sensory neurons that detect these external variations and direct a variety of behaviors. Detection allows animals to stay within a preferred environment as well as identify potential food or dangers. The complexity of sensation is reflected in the fact that neurons compartmentalize detection into increases, decreases, and short-range and long-range cues. Animals also adjust their responses to these prevalent signals in the context of other cues, allowing for flexible behaviors. In general, the molecular mechanisms for detection suggest that sensory neurons adopted ancient strategies for cellular detection and coupled them to brain activity and behavior. This review highlights the multiple strategies that animals use to extract information about their environment from variations in O₂ and CO₂."
},
{
"pmid": "25953970",
"title": "Relationship between altitude and the prevalence of hypertension in Tibet: a systematic review.",
"abstract": "INTRODUCTION\nHypertension is a leading cause of cardiovascular disease, which is the cause of one-third of global deaths and is a primary and rising contributor to the global disease burden. The objective of this systematic review was to determine the prevalence and awareness of hypertension among the inhabitants of Tibet and its association with altitude, using the data from published observational studies.\n\n\nMETHODS\nWe conducted electronic searches in Medline, Embase, ISI Web of Science and Global Health. No gender or language restrictions were imposed. We assessed the methodological characteristics of included studies using the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) criteria. Two reviewers independently determined the eligibility of studies, assessed the methodology of included studies and extracted the data. We used meta-regression to estimate the degree of change in hypertension prevalence with increasing altitude.\n\n\nRESULTS\nWe identified 22 eligible articles of which eight cross-sectional studies with a total of 16 913 participants were included. The prevalence of hypertension ranged between 23% and 56%. A scatter plot of altitude against overall prevalence revealed a statistically significant correlation (r=0.68; p=0.04). Meta-regression analysis revealed a 2% increase in the prevalence of hypertension with every 100 m increase in altitude (p=0.06). The locations and socioeconomic status of subjects affected the awareness and subsequent treatment and control of hypertension.\n\n\nCONCLUSIONS\nThe results from cross-sectional studies suggest that there is a significant correlation between altitude and the prevalence of hypertension among inhabitants of Tibet. The socioeconomic status of the inhabitants can influence awareness and management of hypertension. Very little research into hypertension has been conducted in other prefectures of Tibet where the altitude is much higher. Further research examining the impact of altitude on blood pressure is warranted."
},
{
"pmid": "2764404",
"title": "Hypoxemia during air travel in patients with chronic obstructive pulmonary disease.",
"abstract": "STUDY OBJECTIVE\nTo quantitate and identify determinants of the severity of hypoxemia during air travel in patients with chronic obstructive pulmonary disease.\n\n\nDESIGN\nProspective study of physiologic variables before and during intervention.\n\n\nSETTING\nReferral-based pulmonary disease clinic at a U.S. Army medical center.\n\n\nPATIENTS\nEighteen ambulatory retired servicemen (age 68 +/- 6 [SD] years) with severe chronic obstructive pulmonary disease (forced expiratory volume in the first second [FEV1] 31% +/- 10% of predicted).\n\n\nINTERVENTION\nAltitude simulation equivalent to 2438 meters (8000 feet) above sea level in a hypobaric chamber.\n\n\nMEASUREMENTS AND MAIN RESULTS\nRadial artery catheter blood oxygen tension in the patients declined from a ground value (PaO2G) at sea level of 72.4 +/- 9 mm Hg to an altitude value (PaO2Alt) of 47.4 +/- 6 mm Hg after 45 minutes of steady state hypobaric exposure. The PaO2G correlated with PaO2Alt (r = 0.587; P less than 0.01). Multiple regression analysis revealed that the preflight FEV1 reduced the variability in PaO2Alt not explained by PaO2G in the equation: PaO2Alt = 0.453 [PaO2G] + 0.386 [FEV1% predicted] + 2.440 (r = 0.847; P less than 0.001). Residuals from two previously published formulas using PaO2G also correlated with FEV1 (r greater than or equal to 0.765; P less than 0.001).\n\n\nCONCLUSIONS\nArterial blood oxygen tension declined to clinically significant levels in most patients during hypobaric exposure. When combined with the preflight arterial PO2 at ground level (PaO2G), the measurement of the preflight FEV1 improved prediction of PaO2 at altitude (PaO2Alt) in patients with severe chronic obstructive pulmonary disease."
},
{
"pmid": "28243206",
"title": "SpO2 and Heart Rate During a Real Hike at Altitude Are Significantly Different than at Its Simulation in Normobaric Hypoxia.",
"abstract": "Rationale: Exposures to simulated altitude (normobaric hypoxia, NH) are frequently used in preparation for mountaineering activities at real altitude (hypobaric hypoxia, HH). However, physiological responses to exercise in NH and HH may differ. Unfortunately clinically useful information on such differences is largely lacking. This study therefore compared exercise responses between a simulated hike on a treadmill in NH and a similar field hike in HH. Methods: Six subjects (four men) participated in two trials, one in a NH chamber and a second in HH at an altitude of 4,205 m on the mountain Mauna Kea. Subjects hiked in each setting for 7 h including breaks. In NH, hiking was simulated by walking on a treadmill. To achieve maximal similarity between hikes, subjects used the same nutrition, clothes, and gear weight. Measurements of peripheral oxygen saturation (SpO2), heart rate (HR) and barometrical pressure (PB)/inspired oxygen fraction (FiO2) were taken every 15 min. Acute mountain sickness (AMS) symptoms were assessed using the Lake-Louise-Score at altitudes of 2,800, 3,500, and 4,200 m. Results: Mean SpO2 values of 85.8% in NH were significantly higher compared to those of 80.2% in HH (p = 0.027). Mean HR values of 103 bpm in NH were significantly lower than those of 121 bpm in HH (p = 0.029). AMS scores did not differ significantly between the two conditions. Conclusion: Physiological responses to exercise recorded in NH are different from those provoked by HH. These findings are of clinical importance for subjects using simulated altitude to prepare for activity at real altitude. Trial registration: Registration at DRKS. (Approval No. 359/12, Trial No. DRKS00005241)."
},
{
"pmid": "24111929",
"title": "Exercise endurance in chronic obstructive pulmonary disease patients at an altitude of 2640 meters breathing air and oxygen (FIO2 28% and 35%): a randomized crossover trial.",
"abstract": "BACKGROUND\nAt Bogota's altitude (2640 m), the lower barometric pressure (560 mmHg) causes severe hypoxemia in COPD patients, limiting their exercise capacity. The aim was to compare the effects of breathing oxygen on exercise tolerance.\n\n\nMETHODS\nIn a blind, crossover clinical study, 29 COPD patients (FEV1 42.9 ± 11.9%) breathed room air (RA) or oxygen (FIO2 28% and 35%) during three treadmill exercise tests at 70% of their maximal capacity in a randomized order. Endurance time (ET), inspiratory capacity (IC), arterial blood gases and lactate were compared.\n\n\nRESULTS\nAt the end of the exercise breathing RA, the ET was 9.7 ± 4.2 min, the PaO2 46.5 ± 8.2 mmHg, the lactate increased and the IC decreased. The oxygen significantly increased the ET (p < 0.001), without differences between 28% (16.4 ± 6.8 min) and 35% (17.6 ± 7.0 min) (p = 0.22). Breathing oxygen, there was an increase in the PaO2 and SaO2, higher with FIO2 35%, and a decrease in the lactate level. At \"isotime\" (ET at RA), with oxygen, the SpO2, the oxygen pulse and the IC were higher and the heart rate lower than breathing RA (p < 0.05).\n\n\nCONCLUSION\nOxygen administration for COPD patients in Bogotá significantly increased ET by decreased respiratory load, improved cardiovascular performance and oxygen transport. The higher increases of the PaO2 and SaO2 with 35% FIO2 did not represent a significant advantage in the ET. This finding has important logistic and economic implications for oxygen use in rehabilitation programs of COPD patients at the altitude of Bogotá and similar altitudes."
},
{
"pmid": "28003742",
"title": "Major air pollutants and risk of COPD exacerbations: a systematic review and meta-analysis.",
"abstract": "BACKGROUND\nShort-term exposure to major air pollutants (O3, CO, NO2, SO2, PM10, and PM2.5) has been associated with respiratory risk. However, evidence on the risk of chronic obstructive pulmonary disease (COPD) exacerbations is still limited. The present study aimed at evaluating the associations between short-term exposure to major air pollutants and the risk of COPD exacerbations.\n\n\nMETHODS\nAfter a systematic search up until March 30, 2016, in both English and Chinese electronic databases such as PubMed, EMBASE, and CNKI, the pooled relative risks and 95% confidence intervals were estimated by using the random-effects model. In addition, the population-attributable fractions (PAFs) were also calculated, and a subgroup analysis was conducted. Heterogeneity was assessed by I2.\n\n\nRESULTS\nIn total, 59 studies were included. In the single-pollutant model, the risks of COPD were calculated by each 10 μg/m3 increase in pollutant concentrations, with the exception of CO (100 μg/m3). There was a significant association between short-term exposure and COPD exacerbation risk for all the gaseous and particulate pollutants. The associations were strongest at lag0 and lag3 for gaseous and particulate air pollutants, respectively. The subgroup analysis not only further confirmed the overall adverse effects but also reduced the heterogeneities obviously. When 100% exposure was assumed, PAFs ranged from 0.60% to 4.31%, depending on the pollutants. The adverse health effects of SO2 and NO2 exposure were more significant in low-/middle-income countries than in high-income countries: SO2, relative risk: 1.012 (95% confidence interval: 1.001, 1.023); and NO2, relative risk: 1.019 (95% confidence interval: 1.014, 1.024).\n\n\nCONCLUSION\nShort-term exposure to air pollutants increases the burden of risk of COPD acute exacerbations significantly. Controlling ambient air pollution would provide benefits to COPD patients."
},
{
"pmid": "21890573",
"title": "Long-term exposure to air pollution and asthma hospitalisations in older adults: a cohort study.",
"abstract": "BACKGROUND\nExposure to air pollution in early life contributes to the burden of childhood asthma, but it is not clear whether long-term exposure to air pollution can lead to asthma onset or progression in adulthood.\n\n\nOBJECTIVES\nThe authors studied the effect of exposure to traffic-related air pollution over 35 years on the risk for asthma hospitalisation in older people.\n\n\nMETHODS\n57 053 participants in the Danish Diet, Cancer and Health cohort, aged 50-65 years at baseline (1993-1997), were followed up for first hospital admission for asthma until 2006, and the annual nitrogen dioxide (NO(2)) levels were estimated as a proxy of the exposure to traffic-related air pollution at the residential addresses of the participants since 1971. The association between NO(2) and hospitalisation for asthma was modelled using Cox regression, for the full cohort and in people with and without previous hospitalisations for asthma, and the effect modification by comorbid conditions was assessed.\n\n\nRESULTS\nDuring 10.2 years' median follow-up, 977 (1.9%) of 53 695 eligible people were admitted to hospital for asthma: 821 were first-ever admissions and 176 were readmissions. NO(2) levels were associated with risk for asthma hospitalisation in the full cohort (HR and 95% CI per IQR, 5.8 μg/m(3): 1.12; 1.04-1.22), and for first-ever admissions (1.10; 1.01-1.20), with the highest risk in people with a history of asthma (1.41; 1.15-2.07) or chronic obstructive pulmonary disease (COPD) (1.30; 1.07-1.52) hospitalisation.\n\n\nCONCLUSIONS\nLong-term exposure to traffic-related air pollution increases the risk for asthma hospitalisation in older people. People with previous asthma or COPD hospitalisations are most susceptible."
},
{
"pmid": "19252992",
"title": "Effect of particulate matter, atmospheric gases, temperature, and humidity on respiratory and circulatory diseases' trends in Lisbon, Portugal.",
"abstract": "This study addresses the significant effects of both well-known contaminants (particles, gases) and less-studied variables (temperature, humidity) on serious, if relatively common, respiratory and circulatory diseases. The area of study is Lisbon, Portugal, and time series of health outcome (daily admissions in 12 hospitals) and environmental data (daily averages of air temperature, relative humidity, PM(10), SO(2), NO, NO(2), CO, and O(3)) have been gathered for 1999-2004 to ascertain (1) whether concentrations of air pollutants and levels of temperature and humidity do interfere on human health, as gauged by hospital admissions due to respiratory and circulatory ailments; and (2) whether there is an effect of population age in such admissions. In general terms, statistically significant (p < 0.001) correlations were found between hospital admissions and temperature, humidity, PM(10), and all gaseous pollutants except CO and NO. Age appears to influence respiratory conditions in association with temperature, whereas, for circulatory conditions, such an influence likely involves temperature as well as the gaseous pollutants NO(2) and SO(2)."
},
{
"pmid": "22505744",
"title": "Bronchoconstriction triggered by breathing hot humid air in patients with asthma: role of cholinergic reflex.",
"abstract": "RATIONALE\nHyperventilation of hot humid air induces transient bronchoconstriction in patients with asthma; the underlying mechanism is not known. Recent studies showed that an increase in temperature activates vagal bronchopulmonary C-fiber sensory nerves, which upon activation can elicit reflex bronchoconstriction.\n\n\nOBJECTIVES\nThis study was designed to test the hypothesis that the bronchoconstriction induced by increasing airway temperature in patients with asthma is mediated through cholinergic reflex resulting from activation of these airway sensory nerves.\n\n\nMETHODS\nSpecific airway resistance (SR(aw)) and pulmonary function were measured to determine the airway responses to isocapnic hyperventilation of humidified air at hot (49°C; HA) and room temperature (20-22°C; RA) for 4 minutes in six patients with mild asthma and six healthy subjects. A double-blind design was used to compare the effects between pretreatments with ipratropium bromide and placebo aerosols on the airway responses to HA challenge in these patients.\n\n\nMEASUREMENTS AND MAIN RESULTS\nSR(aw) increased by 112% immediately after hyperventilation of HA and by only 38% after RA in patients with asthma. Breathing HA, but not RA, triggered coughs in these patients. In contrast, hyperventilation of HA did not cause cough and increased SR(aw) by only 22% in healthy subjects; there was no difference between their SR(aw) responses to HA and RA challenges. More importantly, pretreatment with ipratropium completely prevented the HA-induced bronchoconstriction in patients with asthma.\n\n\nCONCLUSIONS\nBronchoconstriction induced by increasing airway temperature in patients with asthma is mediated through the cholinergic reflex pathway. The concomitant increase in cough response further indicates an involvement of airway sensory nerves, presumably the thermosensitive C-fiber afferents."
},
{
"pmid": "15370758",
"title": "ICF Core Sets for obstructive pulmonary diseases.",
"abstract": "OBJECTIVE\nTo report on the results of the consensus process integrating evidence from preliminary studies to develop the first version of the Comprehensive ICF Core Set and a Brief ICF Core Set for obstructive pulmonary diseases.\n\n\nMETHODS\nA formal decision-making and consensus process integrating evidence gathered from preliminary studies was followed. Preliminary studies included a Delphi exercise, a systematic review and an empirical data collection. After training in the ICF and based on these preliminary studies relevant ICF categories were identified in a formal consensus process by international experts from different backgrounds.\n\n\nRESULTS\nThe preliminary studies identified a set of 287 ICF categories at the second, third and fourth ICF levels with 97 categories on body functions, 33 on body structures, 104 on activities and participation, and 53 on environmental factors. Seventeen experts from 8 different countries attended the consensus conference on obstructive pulmonary diseases. Altogether 67 second-level and 4 third-level categories were included in the Comprehensive ICF Core Set with 19 categories from the component \"body functions\", 5 from \"body structures\", 24 from \"activities and participation\" and 23 from \"environmental factors\". The Brief ICF Core Set included a total of 17 second-level categories with 5 on \"body functions\", 3 on \"body structures\", 5 on \"activities and participation\" and 4 on \"environmental factors\".\n\n\nCONCLUSION\nA formal consensus process integrating evidence and expert opinion based on the ICF framework and classification led to the definition of ICF Core Sets for obstructive pulmonary diseases. Both the Comprehensive ICF Core Set and the Brief ICF Core Set were defined."
},
{
"pmid": "28244799",
"title": "Functional Tests in Chronic Obstructive Pulmonary Disease, Part 1: Clinical Relevance and Links to the International Classification of Functioning, Disability, and Health.",
"abstract": "Chronic obstructive pulmonary disease is a major cause of morbidity and mortality worldwide and an important cause of disability. A thorough patient-centered outcome assessment, including not only measures of lung function, exercise capacity, and health-related quality of life, but also functional capacity and performance in activities of daily life, is imperative for a comprehensive management of chronic obstructive pulmonary disease. This American Thoracic Society Seminar Series is devoted to help clinicians substantiate their choice of functional outcome measures in this population. In Part 1 of this two-part seminar series, we describe the various domains of functional status to elucidate terms and key concepts intertwined with functioning and to demonstrate the clinical relevance of assessing functional capacity in the context of activities of daily living in agreement with the International Classification of Functioning, Disability, and Health. We hope that a better understanding of the various defining components of functional status will be instrumental to healthcare providers to optimize chronic obstructive pulmonary disease evaluation and management, ultimately leading to improved quality of life of patients afflicted by this condition. This first article also serves as an introduction to Part 2 of this seminar series, in which the main functional tests available to assess upper and lower body functional capacity of these patients are discussed."
},
{
"pmid": "23343360",
"title": "Comprehensive ICF core set for obstructive pulmonary diseases: validation of the activities and participation component through the patient's perspective.",
"abstract": "PURPOSE\nThis study aimed to validate the Activities and Participation component of the Comprehensive International Classification of Functioning, Disability and Health (ICF) Core Set for Obstructive Pulmonary Diseases (OPD) from the patient's perspective.\n\n\nMETHODS\nA cross-sectional qualitative study was conducted with a convenience sample of outpatients with Chronic Obstructive Pulmonary Disease (COPD). Individual interviews were performed and analysed according to the meaning condensation procedure.\n\n\nRESULTS\nFifty-one participants (70.6% male) with a mean age of 69.5 ± 10.8 years old were included. Twenty-one of the 24 categories contained in the Activities and Participation component of the Comprehensive ICF Core Set for OPD were identified by the participants. Additionally, seven second-level categories that are not covered by the Core Set were reported: complex interpersonal interactions, informal social relationships, family relationships, conversation, maintaining a body position, eating and preparing meals.\n\n\nCONCLUSIONS\nThe activities and participation component of the ICF Core Set for OPD was largely supported by the patient's perspective. The categories included in the ICF Core Set that were not confirmed by the participants and the additional categories that were raised need to be further investigated in order to develop an instrument according to the patient's perspective. This will promote a more patient-centred assessments and rehabilitation interventions. Implications for Rehabilitation The Activities and Participation component of the Comprehensive ICF Core Set for OPD is largely supported by the perspective of patients with COPD and therefore could be used in the assessment of patients' individual and social life. The information collected through the Activities and Participation component of the Comprehensive ICF Core Set for OPD could be used to plan and assess rehabilitation interventions for patients with COPD."
},
{
"pmid": "19533541",
"title": "Free-living physical activity in COPD: assessment with accelerometer and activity checklist.",
"abstract": "To assess physical activity and disability in chronic obstructive pulmonary disease (COPD), we evaluated the use of an accelerometer and checklist to measure free-living physical activity. Seventeen males with stable COPD completed a daily activity checklist for 14 days. Ten subjects concurrently wore an Actiped accelerometer (FitSense, Southborough, Massachussetts) that records steps per day. Regression models assessed relationships between steps per day, number of daily checklist activities performed, and clinical measures of COPD status. The average steps per day ranged from 406 to 4,856. The median intrasubject coefficient of variation for steps per day was 0.52 (interquartile range [IQR] 0.41-0.58) and for number of daily checklist activities performed was 0.28 (IQR 0.22-0.32). A higher number of steps per day was associated with a greater distance walked on the 6-minute walk test and better health-related quality of life. A higher number of daily checklist activities performed was associated with a higher force expiratory volume in 1 s percent predicted and lowerbody mass index, airflow obstruction, dyspnea, exercise capacity (BODE) index. Prospectively measuring free-living physical activity in COPD using an unobtrusive accelerometer and simple activity checklist is feasible. Low intrasubject variation was found in free-living physical activity, which is significantly associated with clinical measures of COPD status."
}
] |
Biomimetics | 31757024 | PMC6963702 | 10.3390/biomimetics4040074 | A Case Study of Adding Proactivity in Indoor Social Robots Using Belief–Desire–Intention (BDI) Model | The rise of robots and robotics has proved to be a benefaction to humankind in different aspects. Robotics evolved from a simple button, has seen massive development over the years. Consequently, it has become an integral part of human life as robots are used for a wide range of applications ranging from indoor uses to interplanetary missions. Recently, the use of social robots, in commercial indoor spaces to offer help or social interaction with people, has been quite popular. As such, taking the increasing use of social robots into consideration, many works have been carried out to develop the robots to make them capable of acting like humans. The notion behind this development is the need for robots to offer services without being asked. Social robots should think more like humans and suggest possible and suitable actions by analyzing the environment where they are. Belief–desire–intention (BDI) is one of the most popular models for developing rational agents based on how humans act based on the information derived from an environment. As such, this work defines a foundation architecture to integrate a BDI framework into a social robot to add “act like a human” feature for proactive behaviors. The work validates the proposed architecture by developing a vision-based proactive action using the PROFETA BDI framework in an indoor social robot, Waldo, operated by the robot operating system (ROS). | 2. Related WorksIn this section, we describe the background of different BDI models and different frameworks built to integrate behavior models in robots. Furthermore, we present different works where researchers have tried to incorporate such models in robots for various daily life applications.2.1. BDI ModelWhenever the design of the cognition model for software agents comes into play, the BDI model is one of the most popular architectural choices. BDI models provide an explicit and declarative representation of informational attitudes, motivational attitudes, and deliberative commitments. Myers et al. [16] divided the BDI models into two broad categories of B-DOING and Delegative models. In the B-DOING model, motivational attitudes are highly adapted, and desires correspond to what the agent wishes. Furthermore, obligations corresponded to the responsibilities of other agents and norms correspond to conventions derived from the agent’s role in the environment. The goal created for the agent needs to be consistent and achievable [17]. According to the definition of the goal, the intentions for executions are planned. In the delegative model, the goals are defined as candidate goals and adopted goals [18]. Candidate goals are those that can be inconsistent internally while Adopted goals are the consistent and coherent ones in the BDI model. This model can even incorporate user-specified guidance and preferences from the user in the form of advice. The B-DOING framework lacks the distinctions between types of goals for proactive assistance, while the delegative BDI framework lacks the distinctions between types of motivational attitudes [16].2.2. BDI FrameworksRussel et al. [19] developed the agent factory framework as an open-source collection of various tools, platforms and languages that ultimately facilitate the development and development of multi-agent systems. 
Winikoff [20] built a highly portable, robust and cross-platform environment called JACK for building, running and integrating commercial-grade multi-agent systems. In the BDI framework called JADE [21], the agent platform can be distributed among different independent machines and controlled remotely. The configuration can even be changed at run-time by the moving agents from one machine to another one during the implementation. Braubach and Pokahr [22] developed a framework called JADEX, based on XML and Java, that follows BDI model and facilitates easier intelligent agent system construction with an engineering perspective. JASON is a super flexible platform developed as an extension of AgentSpeak [23] by [24], that implements the semantics of the language and provides a good platform for development of multi-agent systems with many customizable features. The comparison between different behavior model platform is given in Table 1.ROS supports the C++ and Python programming languages for communication between different distributed nodes in its ecosystem. Because of various BDI frameworks available in Python, a pythonic framework is considered in this study.2.3. Application of BDI Models in RobotsThe behavior model was adapted to study the natural engagement of robots with humans to show exhibit proactive behaviors. The proactive behaviors in robots were imagined to increase the human-robot interaction and utility value in the use of robots. As such, the works related to proactive behavior in robots were initiated with mixed-initiative approaches. Finzi and Orlandini [25] developed an architecture based on a planner mixed-initiative approach for robots used in search and rescue operations. The study had a model-based execution monitoring and reactive planner for the execution of the tasks. Adams et al. [26] proposed an effect based mixed-initiative interaction approach for human-robot interaction. The robot took initiatives upon changes in human emotions like detecting drowsiness and inattentiveness. The robot, as developed by Acosta et al. [27] showed some proactive behaviors by monitoring activities and defining tasks as a schedule. Satake et al. [28] proposed a behavior model to initiate a conversation with pedestrians walking on the streets. The appropriate instant of time to start the conversation or interaction with people was studied in work done by Shi et al. [29]. Moreover, Garrel et al. [30] proposed a behavior model for a proactive model that tries to convince people to initiate a conversation with different behaviors and emotions. The study carried out by Araiza-Illan et al. [31] proposed the use of the BDI model to increase the level of realism and human-like simulation of the robots. An automated testbench was implemented for simulation of cooperative task assembly between a humanoid robot and people in the robot operating system and Gazebo. A soccer-playing robot based on BDI architecture was developed by Gottifredi et al. [32] which allowed the specification of declarative goal-driven behavior based on high-level reasoning and reactivity when required. The work of Duffy et al. [33] developed a multi-layered BDI architecture with an egocentric robot control strategy to make robots capable of explicit social behavior. Pereira et al. 
[34] proposed an extension to BDI architecture to support artificial emotions in the form of emotional-BDI architecture.Given the current state of proactivity in social robots, this study tries to extend the capabilities of such robots to include vision-based activity in a social robot. The integration is based on a modular architecture onto which other logical blocks can easily be integrated for more advanced proactive behaviors in a think-like-human fashion. | [
"17301026",
"29890934"
] | [
{
"pmid": "17301026",
"title": "Socially intelligent robots: dimensions of human-robot interaction.",
"abstract": "Social intelligence in robots has a quite recent history in artificial intelligence and robotics. However, it has become increasingly apparent that social and interactive skills are necessary requirements in many application areas and contexts where robots need to interact and collaborate with other robots or humans. Research on human-robot interaction (HRI) poses many challenges regarding the nature of interactivity and 'social behaviour' in robot and humans. The first part of this paper addresses dimensions of HRI, discussing requirements on social skills for robots and introducing the conceptual space of HRI studies. In order to illustrate these concepts, two examples of HRI research are presented. First, research is surveyed which investigates the development of a cognitive robot companion. The aim of this work is to develop social rules for robot behaviour (a 'robotiquette') that is comfortable and acceptable to humans. Second, robots are discussed as possible educational or therapeutic toys for children with autism. The concept of interactive emergence in human-child interactions is highlighted. Different types of play among children are discussed in the light of their potential investigation in human-robot experiments. The paper concludes by examining different paradigms regarding 'social relationships' of robots and people interacting with them."
},
{
"pmid": "29890934",
"title": "Arash: A social robot buddy to support children with cancer in a hospital environment.",
"abstract": "This article presents the thorough design procedure, specifications, and performance of a mobile social robot friend Arash for educational and therapeutic involvement of children with cancer based on their interests and needs. Our research focuses on employing Arash in a pediatric hospital environment to entertain, assist, and educate children with cancer who suffer from physical pain caused by both the disease and its treatment process. Since cancer treatment causes emotional distress, which can reduce the efficiency of medications, using social robots to interact with children with cancer in a hospital environment could decrease this distress, thereby improving the effectiveness of their treatment. Arash is a 15 degree-of-freedom low-cost humanoid mobile robot buddy, carefully designed with appropriate measures and developed to interact with children ages 5-12 years old. The robot has five physical subsystems: the head, arms, torso, waist, and mobile-platform. The robot's final appearance is a significant novel concept; since it was selected based on a survey taken from 50 children with chronic diseases at three pediatric hospitals in Tehran, Iran. Founded on these measures and desires, Arash was designed, built, improved, and enhanced to operate successfully in pediatric cancer hospitals. Two experiments were devised to evaluate the children's level of acceptance and involvement with the robot, assess their feelings about it, and measure how much the robot was similar to the favored conceptual sketch. Both experiments were conducted in the form of storytelling and appearance/performance evaluations. The obtained results confirm high engagement and interest of pediatric cancer patients with the constructed robot."
}
] |
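To complement the survey of BDI frameworks above, the following short Python sketch illustrates the generic belief–desire–intention deliberation cycle (update beliefs from percepts, select a desire as the active goal, commit to an applicable plan as the intention). It is a simplified, hypothetical illustration of the reasoning style that frameworks such as PROFETA or JASON implement; the class and method names are my own and are not taken from any of those libraries.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class Plan:
    goal: str                         # desire this plan can achieve
    context: Callable[[Dict], bool]   # applicability check against beliefs
    body: List[str]                   # primitive actions to execute

@dataclass
class BDIAgent:
    beliefs: Dict[str, object] = field(default_factory=dict)
    desires: List[str] = field(default_factory=list)
    plans: List[Plan] = field(default_factory=list)

    def perceive(self, percepts: Dict[str, object]) -> None:
        """Belief revision: fold new percepts into the belief base."""
        self.beliefs.update(percepts)

    def deliberate(self) -> Optional[str]:
        """Pick the first desire that has an applicable plan."""
        for desire in self.desires:
            for plan in self.plans:
                if plan.goal == desire and plan.context(self.beliefs):
                    return desire
        return None

    def step(self, percepts: Dict[str, object]) -> List[str]:
        """One BDI cycle: perceive -> deliberate -> commit to an intention."""
        self.perceive(percepts)
        goal = self.deliberate()
        if goal is None:
            return []
        intention = next(p for p in self.plans
                         if p.goal == goal and p.context(self.beliefs))
        return intention.body   # actions handed to the robot's actuators

# Hypothetical proactive greeting behaviour for an indoor social robot
agent = BDIAgent(
    desires=["greet_visitor"],
    plans=[Plan("greet_visitor",
                context=lambda b: b.get("person_detected", False),
                body=["turn_towards_person", "say_hello"])],
)
print(agent.step({"person_detected": True}))  # ['turn_towards_person', 'say_hello']
```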
Diagnostics | 31717721 | PMC6963920 | 10.3390/diagnostics9040164 | Accelerated Reliability Growth Test for Magnetic Resonance Imaging System Using Time-of-Flight Three-Dimensional Pulse Sequence | A magnetic resonance imaging (MRI) system is a complex, high cost, and long-life product. It is a widely known fact that performing a system reliability test of a MRI system during the development phase is a challenging task. The major challenges include sample size, high test cost, and long test duration. This paper introduces a novel approach to perform a MRI system reliability test in a reasonably acceptable time with one sample size. Our approach is based on an accelerated reliability growth test, which consists of test cycle made of a very high-energy time-of-flight three-dimensional (TOF3D) pulse sequence representing an actual hospital usage scenario. First, we construct a nominal day usage scenario based on actual data collected from an MRI system used inside the hospital. Then, we calculate the life-time stress based on a usage scenario. Finally, we develop an accelerated reliability growth test cycle based on a TOF3D pulse sequence that exerts highest vibration energy on the gradient coil and MRI system. We use a vibration energy model to map the life-time stress and reduce the test duration from 537 to 55 days. We use a Crow AMSAA plot to demonstrate that system design reaches its useful life after crossing the infant mortality phase. | 2. Related work2.1. Magnetic Resonance Imaging (MRI) System Brief OverviewThe MRI system mainly consists of magnet, gradient, and radio frequency (RF) as critical subsystems. The magnet subsystem produces the main magnetic field. This magnetic field is applied in all three directions (X, Y, and Z). Afterwards, a magnetic gradient is applied in each axis, using gradient coil. Thus, magnetic field varies linearly along each axis. Gradient magnetic field is added or subtracted to the main magnetic field based on the applied gradient field. Due to varying magnetic field, resonance frequency is different for the protons at different places in the anatomy (human body), planned for imaging. The RF coil excites these protons by applying transmit power. Once transmit power is removed then protons relax, and it produces the reflected power. The reflected power is detected by same RF coil and amplified further. Reflected RF power forms the image dataset in the k-space. Fourier transform of the k-space produces the anatomical MRI image. There are various methods to apply magnetic gradient and RF power as per pulse sequence techniques. MRI pulse sequence plays an important role for MRI image based on patient body part and target diagnosis.The MRI system is a very complex, expensive, and reparable product [1]. Hence a long life of 10 years is expected to maximize the value for the money to the customer [1]. During the long-life usage, the MRI system experiences many different failure modes and breakdowns. Some of these failures are quenching and overheating of the magnet, breaking and overheating of the gradient coil, breaking of the RF coil, and several other defects related to hardware and software failures [1]. During these failures in hospital, service engineers have to repair the system. Frequent failures of the MRI system, inside the hospital, reduce the system’s availability for a patient scan thus service costs are increased while quality is decreased. All these failures lead to an unhappy customer and higher cost of ownership. 
To improve MRI product reliability, the leading MRI companies in the world rigorously test the reliability of individual parts and software. Nevertheless, the MRI system still shows a high defect rate and requires frequent service (almost once a month). Part-level, subsystem-level, or software-level reliability testing cannot catch failures caused by the complex interactions between parts and parts, subsystems and subsystems, parts and software, subsystems and software, and software and software in a complex product like an MRI. To detect the hidden and unknown failures caused by these complex interactions, a system reliability test is desirable for an MRI product.
2.2. Types of Reliability Test
In the last several decades, many reliability tests have been developed and successfully implemented in different products to achieve high quality. Some of the system reliability tests are explained below.
2.2.1. Reliability Growth Test
A reliability growth test, introduced by J. T. Duane in 1964 [7], is a method of performing a reliability test in which failures are identified and fixed and the test then continues (without needing to restart from the beginning) after each failure. It is a test for the design growth of a new product, especially during the development stage [4]. During new product development, the design is weak and failures are therefore expected. A reliability growth test gives the flexibility to continue the test after fixing a failure instead of restarting from the beginning, which is its biggest advantage. Owing to this flexibility, the reliability growth test has been adopted by many industries, including aircraft [7], defense [16], automotive [17], cellular telephone [18], and solar [3], among others. This test can be applied to systems, subsystems, parts, or software [6]. Another advantage of a reliability growth test is the small sample size enabled by the Bayes approach [8]. Hall and Mesh [19] introduce a framework for the evaluation of reliability growth with a sample size of one. A small sample size is very important for a complex and expensive MRI system.
2.2.2. Reliability Growth Test Using Crow AMSAA Model
Larry H. Crow introduced the reliability analysis for complex and repairable systems in 1975 [8]. The Crow AMSAA model relates the failure intensity (λ) to time (t), as shown in Equation (1) [8,9,11]. The failure intensity (λ) depends on the shape parameter (β) and the scale parameter (α) [8,9,11]. The shape and scale parameters are calculated based on rank regression [9], maximum likelihood estimation (MLE) [8,9], or the International Electrotechnical Commission (IEC) method [9]. For a repairable system reliability test, MLE fits best [8]. In the MLE method, the reliability test can be terminated based on the number of failures or on test time [9]. We will use the MLE time-terminated approach for the MRI system reliability test. For the MLE time-terminated case, the shape and scale parameters are calculated with Equations (2) and (3), respectively [8,9]. Both parameters are calculated from the number of failures (N), the total test time (Ts), and the time (Ti) at which each failure occurs.
(1) $$\lambda = \alpha \times \beta \times t^{\beta - 1}$$
(2) $$\beta = \frac{N}{\sum_{i=1}^{N} \ln(T_s / T_i)}$$
(3) $$\alpha = \frac{N}{T_s^{\beta}}$$
Here,
λ = failure intensity; α = scale parameter; β = shape parameter; N = total number of failures; t = test time; Ts = total test time; Ti = time at which the ith failure occurs.
If we plot the failure intensity (λ) over time t, it produces the bath-tub curve shown in Figure 1 [11]. The bath-tub curve is divided into three sections [11,13]. The first section is for β < 1, in which failures decrease consistently over time [10,11,13]. This section is called the infant mortality or early life period [11]. During infant mortality, a new system encounters several unknown or hidden failures that were not discovered before the product release. The second section is for β = 1, in which the failure rate is almost constant [10,11]. This section is called the useful life period [11,13], in which system failures are caused by the residual unreliability of the product. The third section is for β > 1, where failures keep increasing because the system is wearing out [10,11,13]. This means the system has completed its useful life.
During complex product development of an MRI system, performing the reliability growth test before product release allows infant mortality failures to be discovered proactively. Discovering these failures before product launch improves the quality of the product. A reliability growth test also helps in understanding the unreliability left in the product before launch and in determining whether product quality is at an acceptable level.
2.2.3. Reliability Demonstration Test
A reliability demonstration test is performed to demonstrate that the product meets the quality goal targeted during the product planning phase [2]. It is performed during the pilot production stage, which comes after the verification stage and before mass production [2]. Usually, a reliability demonstration test is a zero-failure-based test [2]; during the test, no failure is allowed [20,21,22]. If a failure occurs during the test, the demonstration test needs to restart from the beginning. This can lead to a longer test duration and sample size issues.
2.2.4. Accelerated Life Test
A life test is a common method of proving the life of a part or a non-repairable product. In this test, we first identify the stress parameters [23,24,25]. Once the stress parameters are identified, we develop the test condition by elevating the stress to a level above the normal operating condition but below the design limit [23,24]. We then perform the test under this elevated stress condition [23,24,25]. Because the normal operating condition of a part or product is much milder than the elevated stress condition, the test accelerates product aging. Accelerating the test has the advantage of reducing the test time to a reasonably acceptable limit. If AF1, AF2, and AFn are the acceleration factors due to stress parameters 1, 2, and n, then the total acceleration factor is the product of the individual acceleration factors, as shown in Equation (4) [23,24,25]. Based on the sample size, product life, and total acceleration factor, the test time is calculated by Equation (5).
(4) \(AF = AF_1 \times AF_2 \times \dots \times AF_n\)

(5) \(T = \dfrac{L}{AF \times s}\)

Here,
AF1 = acceleration factor due to stress parameter 1; AF2 = acceleration factor due to stress parameter 2; AFn = acceleration factor due to stress parameter n; AF = total acceleration factor; L = product life; s = sample size; T = test time.

An accelerated life test is widely used for electrical circuit boards and parts [24]. The stress conditions are usually defined as elevated temperature, humidity, or other electrical parameters (voltage, current, power) [23,24]. Every stress parameter degrades the parts differently, and hence the degradation models are different [25]. Some of these degradation models are based on the Arrhenius, inverse power, Coffin–Manson, and Eyring concepts, which are widely known but not covered in this paper.
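As an illustration of how Equations (2)-(5) are applied in practice, the following Python sketch estimates the Crow AMSAA shape and scale parameters for the MLE time-terminated case and computes an accelerated-life-test duration. The failure times, acceleration factors, and sample sizes used here are hypothetical placeholders rather than values from an actual MRI reliability test.

```python
import math

def crow_amsaa_mle(failure_times, total_test_time):
    """MLE time-terminated estimates of the Crow AMSAA shape (beta) and
    scale (alpha) parameters, Equations (2) and (3)."""
    n = len(failure_times)                                                    # N: total number of failures
    beta = n / sum(math.log(total_test_time / t_i) for t_i in failure_times)  # Eq. (2)
    alpha = n / (total_test_time ** beta)                                     # Eq. (3)
    return beta, alpha

def failure_intensity(t, alpha, beta):
    """Failure intensity lambda(t) = alpha * beta * t^(beta - 1), Equation (1)."""
    return alpha * beta * t ** (beta - 1)

def alt_test_time(product_life, acceleration_factors, sample_size):
    """Accelerated-life-test duration T = L / (AF * s), Equations (4) and (5)."""
    af = math.prod(acceleration_factors)      # Eq. (4): AF = AF1 * AF2 * ... * AFn
    return product_life / (af * sample_size)  # Eq. (5)

# Hypothetical example: 5 failures observed during a 1000-hour time-terminated test.
failure_times = [55.0, 210.0, 380.0, 650.0, 910.0]
beta, alpha = crow_amsaa_mle(failure_times, total_test_time=1000.0)
print(f"beta = {beta:.3f}, alpha = {alpha:.5f}, "
      f"lambda(1000 h) = {failure_intensity(1000.0, alpha, beta):.5f}")

# Hypothetical example: 87,600-hour (10-year) life target, two stresses, 4 units on test.
print(f"ALT duration = {alt_test_time(87600.0, [8.0, 2.5], 4):.0f} h per unit")
```

A shape parameter below 1 in such a run would indicate that the system is still in the infant-mortality region of the bath-tub curve, which is exactly the situation a pre-release growth test is meant to expose.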
JMIR mHealth and uHealth | 31899457 | PMC6969385 | 10.2196/13756 | The Mobile-Based 6-Minute Walk Test: Usability Study and Algorithm Development and Validation | Background: The 6-min walk test (6MWT) is a convenient method for assessing functional capacity in patients with cardiopulmonary conditions. It is usually performed in the context of a hospital clinic and thus requires the involvement of hospital staff and facilities, with their associated costs. Objective: This study aimed to develop a mobile phone–based system that allows patients to perform the 6MWT in the community. Methods: We developed 2 algorithms to compute the distance walked during a 6MWT using sensors embedded in a mobile phone. One algorithm makes use of the global positioning system to track the location of the phone when outdoors and hence computes the distance travelled. The other algorithm is meant to be used indoors and exploits the inertial sensors built into the phone to detect U-turns when patients walk back and forth along a corridor of fixed length. We included these algorithms in a mobile phone app, integrated with wireless pulse oximeters and a back-end server. We performed Bland-Altman analysis of the difference between the distances estimated by the phone and by a reference trundle wheel on 49 indoor tests and 30 outdoor tests, with 11 different mobile phones (both Apple iOS and Google Android operating systems). We also assessed usability aspects related to the app in a discussion group with patients and clinicians using a technology acceptance model to guide discussion. Results: The mean difference between the mobile phone-estimated distances and the reference values was −2.013 m (SD 7.84 m) for the indoor algorithm and −0.80 m (SD 18.56 m) for the outdoor algorithm. The absolute maximum difference was, in both cases, below the clinically significant threshold. A total of 2 pulmonary hypertension patients, 1 cardiologist, 2 physiologists, and 1 nurse took part in the discussion group, where issues arising from the use of the 6MWT in hospital were identified. The app was demonstrated to be usable, and the 2 patients were keen to use it in the long term. Conclusions: The system described in this paper allows patients to perform the 6MWT at a place of their convenience. In addition, the use of pulse oximetry allows more information to be generated about the patient's health status and, possibly, be more relevant to the real-life impact of their condition. Preliminary assessment has shown that the developed 6MWT app is highly accurate and well accepted by its users. Further tests are needed to assess its clinical value. | Related Work

The walked distance can be obtained using satellite positioning systems when outdoors and with inertial sensors when indoors.

Positioning systems like GPS are already widely used for estimating distance in the automotive sector. Modern GPS receivers provide a signal that is the result of heavy processing and is usually improved and smoothed with well-known techniques [9]. When used with human beings, these systems are known to introduce some error because of the inherent noise in the GPS system and the shorter distances travelled [10-12]. Nonetheless, the error is such that it has been considered negligible in previous work in which GPS has been applied to estimate the distance walked in the 6MWT [13].
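As a rough illustration of the outdoor approach (summing the distance between consecutive GPS fixes), a minimal Python sketch using the haversine formula is given below. The exact distance computation and any smoothing used by the app described above are not specified here, so the accuracy filter and the example fixes are assumptions made only for illustration.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude fixes."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def walked_distance_m(fixes, max_accuracy_m=20.0):
    """Sum the distance between consecutive GPS fixes, discarding fixes whose
    reported horizontal accuracy is worse than max_accuracy_m (assumed filter)."""
    good = [(lat, lon) for lat, lon, acc in fixes if acc <= max_accuracy_m]
    return sum(haversine_m(*good[i - 1], *good[i]) for i in range(1, len(good)))

# Hypothetical fixes from a short outdoor walk: (latitude, longitude, accuracy in m).
fixes = [
    (45.50170, -73.56730, 5.0),
    (45.50182, -73.56715, 6.0),
    (45.50195, -73.56698, 4.0),
    (45.50210, -73.56680, 35.0),  # poor-accuracy fix, skipped by the filter
    (45.50224, -73.56661, 5.0),
]
print(f"{walked_distance_m(fixes):.1f} m walked")
```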
As most algorithms used by GPS devices are proprietary, there is a lack of literature describing how to derive the distance from the raw positions, except for the obvious computation of the distance between the first and the last received positions [12].

With regard to the indoor scenario, there is a rich literature on gait analysis with accelerometers [14-17]. From gait analysis it is possible to compute the number of steps, which, once multiplied by the step length, gives the distance walked.

In a study by Schimpl et al [18], 12 different algorithms for extracting human speed (and thus distance walked) from accelerometer data were explored. Some of the proposed algorithms use the step length as an input, whereas others rely purely on the accelerometry. In a validation against recordings from 17 subjects walking at different speeds, the authors found that the best-performing algorithm was a support vector regression model previously trained on an independent dataset recorded from 15 subjects who participated in 3 outdoor data collection activities. A similar approach was followed in a study by Cheng et al [19], but using a mobile phone instead of a dedicated sensor: data were processed in both the time and frequency domains, and 8 gait parameters were extracted as inputs to a support vector regression model that estimates gait speed. The approach was validated with 6 COPD patients and 6 healthy subjects performing a 6MWT. These machine learning approaches, although accurate, rely heavily on the training data and may be biased toward the walking style adopted during the data acquisition or toward the actual devices used.

Gait-analysis-based approaches have also been used for the 6MWT. For example, in a study by Schulte et al [20], a telemonitoring system for the 6MWT based on body-worn accelerometers was proposed: a simple step-detection algorithm was combined with the patient's height to estimate the distance walked. A more sophisticated approach was taken by Capela et al [21,22], who used a mobile phone app to count steps and identify when the user turns while walking back and forth along a corridor. As the distance walked between U-turns is fixed, it is possible to estimate the step length and, thus, the residual distance walked after the last U-turn by multiplying the number of steps by the stride length. The algorithm uses the azimuth signal provided by Blackberry phones, which is in turn estimated from the gyroscope and the magnetometer; some corrections are introduced to smooth sudden variations, for example detecting a turn if the signal changes by more than 100° in 3 seconds. The approach was validated with 15 volunteers and led to less than 1 m average error.

In addition to research papers, it is also worth mentioning the Apple Research Kit, an open-source software framework that allows developers to build mobile health (mHealth) apps from a set of already implemented use cases. One of these use cases is the timed walk, which can be used to implement the 6MWT. The timed-walk activity estimation has already been used in a few studies [23,24], even though the accuracy of the algorithm used to estimate the distance, based on the Core Motion framework, is not publicly disclosed.
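A minimal sketch of the corridor-based indoor idea discussed above (counting U-turns along a corridor of known length and adding a residual distance estimated from steps counted after the last turn) might look as follows. The U-turn and step counts are assumed to come from upstream processing of the inertial signals; this is an illustrative simplification, not the algorithm of Capela et al nor the one implemented in the app described above.

```python
def indoor_6mwt_distance_m(n_turns, steps_total, steps_after_last_turn, corridor_length_m):
    """Distance walked back and forth along a corridor of known length.

    Each detected U-turn marks one completed corridor length; the residual
    distance after the last U-turn is estimated by multiplying the remaining
    steps by a stride length calibrated on the completed lengths."""
    if n_turns == 0:
        return 0.0  # cannot calibrate the stride length without a completed length
    steps_in_completed_lengths = steps_total - steps_after_last_turn
    stride_m = corridor_length_m * n_turns / steps_in_completed_lengths
    return n_turns * corridor_length_m + steps_after_last_turn * stride_m

# Hypothetical test: 30 m corridor, 14 detected U-turns, 620 steps in total,
# 18 of which were taken after the last detected U-turn.
print(f"{indoor_6mwt_distance_m(14, 620, 18, 30.0):.1f} m walked")
```

Because the stride length is calibrated from the subject's own completed corridor lengths, this style of estimate does not depend on a population-average step length or on the subject's height.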
In a recent study [25], after using Core Motion for the 6MWT with peripheral artery disease (PAD) patients, the authors concluded that “the iPhone’s built-in distance algorithm is unable to accurately measure distance, suggesting that custom algorithms are necessary for using iPhones as a platform for monitoring distance walked in PAD patients.”
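For reference, the Bland-Altman agreement analysis mentioned in the abstract (bias and 95% limits of agreement between phone-estimated and reference distances) can be computed with a few lines of Python; the paired distances below are invented for illustration and are not data from the study.

```python
import statistics

def bland_altman(estimates, references):
    """Bias (mean difference) and 95% limits of agreement between paired measurements."""
    diffs = [e - r for e, r in zip(estimates, references)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired distances in metres: phone estimate vs. reference trundle wheel.
phone = [402.0, 398.5, 410.2, 385.0, 420.3, 376.8]
wheel = [400.0, 401.0, 408.0, 389.5, 418.0, 377.5]
bias, (low, high) = bland_altman(phone, wheel)
print(f"bias = {bias:.2f} m, 95% limits of agreement = ({low:.2f}, {high:.2f}) m")
```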
"12890299",
"11157613",
"10673190",
"8697828",
"18625667",
"19732197",
"12091180",
"19713419",
"20859825",
"20861523",
"18091013",
"22031349",
"22438763",
"22400972",
"10426508",
"24768525",
"21850254",
"25889112",
"26364278",
"27973671",
"31304343",
"29983994",
"22428752",
"23984728",
"30344062"
] | [
{
"pmid": "12890299",
"title": "The six-minute walk test.",
"abstract": "The American Thoracic Society has issued guidelines for the 6-minute walk test (6MWT). The 6MWT is safer, easier to administer, better tolerated, and better reflects activities of daily living than other walk tests (such as the shuttle walk test). The primary measurement is 6-min walk distance (6MWD), but during the 6MWT data can also be collected about the patient's blood oxygen saturation and perception of dyspnea during exertion. When conducting the 6MWT do not walk with the patient and do not assist the patient in carrying or pulling his or her supplemental oxygen. The patient should walk alone, not with other patients. Do not use a treadmill on which the patient adjusts the speed and/or the slope. Do not use an oval or circular track. Use standardized phrases while speaking to the patient, because your encouragement and enthusiasm can make a difference of up to 30% in the 6MWD. Count the laps with a lap counter. If the 6MWD is low, thoroughly search for the cause(s) of the impairment. Better 6MWD reference equations will be published in the future, so be sure you are using the best available reference equations."
},
{
"pmid": "11157613",
"title": "A qualitative systematic overview of the measurement properties of functional walk tests used in the cardiorespiratory domain.",
"abstract": "OBJECTIVE\nTo perform a qualitative systematic overview of the measurement properties of the most commonly utilized walk tests in the cardiorespiratory domain: the 2-min walk test (2MWT), 6-min walk test (6MWT), 12-min walk test (12MWT), self-paced walk test (SPWT), and shuttle walk test (SWT).\n\n\nDATA SOURCES\nMEDLINE (1966 to January 2000) and CINAHL (1982 to December 1999) electronic databases were searched. Bibliographies of the retrieved articles were reviewed.\n\n\nSTUDY SELECTION\nClinical trials and observational studies were included if they reported data on the validity, reliability, interpretability, or responsiveness of the 2MWT, 6MWT, 12MWT, SPWT, or SWT. Only studies conducted on patients with cardiac and/or respiratory involvement were included.\n\n\nRESULTS\nFifty-two studies examining measurement properties of the various walk tests were found: 5 studies on the 2MWT, 29 studies on the 6MWT, 13 studies on the 12MWT, 6 studies on the SPWT, and 4 studies on the SWT. Measurement properties were most strongly demonstrated for the 6MWT. Correlations of 6MWT distance and maximal oxygen consumption ranged from 0.51 to 0.90. A change in distance walked of at least 54 m was found to be clinically significant for the 6MWT. Reliability was shown to be optimized when the administration of walk tests was standardized and at least two practice walks were performed. Patients with increased likelihood of postoperative complications, hospitalization, and death were identified by analysis of distance walked.\n\n\nCONCLUSIONS\nMeasurement properties of the 6MWT have been the most extensively researched and established. In addition, the 6MWT is easy to administer, better tolerated, and more reflective of activities of daily living than the other walk tests. Therefore, the 6MWT is currently the test of choice when using a functional walk test for clinical or research purposes."
},
{
"pmid": "10673190",
"title": "Clinical correlates and prognostic significance of six-minute walk test in patients with primary pulmonary hypertension. Comparison with cardiopulmonary exercise testing.",
"abstract": "The six-minute walk test is a submaximal exercise test that can be performed even by a patient with heart failure not tolerating maximal exercise testing. To elucidate the clinical significance and prognostic value of the six-minute walk test in patients with primary pulmonary hypertension (PPH), we sought (1) to assess the relation between distance walked during the six-minute walk test and exercise capacity determined by maximal cardiopulmonary exercise testing, and (2) to investigate the prognostic value of the six-minute walk test in comparison with other noninvasive parameters. The six-minute walk test was performed in 43 patients with PPH, together with echocardiography, right heart catheterization, and measurement of plasma epinephrine and norepinephrine. Symptom-limited cardiopulmonary exercise testing was performed in a subsample of patients (n = 27). Distance walked in 6 min was significantly shorter in patients with PPH than in age- and sex-matched healthy subjects (297 +/- 188 versus 655 +/- 91 m, p < 0. 001). The distance significantly decreased in proportion to the severity of New York Heart Association functional class. The distance walked correlated modestly with baseline cardiac output (r = 0.48, p < 0.05) and total pulmonary resistance (r = -0.49, p < 0. 05), but not significantly with mean pulmonary arterial pressure. In contrast, the distance walked correlated strongly with peak V O(2) (r = 0.70, p < 0.001), oxygen pulse (r = 0.57, p < 0.01), and V E-VCO(2) slope (r = -0.66, p < 0.001) determined by cardiopulmonary exercise testing. During a mean follow-up period of 21 +/- 16 mo, 12 patients died of cardiopulmonary causes. Among noninvasive parameters including clinical, echocardiographic, and neurohumoral parameters, only the distance walked in 6 min was independently related to mortality in PPH by multivariate analysis. Patients walking < 332 m had a significantly lower survival rate than those walking farther, assessed by Kaplan-Meier survival curves (log-rank test, p < 0.01). These results suggest that the six-minute walk test, a submaximal exercise test, reflects exercise capacity determined by maximal cardiopulmonary exercise testing in patients with PPH, and it is the distance walked in 6 min that has a strong, independent association with mortality."
},
{
"pmid": "8697828",
"title": "The six-minute walk test predicts peak oxygen uptake and survival in patients with advanced heart failure.",
"abstract": "BACKGROUND\nThe 6-min walk test (6'WT) is a simple measure of functional capacity and predicts survival in patients with moderate heart failure (HF).\n\n\nMETHODS\nTo assess the role of the 6'WT in the evaluation of patients with advanced HF, 45 patients (age 49 +/- 8 years, mean +/- SD; New York Heart Association class 3.3 +/- 0.6; left ventricular ejection fraction 0.20 +/- 0.06; right ventricular ejection fraction 0.31 +/- 0.11) underwent symptom-limited cardiopulmonary exercise testing and the 6'WT during cardiac transplant evaluation.\n\n\nRESULTS\nMean 6'WT distance ambulated was 310 +/- 100 m and peak oxygen uptake (peak Vo2) was 12.2 +/- 4.5 mL/kg/min. There was a significant correlation between 6'WT distance ambulated and peak Vo2 (r = 0.64, p < 0.001). Multivariate analysis of patient characteristics, resting hemodynamics, and 6'WT results identified the distance ambulated during the 6'WT as the strongest predictor of peak Vo2 (p < 0.001). 6'WT distance ambulated less than 300 m predicted an increased likelihood of death or pretransplant hospital admission for continuous inotropic or mechanical support within 6 months (p = 0.04), but did not predict long-term overall or event-free survival with a mean follow-up of 62 weeks. Peak Vo2 was the best predictor of long-term overall and event-free survival.\n\n\nCONCLUSIONS\nIn patients with advanced HF evaluated for cardiac transplantation, distance ambulated during the 6'WT predicts (1) peak Vo2 and (2) short-term event-free survival."
},
{
"pmid": "18625667",
"title": "Distance and oxygen desaturation during the 6-min walk test as predictors of long-term mortality in patients with COPD.",
"abstract": "RATIONALE\nThe distance walked in the 6-min walk test (6MWT) predicts mortality in patients with severe COPD. Little is known about its prognostic value in patients with a wider range of COPD severity, living in different countries, and the potential additional impact of oxygen desaturation measured during the test.\n\n\nMETHODS\nWe enrolled 576 stable COPD outpatients in Spain and the United States and observed them for at least 3 years (median, 60 months). We measured FEV1, body mass index, Pao2, Charlson comorbidity score, 6-min walk distance (6MWD), and oxygen saturation by pulse oximetry (Spo2) during the 6MWT. Desaturation was defined as a fall in Spo2 > or = 4% or Spo2 < 90%. Regression analysis helped determine the association between these variables and all-cause and respiratory mortality.\n\n\nRESULTS\nThe 6MWD was a good predictor of all-cause and respiratory mortality primarily in patients with FEV1 < 50% of predicted (p < 0.001) after adjusting for all covariates. Patients with desaturation during the 6MWT had a higher mortality rate than patients without desaturation (67% vs 38%, p < 0.001). Oxygen desaturation predicted mortality (relative risk, 2.63; 95% confidence interval, 1.53 to 4.51; p < 0.001) but with less power than Pao2 at rest.\n\n\nCONCLUSIONS\nThe 6MWD helps predict mortality primarily in patients with severe COPD. Although the oxygen desaturation profile during the 6MWT improves the predictive ability of the 6MWD, it appears to be of less relevance than in other lung diseases and than the resting Pao2."
},
{
"pmid": "19732197",
"title": "The six-minute walk test: a useful metric for the cardiopulmonary patient.",
"abstract": "Measurement of exercise capacity is an integral element in assessment of patients with cardiopulmonary disease. The 6-min walk test (6MWT) provides information regarding functional capacity, response to therapy and prognosis across a range of chronic cardiopulmonary conditions. A distance less than 350 m is associated with increased mortality in chronic obstructive pulmonary disease, chronic heart failure and pulmonary arterial hypertension. Desaturation during a 6MWT is an important prognostic indicator for patients with interstitial lung disease. The 6MWT is sensitive to commonly used therapies in chronic obstructive pulmonary disease such as pulmonary rehabilitation, oxygen, long-term use of inhaled corticosteroids and lung volume reduction surgery. However, it appears less reliable to detect changes in clinical status associated with medical therapies for heart failure. A change in walking distance of more than 50 m is clinically significant in most disease states. When interpreting the results of a 6MWT, consideration should be given to choice of predictive values and the methods by which the test was carried out."
},
{
"pmid": "20859825",
"title": "Validity and reliability of GPS for measuring distance travelled in field-based team sports.",
"abstract": "The aim of the present study was to examine the effects of movement intensity and path linearity on global positioning system (GPS) distance validity and reliability. One participant wore eight 1-Hz GPS receivers while walking, jogging, running, and sprinting over linear and non-linear 200-m courses. Five trials were performed at each intensity of movement on each 200-m course. One receiver was excluded from analysis due to errors during data collection. The results from seven GPS receivers showed the mean (± s) and percent bias of the GPS distance values on the 200-m linear course were 205.8 ± 2.4 m (2.8%), 201.8 ± 2.8 m (0.8%), 203.1 ± 2.2 m (1.5%), and 205.2 ± 4 m (2.5%) for the walk, jog, run, and sprint trial respectively. Walk and sprint distances were significantly different from jogging and running distances (P < 0.05). The GPS distance values on the 200-m non-linear course were 198.9 ± 3.5 m (-0.5%), 188.3 ± 2 m (-5.8%), 184.6 ± 2.9 m (-7.7%), and 180.4 ± 5.7 m (-9.8%) for the walk, jog, run, and sprint trial respectively; these were significantly lower than those for the corresponding values on the linear course (P < 0.05). Differences between all non-linear movement intensities were significant (P < 0.05). The overall coefficient of variation within and between receivers was 2.6% and 2.8% respectively. Path linearity and movement intensity appear to affect GPS distance accuracy via inherent positioning errors, update rate, and conditions of use; reliability decreases with movement intensity."
},
{
"pmid": "20861523",
"title": "The validity and reliability of GPS units for measuring distance in team sport specific running patterns.",
"abstract": "PURPOSE\nTo assess the validity and reliability of distance data measured by global positioning system (GPS) units sampling at 1 and 5 Hz during movement patterns common to team sports.\n\n\nMETHODS\nTwenty elite Australian Football players each wearing two GPS devices (MinimaxX, Catapult, Australia) completed straight line movements (10, 20, 40 m) at various speeds (walk, jog, stride, sprint), changes of direction (COD) courses of two different frequencies (gradual and tight), and a team sport running simulation circuit. Position and speed data were collected by the GPS devices at 1 and 5 Hz. Distance validity was assessed using the standard error of the estimate (±90% confidence intervals [CI]). Reliability was estimated using typical error (TE) ± 90% CI (expressed as coefficient of variation [CV]).\n\n\nRESULTS\nMeasurement accuracy decreased as speed of locomotion increased in both straight line and the COD courses. Difference between criterion and GPS measured distance ranged from 9.0% to 32.4%. A higher sampling rate improved validity regardless of distance and locomotion in the straight line, COD and simulated running circuit trials. The reliability improved as distance traveled increased but decreased as speed increased. Total distance over the simulated running circuit exhibited the lowest variation (CV 3.6%) while sprinting over 10 m demonstrated the highest (CV 77.2% at 1 Hz).\n\n\nCONCLUSION\nCurrent GPS systems maybe limited for assessment of short, high speed straight line running and efforts involving change of direction. An increased sample rate improves validity and reliability of GPS devices."
},
{
"pmid": "18091013",
"title": "Assessment of speed and position during human locomotion using nondifferential GPS.",
"abstract": "PURPOSE\nTo validate a nondifferential global positioning system (GPS) to measure speed, displacement, and position during human locomotion.\n\n\nMETHODS\nThree healthy participants walked and ran over straight and curved courses for 59 and 34 trials, respectively. A nondifferential GPS receiver provided speed data by Doppler shift and change in GPS position over time, which were compared with actual speeds determined by chronometry. Displacement data from the GPS were compared with a surveyed 100-m section, and static positions were collected for 1 h and compared with the known geodetic point.\n\n\nRESULTS\nGPS speed values on the straight course were closely correlated with actual speeds (Doppler shift: r = 0.9994, P < 0.001, Delta GPS position/time: r = 0.9984, P < 0.001). Actual speed errors were lowest using the Doppler shift method (90.8% of values within +/- 0.1 m x s(-1)). Speed was slightly underestimated on a curved path, though still highly correlated with actual speed (Doppler shift: r = 0.9985, P < 0.001, Delta GPS distance/time: r = 0.9973, P < 0.001). Distance measured by GPS was 100.46 +/- 0.49 m, and 86.5% of static points were within 1.5 m of the actual geodetic point (mean error: 1.08 +/- 0.34 m, range 0.69-2.10 m).\n\n\nCONCLUSIONS\nNondifferential GPS demonstrated a highly accurate estimation of speed across a wide range of human locomotion velocities using only the raw signal data with a minimal decrease in accuracy around bends. This high level of resolution was matched by accurate displacement and position data. Coupled with reduced size, cost, and ease of use, this method offers a valid alternative to differential GPS in the study of overground locomotion."
},
{
"pmid": "22031349",
"title": "Is outdoor use of the six-minute walk test with a global positioning system in stroke patients' own neighbourhoods reproducible and valid?",
"abstract": "OBJECTIVE\nTo examine the reproducibility, responsiveness and concurrent validity of the six-minute walk test (6MWT) when tested outdoors in patients' own neighbourhoods using a global positioning system (GPS) or a measuring wheel.\n\n\nMETHODS\nA total of 27 chronic stroke patients, discharged to their own homes, were tested twice, within 5 consecutive days. The 6MWT was conducted using a GPS and an measuring wheel simultaneously to determine walking distance. Reproducibility was determined as test-retest reliability and agreement, using the intraclass correlation coefficient, standard error of measurement and Bland & Altman plots. Responsiveness was expressed as the smallest real difference and visualized in Bland & Altman plots. Pearson's correlation coefficient (r) was used to study concurrent validity between the GPS and measuring wheel.\n\n\nRESULTS\nIntraclass correlation coefficiens were 0.96 for the GPS and 0.98 for the measuring wheel, and standard error of measurement scores were 11.9 m for the measuring wheel and 18.1 m for the GPS, resulting in smallest real differences of 33.0 m and 50.2 m, respectively. Concurrent validity was strong (r = 0.99).\n\n\nCONCLUSION\nThese results indicate that the outdoor 6MWT using a GPS or measuring wheel is reproducible, responsive and concurrently valid. This suggests that therapists working in the community can use the outdoor 6MWT as a reliable, responsive and valid test."
},
{
"pmid": "22438763",
"title": "Gait analysis using wearable sensors.",
"abstract": "Gait analysis using wearable sensors is an inexpensive, convenient, and efficient manner of providing useful information for multiple health-related applications. As a clinical tool applied in the rehabilitation and diagnosis of medical conditions and sport activities, gait analysis using wearable sensors shows great prospects. The current paper reviews available wearable sensors and ambulatory gait analysis methods based on the various wearable sensors. After an introduction of the gait phases, the principles and features of wearable sensors used in gait analysis are provided. The gait analysis methods based on wearable sensors is divided into gait kinematics, gait kinetics, and electromyography. Studies on the current methods are reviewed, and applications in sports, rehabilitation, and clinical diagnosis are summarized separately. With the development of sensor technology and the analysis method, gait analysis using wearable sensors is expected to play an increasingly important role in clinical applications."
},
{
"pmid": "22400972",
"title": "Reliability and validity of gait analysis by android-based smartphone.",
"abstract": "Smartphones are very common devices in daily life that have a built-in tri-axial accelerometer. Similar to previously developed accelerometers, smartphones can be used to assess gait patterns. However, few gait analyses have been performed using smartphones, and their reliability and validity have not been evaluated yet. The purpose of this study was to evaluate the reliability and validity of a smartphone accelerometer. Thirty healthy young adults participated in this study. They walked 20 m at their preferred speeds, and their trunk accelerations were measured using a smartphone and a tri-axial accelerometer that was secured over the L3 spinous process. We developed a gait analysis application and installed it in the smartphone to measure the acceleration. After signal processing, we calculated the gait parameters of each measurement terminal: peak frequency (PF), root mean square (RMS), autocorrelation peak (AC), and coefficient of variance (CV) of the acceleration peak intervals. Remarkable consistency was observed in the test-retest reliability of all the gait parameter results obtained by the smartphone (p<0.001). All the gait parameter results obtained by the smartphone showed statistically significant and considerable correlations with the same parameter results obtained by the tri-axial accelerometer (PF r=0.99, RMS r=0.89, AC r=0.85, CV r=0.82; p<0.01). Our study indicates that the smartphone with gait analysis application used in this study has the capacity to quantify gait parameters with a degree of accuracy that is comparable to that of the tri-axial accelerometer."
},
{
"pmid": "10426508",
"title": "A practical gait analysis system using gyroscopes.",
"abstract": "This study investigated the possibility of using uni-axial gyroscopes to develop a simple portable gait analysis system. Gyroscopes were attached on the skin surface of the shank and thigh segments and the angular velocity for each segment was recorded in each segment. Segment inclinations and knee angle were derived from segment angular velocities. The angular signals from a motion analysis system were used to evaluate the angular velocities and the derived signals from the gyroscopes. There was a good correlation between these signals. When performing a turn the signals of segment inclination and knee angle drifted. Two methods were used to solve this: automatically resetting the system to re-initialise the angle in each gait cycle, and high-pass filtering. They both successfully corrected this drift. A single gyroscope on the shank segment could provide information on segment inclination range, cadence, number of steps, and an estimation of stride length and walking speed."
},
{
"pmid": "24768525",
"title": "Quantified self and human movement: a review on the clinical impact of wearable sensing and feedback for gait analysis and intervention.",
"abstract": "The proliferation of miniaturized electronics has fueled a shift toward wearable sensors and feedback devices for the mass population. Quantified self and other similar movements involving wearable systems have gained recent interest. However, it is unclear what the clinical impact of these enabling technologies is on human gait. The purpose of this review is to assess clinical applications of wearable sensing and feedback for human gait and to identify areas of future research. Four electronic databases were searched to find articles employing wearable sensing or feedback for movements of the foot, ankle, shank, thigh, hip, pelvis, and trunk during gait. We retrieved 76 articles that met the inclusion criteria and identified four common clinical applications: (1) identifying movement disorders, (2) assessing surgical outcomes, (3) improving walking stability, and (4) reducing joint loading. Characteristics of knee and trunk motion were the most frequent gait parameters for both wearable sensing and wearable feedback. Most articles performed testing on healthy subjects, and the most prevalent patient populations were osteoarthritis, vestibular loss, Parkinson's disease, and post-stroke hemiplegia. The most widely used wearable sensors were inertial measurement units (accelerometer and gyroscope packaged together) and goniometers. Haptic (touch) and auditory were the most common feedback sensations. This review highlights the current state of the literature and demonstrates substantial potential clinical benefits of wearable sensing and feedback. Future research should focus on wearable sensing and feedback in patient populations, in natural human environments outside the laboratory such as at home or work, and on continuous, long-term monitoring and intervention."
},
{
"pmid": "21850254",
"title": "Development and validation of a new method to measure walking speed in free-living environments using the actibelt® platform.",
"abstract": "Walking speed is a fundamental indicator for human well-being. In a clinical setting, walking speed is typically measured by means of walking tests using different protocols. However, walking speed obtained in this way is unlikely to be representative of the conditions in a free-living environment. Recently, mobile accelerometry has opened up the possibility to extract walking speed from long-time observations in free-living individuals, but the validity of these measurements needs to be determined. In this investigation, we have developed algorithms for walking speed prediction based on 3D accelerometry data (actibelt®) and created a framework using a standardized data set with gold standard annotations to facilitate the validation and comparison of these algorithms. For this purpose 17 healthy subjects operated a newly developed mobile gold standard while walking/running on an indoor track. Subsequently, the validity of 12 candidate algorithms for walking speed prediction ranging from well-known simple approaches like combining step length with frequency to more sophisticated algorithms such as linear and non-linear models was assessed using statistical measures. As a result, a novel algorithm employing support vector regression was found to perform best with a concordance correlation coefficient of 0.93 (95%CI 0.92-0.94) and a coverage probability CP1 of 0.46 (95%CI 0.12-0.70) for a deviation of 0.1 m/s (CP2 0.78, CP3 0.94) when compared to the mobile gold standard while walking indoors. A smaller outdoor experiment confirmed those results with even better coverage probability. We conclude that walking speed thus obtained has the potential to help establish walking speed in free-living environments as a patient-oriented outcome measure."
},
{
"pmid": "25889112",
"title": "Novel algorithm for a smartphone-based 6-minute walk test application: algorithm, application development, and evaluation.",
"abstract": "BACKGROUND\nThe 6-minute walk test (6MWT: the maximum distance walked in 6 minutes) is used by rehabilitation professionals as a measure of exercise capacity. Today's smartphones contain hardware that can be used for wearable sensor applications and mobile data analysis. A smartphone application can run the 6MWT and provide typically unavailable biomechanical information about how the person moves during the test.\n\n\nMETHODS\nA new algorithm for a calibration-free 6MWT smartphone application was developed that uses the test's inherent conditions and smartphone accelerometer-gyroscope data to report the total distance walked, step timing, gait symmetry, and walking changes over time. This information is not available with a standard 6MWT and could help with clinical decision-making. The 6MWT application was evaluated with 15 able-bodied participants. A BlackBerry Z10 smartphone was worn on a belt at the mid lower back. Audio from the phone instructed the person to start and stop walking. Digital video was independently recorded during the trial as a gold-standard comparator.\n\n\nRESULTS\nThe average difference between smartphone and gold standard foot strike timing was 0.014 ± 0.015 s. The total distance calculated by the application was within 1 m of the measured distance for all but one participant, which was more accurate than other smartphone-based studies.\n\n\nCONCLUSIONS\nThese results demonstrated that clinically relevant 6MWT results can be achieved with typical smartphone hardware and a novel algorithm."
},
{
"pmid": "26364278",
"title": "Mobility Outcomes Following Five Training Sessions with a Powered Exoskeleton.",
"abstract": "BACKGROUND\nLoss of legged mobility due to spinal cord injury (SCI) is associated with multiple physiological and psychological impacts. Powered exoskeletons offer the possibility of regained mobility and reversal or prevention of the secondary effects associated with immobility.\n\n\nOBJECTIVE\nThis study was conducted to evaluate mobility outcomes for individuals with SCI after 5 gait-training sessions with a powered exoskeleton, with a primary goal of characterizing the ease of learning and usability of the system.\n\n\nMETHODS\nSixteen subjects with SCI were enrolled in a pilot clinical trial at Shepherd Center, Atlanta, Georgia, with injury levels ranging from C5 complete to L1 incomplete. An investigational Indego exoskeleton research kit was evaluated for ease of use and efficacy in providing legged mobility. Outcome measures of the study included the 10-meter walk test (10 MWT) and the 6-minute walk test (6 MWT) as well as measures of independence including donning and doffing times and the ability to walk on various surfaces.\n\n\nRESULTS\nAt the end of 5 sessions (1.5 hours per session), average walking speed was 0.22 m/s for persons with C5-6 motor complete tetraplegia, 0.26 m/s for T1-8 motor complete paraplegia, and 0.45 m/s for T9-L1 paraplegia. Distances covered in 6 minutes averaged 64 meters for those with C5-6, 74 meters for T1-8, and 121 meters for T9-L1. Additionally, all participants were able to walk on both indoor and outdoor surfaces.\n\n\nCONCLUSIONS\nResults after only 5 sessions suggest that persons with tetraplegia and paraplegia learn to use the Indego exoskeleton quickly and can manage a variety of surfaces. Walking speeds and distances achieved also indicate that some individuals with paraplegia can quickly become limited community ambulators using this system."
},
{
"pmid": "27973671",
"title": "Feasibility of Obtaining Measures of Lifestyle From a Smartphone App: The MyHeart Counts Cardiovascular Health Study.",
"abstract": "Importance\nStudies have established the importance of physical activity and fitness, yet limited data exist on the associations between objective, real-world physical activity patterns, fitness, sleep, and cardiovascular health.\n\n\nObjectives\nTo assess the feasibility of obtaining measures of physical activity, fitness, and sleep from smartphones and to gain insights into activity patterns associated with life satisfaction and self-reported disease.\n\n\nDesign, Setting, and Participants\nThe MyHeart Counts smartphone app was made available in March 2015, and prospective participants downloaded the free app between March and October 2015. In this smartphone-based study of cardiovascular health, participants recorded physical activity, filled out health questionnaires, and completed a 6-minute walk test. The app was available to download within the United States.\n\n\nMain Outcomes and Measures\nThe feasibility of consent and data collection entirely on a smartphone, the use of machine learning to cluster participants, and the associations between activity patterns, life satisfaction, and self-reported disease.\n\n\nResults\nFrom the launch to the time of the data freeze for this study (March to October 2015), the number of individuals (self-selected) who consented to participate was 48 968, representing all 50 states and the District of Columbia. Their median age was 36 years (interquartile range, 27-50 years), and 82.2% (30 338 male, 6556 female, 10 other, and 3115 unknown) were male. In total, 40 017 (81.7% of those who consented) uploaded data. Among those who consented, 20 345 individuals (41.5%) completed 4 of the 7 days of motion data collection, and 4552 individuals (9.3%) completed all 7 days. Among those who consented, 40 017 (81.7%) filled out some portion of the questionnaires, and 4990 (10.2%) completed the 6-minute walk test, made available only at the end of 7 days. The Heart Age Questionnaire, also available after 7 days, required entering lipid values and age 40 to 79 years (among 17 245 individuals, 43.1% of participants). Consequently, 1334 (2.7%) of those who consented completed all fields needed to compute heart age and a 10-year risk score. Physical activity was detected for a mean (SD) of 14.5% (8.0%) of individuals' total recorded time. Physical activity patterns were identified by cluster analysis. A pattern of lower overall activity but more frequent transitions between active and inactive states was associated with equivalent self-reported cardiovascular disease as a pattern of higher overall activity with fewer transitions. Individuals' perception of their activity and risk bore little relation to sensor-estimated activity or calculated cardiovascular risk.\n\n\nConclusions and Relevance\nA smartphone-based study of cardiovascular health is feasible, and improvements in participant diversity and engagement will maximize yield from consented participants. Large-scale, real-world assessment of physical activity, fitness, and sleep using mobile devices may be a useful addition to future population health studies."
},
{
"pmid": "31304343",
"title": "Clinical validation of smartphone-based activity tracking in peripheral artery disease patients.",
"abstract": "Peripheral artery disease (PAD) is a vascular disease that leads to reduced blood flow to the limbs, often causing claudication symptoms that impair patients' ability to walk. The distance walked during a 6-min walk test (6MWT) correlates well with patient claudication symptoms, so we developed the VascTrac iPhone app as a platform for monitoring PAD using a digital 6MWT. In this study, we evaluate the accuracy of the built-in iPhone distance and step-counting algorithms during 6MWTs. One hundred and fourteen (114) participants with PAD performed a supervised 6MWT using the VascTrac app while simultaneously wearing an ActiGraph GT9X Activity Monitor. Steps and distance-walked during the 6MWT were manually measured and used to assess the bias in the iPhone CMPedometer algorithms. The iPhone CMPedometer step algorithm underestimated steps with a bias of -7.2% ± 13.8% (mean ± SD) and had a mean percent difference with the Actigraph (Actigraph-iPhone) of 5.7% ± 20.5%. The iPhone CMPedometer distance algorithm overestimated distance with a bias of 43% ± 42% due to overestimation in stride length. Our correction factor improved distance estimation to 8% ± 32%. The Ankle-Brachial Index (ABI) correlated poorly with steps (R = 0.365) and distance (R = 0.413). Thus, in PAD patients, the iPhone's built-in distance algorithm is unable to accurately measure distance, suggesting that custom algorithms are necessary for using iPhones as a platform for monitoring distance walked in PAD patients. Although the iPhone accurately measured steps, more research is necessary to establish step counting as a clinically meaningful metric for PAD."
},
{
"pmid": "29983994",
"title": "Sharpening the focus: differentiating between focus groups for patient engagement vs. qualitative research.",
"abstract": "PLAIN ENGLISH SUMMARY\nPatient engagement is an opportunity for people with experience of a health-related issue to contribute to research on that issue. The Canadian Strategy for Patient-Oriented Research (SPOR) highlights patient engagement as an important part of health research. Patient engagement, however, is a new concept for many researchers and research ethics boards, and it can be difficult to understand the differences between patient engagement activities and research activities. Focus groups are one example of how research and patient engagement activities are often confused.We distinguish these two types of activities by using different terms for each. We use focus groups to refer to research activities, and discussion groups to refer to patient engagement activities. In focus groups, researchers collect data by speaking with a group of research subjects about their experiences. Researchers use this information to answer research questions and share their findings in academic journals and gatherings. In patient engagement, discussion groups are a way for patients to help plan research projects. Their contributions are not treated as research data, but instead they help make decisions that shape the research process. We have found that using different language to refer to each type of activity has led to improved clarity in research planning and research ethics submissions.\n\n\nABSTRACT\nBackground In patient-oriented research (POR), focus groups can be used as a method in both qualitative research and in patient engagement. Canadian health systems researchers and research ethics boards (REBs), however, are often unaware of the key differences to consider when using focus groups for these two distinct purposes. Furthermore, no one has clearly established how using focus groups for these two purposes should be differentiated in the context of Canada's Strategy for Patient-Oriented Research (SPOR), which emphasizes appropriate patient engagement as a fundamental component of POR. Body Researchers and staff in the Maritime SPOR SUPPORT Unit refer to focus groups in patient engagement as discussion groups for clarity, and have developed internal guidelines to encourage their appropriate use. In this paper, the guidelines comparing and contrasting the design and conduct of focus groups and of discussion groups is described, including: the theoretical framework for each; the need for research ethics board review approval; identifying participants; collecting and analyzing data; ensuring rigour; and disseminating results. Conclusion The MSSU guidelines address an important and current methodological challenge in patient-oriented research, which will benefit Canadian and international health systems researchers, patients, and institutional REBs."
},
{
"pmid": "22428752",
"title": "Patient and public involvement in patient-reported outcome measures: evolution not revolution.",
"abstract": "This paper considers the potential for collaborative patient and public involvement in the development, application, evaluation, and interpretation of patient-reported outcome measures (PROMs). The development of PROMs has followed a well trodden methodological path, with patients contributing as research subjects to the content of many PROMs. This paper argues that the development of PROMs should embrace more collaborative forms of patient and public involvement with patients as research partners in the research process, not just as those individuals who are consulted or as subjects, from whom data are sourced, to ensure the acceptability, relevance, and quality of research. We consider the potential for patients to be involved in a much wider range of methodological activities in PROM development working in partnership with researchers, which we hope will promote paradigmal evolution rather than revolution."
},
{
"pmid": "23984728",
"title": "Macitentan and morbidity and mortality in pulmonary arterial hypertension.",
"abstract": "BACKGROUND\nCurrent therapies for pulmonary arterial hypertension have been adopted on the basis of short-term trials with exercise capacity as the primary end point. We assessed the efficacy of macitentan, a new dual endothelin-receptor antagonist, using a primary end point of morbidity and mortality in a long-term trial.\n\n\nMETHODS\nWe randomly assigned patients with symptomatic pulmonary arterial hypertension to receive placebo once daily, macitentan at a once-daily dose of 3 mg, or macitentan at a once-daily dose of 10 mg. Stable use of oral or inhaled therapy for pulmonary arterial hypertension, other than endothelin-receptor antagonists, was allowed at study entry. The primary end point was the time from the initiation of treatment to the first occurrence of a composite end point of death, atrial septostomy, lung transplantation, initiation of treatment with intravenous or subcutaneous prostanoids, or worsening of pulmonary arterial hypertension.\n\n\nRESULTS\nA total of 250 patients were randomly assigned to placebo, 250 to the 3-mg macitentan dose, and 242 to the 10-mg macitentan dose. The primary end point occurred in 46.4%, 38.0%, and 31.4% of the patients in these groups, respectively. The hazard ratio for the 3-mg macitentan dose as compared with placebo was 0.70 (97.5% confidence interval [CI], 0.52 to 0.96; P=0.01), and the hazard ratio for the 10-mg macitentan dose as compared with placebo was 0.55 (97.5% CI, 0.39 to 0.76; P<0.001). Worsening of pulmonary arterial hypertension was the most frequent primary end-point event. The effect of macitentan on this end point was observed regardless of whether the patient was receiving therapy for pulmonary arterial hypertension at baseline. Adverse events more frequently associated with macitentan than with placebo were headache, nasopharyngitis, and anemia.\n\n\nCONCLUSIONS\nMacitentan significantly reduced morbidity and mortality among patients with pulmonary arterial hypertension in this event-driven study. (Funded by Actelion Pharmaceuticals; SERAPHIN ClinicalTrials.gov number, NCT00660179.)."
}
] |
Frontiers in Public Health | 31993412 | PMC6970971 | 10.3389/fpubh.2019.00400 | Assessing Canadians Health Activity and Nutritional Habits Through Social Media | When conducting data analysis in the twenty-first century, social media is crucial to the analysis due to the ability to provide information on a variety of topics such as health, food, feedback on products, and many others. Presently, users utilize social media to share their daily lifestyles. For example, travel locations, exercises, and food are common subjects of social media posts. By analyzing such information collected from users, the health of the general population can be gauged. This analysis can become an integral part of federal efforts to study the health of a nation's people on a large scale. In this paper, we focus on such efforts from a Canadian lens. Public health is becoming a primary concern for many governments around the world. It is believed that it is necessary to analyze the current scenario within a given population before creating any new policies. Traditionally, governments use a variety of ways to gauge the flavor for any new policy, including door-to-door surveys, a national-level census, or hospital information to decide health policies. This information is limited and sometimes takes a long time to collect and analyze sufficiently to aid in decision making. In this paper, our approach is to solve such problems through the advancement of natural language processing algorithms and large-scale data analysis. Our in-depth results show that the proposed method provides a viable solution in less time with the same accuracy when compared to traditional methods. | 2. Related Work

Google Flu Trends was a real-time flu detection tool based on Google search queries (6). If individuals searched for a remedy for the flu or any medical information related to the flu, the algorithm used that information and considered their location a potential flu-affected area (6). However, the algorithm was proven to be ineffective. Paul and Drendze (7) reported correlations for cancer tweets, which were associated with higher obesity and with tweets regarding smoking. They also found a negative relationship between health care coverage and tweets posted about diseases. With more sophisticated algorithms the accuracy of the data increases, and this can be used to discover more genuine trends when looking at Twitter for health analysis.

Shawndra et al. (8) found that the number of people searching for the sodium content per recipe directly correlated with the number of people admitted to the emergency room of a major urban Washington hospital for congestive heart failure. Eichstaedt found that sentiment analysis of tweet language outperforms traditional socioeconomic surveys for predicting heart disease at the county level (9). They correlated the growth of negative emotions on Twitter with the risk of heart disease on a large scale. This shows that social media analysis can be more effective than traditional surveys and may be the next step in methodology for future analyses carried out by governments.

Culotta et al. (10) analyzed tweets that contain the daily habits of the account holders. The results were a “deep representation” of the US community with regard to their daily negative engagement with routine activities such as watching television, playing, or reading. Abbar also carried out a caloric analysis of Twitter data at the country level, classifying food-related tweets and finding the caloric value of the foods mentioned.
This analysis gave a brief understanding of the food habits of people in different demographic areas (11). Subsequently, the Lexicocalorimeter (LCM) became one of the most sophisticated approaches to analyzing the health of a population at the country level through social media. The LCM is an online instrument designed for social, physical, and psychological examination at a large scale. Sharon et al. (12) developed it for public health monitoring and for creating health policies through data-centric comparison of communities at all scales. Oversimplification exists in such data analysis, meaning that the data are classified into basic categories; this looks only at the data that are present instead of looking deeper into their meaning or relevance, and this methodology is known to cause bias. An example is a tweet that says “the test was a piece of cake,” an idiomatic expression that has very little to do with food. Instruments like the LCM will take this as a food tweet and add it to their trends, which causes errors and inaccurate trends that future models need to address: models need to be resistant to oversimplification. The LCM extracts text related to caloric input and caloric output and calculates its caloric content (13, 14). It uses food phrases from a 450-plus-phrase database and physical-activity phrases from a 550-plus-phrase database. The second step is to group categorically similar words and phrases into small pieces called lemmas and then assign caloric values to them based on the food or physical activity; a greedy selection algorithm is used to obtain these lemmas. The food caloric value is represented as Cin and the activity caloric value as Cout. Crat is calculated as shown in Equation (1).

(1) \(C_{rat} = \dfrac{C_{in}}{C_{out}}\)

To find the average caloric value of different provinces or countries, the frequency of all food- and activity-related words is counted, caloric values are assigned to all words, and the standard Crat formula is then used to compute the caloric ratio of each place. The authors consider 80.7 kilograms as the average weight for the metabolic equivalent of tasks; this is subtracted from the physical activity's caloric value.

2.1. Limitations

For simplicity, the LCM did not use any filter for tweets beyond their geographic location. This causes bias in the dataset because a user may live and eat in different locations, so the user's eating habits affect another location's dataset instead of only their home location's. For example, a user from Toronto might go on a trip and eat in Montreal; with the LCM's current filter, that user's data will affect the datasets of two separate locations instead of just Toronto, as it should. This loophole in the dataset causes inaccuracy.

The LCM's dataset is also quite limited, with only 451 food phrases (15). The food-phrase database contains only the most common food names, which limits its applicability; when people talk about food, it can be called anything, such as the name of a special dish in a certain restaurant (15). Different cultures have different foods, and this is very important in a country as diverse as Canada, so the database of food phrases in our model must be large enough to accommodate all possibilities in order to be accurate.

Another limitation of the LCM is that a Twitter account may talk about food or an activity in a metaphorical way.
Food words are commonly used in idiomatic expressions in the English language; examples include “bring home the bacon,” “crying over spilled milk,” and “cup of tea.” The LCM will still treat these phrases as food items and assign caloric values to them. The approach used in the LCM cannot solve this problem and therefore introduces bias into the system. For example, if a person tweets the phrase “you are the apple of my eye,” the present algorithm will count “apple” as a food, even though in this case it is not related to food at all. The lack of natural language processing (NLP) understanding in such an approach increases the chance of biased output (16): unnecessary data enter the dataset and create false trends, overfitting, and reduced accuracy of the overall analysis.
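As a rough sketch of the Lexicocalorimeter-style computation behind Equation (1), the following Python example counts food and activity phrases in a set of tweets, assigns them calorie values from small lookup tables, and forms the caloric ratio Crat. The phrase lists and calorie values are invented placeholders, far smaller than the 450-plus food and 550-plus activity phrase databases used by the LCM, and no filtering of idiomatic expressions is attempted, which is exactly the limitation discussed above.

```python
# Hypothetical calorie lookups; the LCM uses curated databases of 450+ food
# and 550+ physical-activity phrases with sourced caloric values.
FOOD_CALORIES = {"pizza": 285.0, "salad": 150.0, "donut": 250.0}             # kcal per mention
ACTIVITY_CALORIES = {"running": 600.0, "walking": 250.0, "cycling": 500.0}   # kcal per mention

def caloric_ratio(tweets):
    """Compute C_rat = C_in / C_out over a collection of tweets, Equation (1)."""
    c_in = c_out = 0.0
    for tweet in tweets:
        text = tweet.lower()
        for phrase, kcal in FOOD_CALORIES.items():
            if phrase in text:
                c_in += kcal      # caloric input from food mentions
        for phrase, kcal in ACTIVITY_CALORIES.items():
            if phrase in text:
                c_out += kcal     # caloric output from activity mentions
    return c_in / c_out if c_out else float("inf")

tweets = [
    "Had pizza and a donut today",
    "Morning running along the canal",
    "walking to work instead of driving",
]
print(round(caloric_ratio(tweets), 2))  # 535 / 850 ≈ 0.63
```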
"22163266",
"24626916",
"25045598",
"25605707",
"28187216",
"6399758",
"29384696",
"27858278",
"29358266",
"29310629"
] | [
{
"pmid": "22163266",
"title": "Temporal patterns of happiness and information in a global social network: hedonometrics and Twitter.",
"abstract": "Individual happiness is a fundamental societal metric. Normally measured through self-report, happiness has often been indirectly characterized and overshadowed by more readily quantifiable economic indicators such as gross domestic product. Here, we examine expressions made on the online, global microblog and social networking service Twitter, uncovering and explaining temporal variations in happiness and information levels over timescales ranging from hours to years. Our data set comprises over 46 billion words contained in nearly 4.6 billion expressions posted over a 33 month span by over 63 million unique users. In measuring happiness, we construct a tunable, real-time, remote-sensing, and non-invasive, text-based hedonometer. In building our metric, made available with this paper, we conducted a survey to obtain happiness evaluations of over 10,000 individual words, representing a tenfold size improvement over similar existing word sets. Rather than being ad hoc, our word list is chosen solely by frequency of usage, and we show how a highly robust and tunable metric can be constructed and defended."
},
{
"pmid": "25045598",
"title": "LESSONS LEARNED ABOUT PUBLIC HEALTH FROM ONLINE CROWD SURVEILLANCE.",
"abstract": "The Internet has forever changed the way people access information and make decisions about their healthcare needs. Patients now share information about their health at unprecedented rates on social networking sites such as Twitter and Facebook and on medical discussion boards. In addition to explicitly shared information about health conditions through posts, patients reveal data on their inner fears and desires about health when searching for health-related keywords on search engines. Data are also generated by the use of mobile phone applications that track users' health behaviors (e.g., eating and exercise habits) as well as give medical advice. The data generated through these applications are mined and repackaged by surveillance systems developed by academics, companies, and governments alike to provide insight to patients and healthcare providers for medical decisions. Until recently, most Internet research in public health has been surveillance focused or monitoring health behaviors. Only recently have researchers used and interacted with the crowd to ask questions and collect health-related data. In the future, we expect to move from this surveillance focus to the \"ideal\" of Internet-based patient-level interventions where healthcare providers help patients change their health behaviors. In this article, we highlight the results of our prior research on crowd surveillance and make suggestions for the future."
},
{
"pmid": "25605707",
"title": "Psychological language on Twitter predicts county-level heart disease mortality.",
"abstract": "Hostility and chronic stress are known risk factors for heart disease, but they are costly to assess on a large scale. We used language expressed on Twitter to characterize community-level psychological correlates of age-adjusted mortality from atherosclerotic heart disease (AHD). Language patterns reflecting negative social relationships, disengagement, and negative emotions-especially anger-emerged as risk factors; positive emotions and psychological engagement emerged as protective factors. Most correlations remained significant after controlling for income and education. A cross-sectional regression model based only on Twitter language predicted AHD mortality significantly better than did a model that combined 10 common demographic, socioeconomic, and health risk factors, including smoking, diabetes, hypertension, and obesity. Capturing community psychological characteristics through social media is feasible, and these characteristics are strong markers of cardiovascular mortality at the community level."
},
{
"pmid": "28187216",
"title": "The Lexicocalorimeter: Gauging public health through caloric input and output on social media.",
"abstract": "We propose and develop a Lexicocalorimeter: an online, interactive instrument for measuring the \"caloric content\" of social media and other large-scale texts. We do so by constructing extensive yet improvable tables of food and activity related phrases, and respectively assigning them with sourced estimates of caloric intake and expenditure. We show that for Twitter, our naive measures of \"caloric input\", \"caloric output\", and the ratio of these measures are all strong correlates with health and well-being measures for the contiguous United States. Our caloric balance measure in many cases outperforms both its constituent quantities; is tunable to specific health and well-being measures such as diabetes rates; has the capability of providing a real-time signal reflecting a population's health; and has the potential to be used alongside traditional survey data in the development of public policy and collective self-awareness. Because our Lexicocalorimeter is a linear superposition of principled phrase scores, we also show we can move beyond correlations to explore what people talk about in collective detail, and assist in the understanding and explanation of how population-scale conditions vary, a capacity unavailable to black-box type methods."
},
{
"pmid": "29384696",
"title": "A social network analysis of Canadian food insecurity policy actors.",
"abstract": "PURPOSE\nThis paper aims to: (i) visualize the networks of food insecurity policy actors in Canada, (ii) identify potential food insecurity policy entrepreneurs (i.e., individuals with voice, connections, and persistence) within these networks, and (iii) examine the political landscape for action on food insecurity as revealed by social network analysis.\n\n\nMETHODS\nA survey was administered to 93 Canadian food insecurity policy actors. They were each asked to nominate 3 individuals whom they believed to be policy entrepreneurs. Ego-centred social network maps (sociograms) were generated based on data on nominees and nominators.\n\n\nRESULTS\nSeventy-two percent of the actors completed the survey; 117 unique nominations ensued. Eleven actors obtained 3 or more nominations and thus were considered policy entrepreneurs. The majority of actors nominated actors from the same province (71.5%) and with a similar approach to theirs to addressing food insecurity (54.8%). Most nominees worked in research, charitable, and other nongovernmental organizations.\n\n\nCONCLUSIONS\nNetworks of Canadian food insecurity policy actors exist but are limited in scope and reach, with a paucity of policy entrepreneurs from political, private, or governmental jurisdictions. The networks are divided between food-based solution actors and income-based solution actors, which might impede collaboration among those with differing approaches to addressing food insecurity."
},
{
"pmid": "27858278",
"title": "Primary Health Care Models Addressing Health Equity for Immigrants: A Systematic Scoping Review.",
"abstract": "To examine two healthcare models, specifically \"Primary Medical Care\" (PMC) and \"Primary Health Care\" (PHC) in the context of immigrant populations' health needs. We conducted a systematic scoping review of studies that examined primary care provided to immigrants. We categorized studies into two models, PMC and PHC. We used subjects of access barriers and preventive interventions to analyze the potential of PMC/PHC to address healthcare inequities. From 1385 articles, 39 relevant studies were identified. In the context of immigrant populations, the PMC model was found to be more oriented to implement strategies that improve quality of care of the acute and chronically ill, while PHC models focused more on health promotion and strategies to address cultural and access barriers to care, and preventive strategies to address social determinants of health. Primary Health Care models may be better equipped to address social determinants of health, and thus have more potential to reduce immigrant populations' health inequities."
},
{
"pmid": "29358266",
"title": "Associations between sensory loss and social networks, participation, support, and loneliness: Analysis of the Canadian Longitudinal Study on Aging.",
"abstract": "OBJECTIVE\nTo determine if hearing loss, vision loss, and dual sensory loss were associated with social network diversity, social participation, availability of social support, and loneliness, respectively, in a population-based sample of older Canadians and to determine whether age or sex modified the associations.\n\n\nDESIGN\nCross-sectional population-based study.\n\n\nSETTING\nCanada.\n\n\nPARTICIPANTS\nThe sample included 21 241 participants in the Canadian Longitudinal Study on Aging tracking cohort. The sample was nationally representative of English- and French-speaking, non-institutionalized 45- to 89-year-old Canadians who did not live on First Nations reserves and who had normal cognition. Participants with missing data for any of the variables in the multivariable regression models were excluded from analysis.\n\n\nMAIN OUTCOME MEASURES\nHearing and vision loss were determined by self-report. Dual sensory loss was defined as reporting both hearing and vision loss. Univariate analyses were performed to assess cross-sectional associations between hearing, vision, and dual sensory loss, and social, demographic, and medical variables. Multivariable regression models were used to analyze cross-sectional associations between each type of sensory loss and social network diversity, social participation, availability of social support, and loneliness.\n\n\nRESULTS\nVision loss (in men) and dual sensory loss (in 65- to 85-year-olds) were independently associated with reduced social network diversity. Vision loss and dual sensory loss (in 65- to 85-year-olds) were each independently associated with reduced social participation. All forms of sensory loss were associated with both low availability of social support and loneliness.\n\n\nCONCLUSION\nSensory impairment is associated with reduced social function in older Canadians. Interventions and research that address the social needs of older individuals with sensory loss are needed."
},
{
"pmid": "29310629",
"title": "The Canadian Urban Environmental Health Research Consortium - a protocol for building a national environmental exposure data platform for integrated analyses of urban form and health.",
"abstract": "BACKGROUND\nMultiple external environmental exposures related to residential location and urban form including, air pollutants, noise, greenness, and walkability have been linked to health impacts or benefits. The Canadian Urban Environmental Health Research Consortium (CANUE) was established to facilitate the linkage of extensive geospatial exposure data to existing Canadian cohorts and administrative health data holdings. We hypothesize that this linkage will enable investigators to test a variety of their own hypotheses related to the interdependent associations of built environment features with diverse health outcomes encompassed by the cohorts and administrative data.\n\n\nMETHODS\nWe developed a protocol for compiling measures of built environment features that quantify exposure; vary spatially on the urban and suburban scale; and can be modified through changes in policy or individual behaviour to benefit health. These measures fall into six domains: air quality, noise, greenness, weather/climate, and transportation and neighbourhood factors; and will be indexed to six-digit postal codes to facilitate merging with health databases. Initial efforts focus on existing data and include estimates of air pollutants, greenness, temperature extremes, and neighbourhood walkability and socioeconomic characteristics. Key gaps will be addressed for noise exposure, with a new national model being developed, and for transportation-related exposures, with detailed estimates of truck volumes and diesel emissions now underway in selected cities. Improvements to existing exposure estimates are planned, primarily by increasing temporal and/or spatial resolution given new satellite-based sensors and more detailed national air quality modelling. Novel metrics are also planned for walkability and food environments, green space access and function and life-long climate-related exposures based on local climate zones. Critical challenges exist, for example, the quantity and quality of input data to many of the models and metrics has changed over time, making it difficult to develop and validate historical exposures.\n\n\nDISCUSSION\nCANUE represents a unique effort to coordinate and leverage substantial research investments and will enable a more focused effort on filling gaps in exposure information, improving the range of exposures quantified, their precision and mechanistic relevance to health. Epidemiological studies may be better able to explore the common theme of urban form and health in an integrated manner, ultimately contributing new knowledge informing policies that enhance healthy urban living."
}
] |
Frontiers in Neuroscience | 31992969 | PMC6971124 | 10.3389/fnins.2019.01408 | A Two-Level Transfer Learning Algorithm for Evolutionary Multitasking | Different from conventional single-task optimization, the recently proposed multitasking optimization (MTO) simultaneously deals with multiple optimization tasks with different types of decision variables. MTO explores the underlying similarity and complementarity among the component tasks to improve the optimization process. The well-known multifactorial evolutionary algorithm (MFEA) has been successfully introduced to solve MTO problems based on transfer learning. However, it uses a simple and random inter-task transfer learning strategy, thereby resulting in slow convergence. To deal with this issue, this paper presents a two-level transfer learning (TLTL) algorithm, in which the upper-level implements inter-task transfer learning via chromosome crossover and elite individual learning, and the lower-level introduces intra-task transfer learning based on information transfer of decision variables for an across-dimension optimization. The proposed algorithm fully uses the correlation and similarity among the component tasks to improve the efficiency and effectiveness of MTO. Experimental studies demonstrate the proposed algorithm has outstanding ability of global search and fast convergence rate. | Background and Related WorkThis section introduces the basics of MTO and MFEA, and the related work of Evolutionary MTO.Multitasking OptimizationThe main motivation of MTO is to exploit the inter-task synergy to improve the problem solving. The advantage of MTO over the counterpart single-task optimization in some specific problems has been demonstrated in the literature (Xie et al., 2016; Feng et al., 2017; Ramon and Ong, 2017; Wen and Ting, 2017; Zhou et al., 2017).Without loss of generality, we consider a scenario in which K distinct minimization tasks are solved simultaneously. The j-th task is labeled Tj, and its objective function is defined as Fj(x):Xj→R. In such setting, MTO aims at searching the space of all optimization tasks concurrently for {x1*,…,xk*}=argmin{F1(x1),…,FK(xk)}, where each xj* is a feasible solution in decision space Xj. To compare solution individuals in the MFEA, it is necessary to assign new fitness for each population member pi based on a set of properties as follows (Gupta and Ong, 2016).Definition 1 (Factorial Cost)The factorial cost of an individual is defined as αij = γδij + Fij, where Fij and δij are the objective value and the total constraint violation of individual pi on optimization task Tj, respectively. The coefficient γ is a large penalizing multiplier.Definition 2 (Factorial Rank)For an optimization task Tj, the population individuals are sorted in ascending order with respect to the factorial cost. The factorial rank rij of an individual pi on optimization task Tj is the index value of piin the sort list.Definition 3 (Skill Factor)The skill factor τi of an individual pi is the component task on which pi performs the best τi = argmin{rij}.Definition 4 (Scalar Fitness)The scalar fitness of an individual pi in a multitasking environment is calculated by βi = max{1/ri1,…,1/riK}.Multifactorial Evolutionary AlgorithmThis subsection briefly introduces MFEA (Gupta and Ong, 2016), which is the first evolutionary MTO algorithm inspired by the work (Cloninger et al., 1979). MFEA evaluates a population of N individuals in a unified search space. 
Each individual in the initial population is randomly pre-assigned a dominant task. During evolution, each individual is evaluated with respect to only one task to reduce computing resource consumption. MFEA applies the typical crossover and mutation operators of classical EAs to the population, and the elite individuals for each task in the current generation are selected to form the next generation.

Knowledge transfer in MFEA is implemented through assortative mating and vertical cultural transmission (Gupta and Ong, 2016). If two parent individuals with different skill factors are selected for reproduction, the offspring's dominant tasks and genetic material are inherited randomly from the parents. MFEA therefore uses a simple inter-task transfer-learning strategy with strong randomness.

Evolutionary Multitasking Optimization

Transfer learning is an active research field of machine learning in which related knowledge from a source domain is used to help learning in a target domain. Many transfer learning techniques have been proposed to enable EAs to solve MTO problems. For example, the cross-domain MFEA solves multi-task optimization problems using implicit transfer learning in the crossover operation. Wen and Ting (2017) proposed a utility detection of information sharing and a resource redistribution method to reduce the resource waste of MFEA. Yuan et al. (2017) presented a permutation-based MFEA (P-MFEA) for multi-tasking vehicle routing problems; unlike the original MFEA, which uses a random-key representation, P-MFEA adopts a more effective permutation-based unified representation. Zhou et al. (2017) suggested a novel MFEA for combinatorial MTO problems and developed two new mechanisms to improve search efficiency and decrease computational complexity, respectively. Xie et al. (2016) enhanced the MFEA with particle swarm optimization (PSO). Feng et al. (2017) developed an MFEA with PSO and differential evolution (DE). Bali et al. (2017) put forward a linearized domain adaptation strategy to deal with negative knowledge transfer between uncorrelated tasks. Ramon and Ong (2017) presented a multi-task evolutionary algorithm for search-based software test data generation; their work is the first attempt to demonstrate the feasibility of MFEA for solving real-world problems with more than two tasks. Da et al. (2016) advanced a benchmark problem set and a performance index for single-objective MTO. Yuan et al. (2016) designed a benchmark problem set for multi-objective MTO that can facilitate the development and comparison of MTO algorithms. Hou et al. (2017) proposed an evolutionary transfer reinforcement learning framework for multi-agent intelligent systems that can adapt to dynamic environments. Tan et al. (2017) introduced an adaptive knowledge reuse framework across expensive multi-objective optimization problems, in which multi-problem surrogates reuse knowledge gained from distinct but related problem-solving experiences. Gupta et al. (2018) discussed recent studies on global black-box optimization via knowledge transfer across different problems, including sequential transfer, multitasking, and multiform optimization. For a general survey of transfer learning, the reader is referred to Pan and Yang (2010).
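As a concrete illustration of the factorial-cost-derived properties defined above (Definitions 1-4), the sketch below computes factorial ranks, skill factors, and scalar fitness from a matrix of factorial costs; the cost values are made up for illustration and are not taken from any cited benchmark.

```python
import numpy as np

# Rows: population individuals, columns: component tasks.
# Entries are factorial costs (objective value plus penalized constraint violation); values are illustrative.
factorial_cost = np.array([
    [3.2, 1.1],
    [0.9, 4.5],
    [2.0, 2.0],
])

# Factorial rank: 1-based position of each individual when sorted by cost on each task.
order = np.argsort(factorial_cost, axis=0)
factorial_rank = np.empty_like(order)
for j in range(factorial_cost.shape[1]):
    factorial_rank[order[:, j], j] = np.arange(1, factorial_cost.shape[0] + 1)

# Skill factor: the task on which the individual ranks best (ties broken toward the first task).
skill_factor = np.argmin(factorial_rank, axis=1)

# Scalar fitness: the largest inverse rank across tasks.
scalar_fitness = (1.0 / factorial_rank).max(axis=1)

print(factorial_rank)   # e.g., individual 1 ranks 1st on task 0 and 3rd on task 1
print(skill_factor)     # dominant task index per individual
print(scalar_fitness)
```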
"4797967",
"453202",
"21237920",
"23777254",
"17278560"
] | [
{
"pmid": "453202",
"title": "Multifactorial inheritance with cultural transmission and assortative mating. II. a general model of combined polygenic and cultural inheritance.",
"abstract": "A general linear model of combined polygenic-cultural inheritance is described. The model allows for phenotypic assortative mating, common environment, maternal and paternal effects, and genic-cultural correlation. General formulae for phenotypic correlation between family members in extended pedigrees are given for both primary and secondary assortative mating. A FORTRAN program BETA, available upon request, is used to provide maximum likelihood estimates of the parameters from reported correlations. American data about IQ and Burks' culture index are analyzed. Both cultural and genetic components of phenotypic variance are observed to make significant and substantial contributions to familial resemblance in IQ. The correlation between the environments of DZ twins is found to equal that of singleton sibs, not that of MZ twins. Burks' culture index is found to be an imperfect measure of midparent IQ rather than an index of home environment as previously assumed. Conditions under which the parameters of the model may be uniquely and precisely estimated are discussed. Interpretation of variance components in the presence of assortative mating and genic-cultural covariance is reviewed. A conservative, but robust, approach to the use of environmental indices is described."
},
{
"pmid": "21237920",
"title": "Gene-culture coevolutionary theory.",
"abstract": "Gene-culture coevolutionary theory is a branch of theoretical population genetics that models the transmission of genes and cultural traits from one generation to the next, exploring how they interact. These models have been employed to examine the adaptive advantages of learning and culture, to investigate the forces of cultural change, to partition the variance in complex human behavioral and personality traits, and to address specific cases in human evolution in which there is an interaction between genes and culture."
},
{
"pmid": "23777254",
"title": "MOEA/D with adaptive weight adjustment.",
"abstract": "Recently, MOEA/D (multi-objective evolutionary algorithm based on decomposition) has achieved great success in the field of evolutionary multi-objective optimization and has attracted a lot of attention. It decomposes a multi-objective optimization problem (MOP) into a set of scalar subproblems using uniformly distributed aggregation weight vectors and provides an excellent general algorithmic framework of evolutionary multi-objective optimization. Generally, the uniformity of weight vectors in MOEA/D can ensure the diversity of the Pareto optimal solutions, however, it cannot work as well when the target MOP has a complex Pareto front (PF; i.e., discontinuous PF or PF with sharp peak or low tail). To remedy this, we propose an improved MOEA/D with adaptive weight vector adjustment (MOEA/D-AWA). According to the analysis of the geometric relationship between the weight vectors and the optimal solutions under the Chebyshev decomposition scheme, a new weight vector initialization method and an adaptive weight vector adjustment strategy are introduced in MOEA/D-AWA. The weights are adjusted periodically so that the weights of subproblems can be redistributed adaptively to obtain better uniformity of solutions. Meanwhile, computing efforts devoted to subproblems with duplicate optimal solution can be saved. Moreover, an external elite population is introduced to help adding new subproblems into real sparse regions rather than pseudo sparse regions of the complex PF, that is, discontinuous regions of the PF. MOEA/D-AWA has been compared with four state of the art MOEAs, namely the original MOEA/D, Adaptive-MOEA/D, [Formula: see text]-MOEA/D, and NSGA-II on 10 widely used test problems, two newly constructed complex problems, and two many-objective problems. Experimental results indicate that MOEA/D-AWA outperforms the benchmark algorithms in terms of the IGD metric, particularly when the PF of the MOP is complex."
},
{
"pmid": "17278560",
"title": "Wrapper-filter feature selection algorithm using a memetic framework.",
"abstract": "This correspondence presents a novel hybrid wrapper and filter feature selection algorithm for a classification problem using a memetic framework. It incorporates a filter ranking method in the traditional genetic algorithm to improve classification performance and accelerate the search in identifying the core feature subsets. Particularly, the method adds or deletes a feature from a candidate feature subset based on the univariate feature ranking information. This empirical study on commonly used data sets from the University of California, Irvine repository and microarray data sets shows that the proposed method outperforms existing methods in terms of classification accuracy, number of selected features, and computational efficiency. Furthermore, we investigate several major issues of memetic algorithm (MA) to identify a good balance between local search and genetic search so as to maximize search quality and efficiency in the hybrid filter and wrapper MA."
}
] |
Frontiers in Neurorobotics | 31992979 | PMC6971161 | 10.3389/fnbot.2019.00112 | A Privacy-Preserving Multi-Task Learning Framework for Face Detection, Landmark Localization, Pose Estimation, and Gender Recognition | Recently, multi-task learning (MTL) has been extensively studied for various face processing tasks, including face detection, landmark localization, pose estimation, and gender recognition. This approach endeavors to train a better model by exploiting the synergy among the related tasks. However, the raw face dataset used for training often contains sensitive and private information, which can be maliciously recovered by carefully analyzing the model and outputs. To address this problem, we propose a novel privacy-preserving multi-task learning approach that utilizes the differential private stochastic gradient descent algorithm to optimize the end-to-end multi-task model and weighs the loss functions of multiple tasks to improve learning efficiency and prediction accuracy. Specifically, calibrated noise is added to the gradient of loss functions to preserve the privacy of the training data during model training. Furthermore, we exploit the homoscedastic uncertainty to balance different learning tasks. The experiments demonstrate that the proposed approach yields differential privacy guarantees without decreasing the accuracy of HyperFace under a desirable privacy budget. | 2. Related WorkIn this section, we briefly review differential privacy and multi-task learning.2.1. Differential PrivacyDifferential privacy is a new and promising model presented by Dwork et al. (2006b) in 2006. It provides strong privacy guarantees by requiring the indistinguishability of whether or not an individual's data exists in a dataset (McSherry and Talwar, 2007; Dwork, 2011b; Dwork and Roth, 2014; McMahan et al., 2017; Wang et al., 2018; Erlingsson et al., 2019). We regard a dataset as d or d′ on the basis of whether the individual is present or not. A differential privacy mechanism provides indistinguishability guarantees with respect to the pair (d, d′); the datasets d and d′ are referred to as adjacent datasets. The definition of (ε, δ)-differential privacy is provided as follow: DEFINITION 1. A randomized mechanism
M: D → R satisfies (ε, δ)-differential privacy if, for any two adjacent datasets d, d′ ∈ D and for any subset of outputs Y ⊆ R, it holds that

Pr[M(d) ∈ Y] ≤ e^ε · Pr[M(d′) ∈ Y] + δ.

The parameter ε denotes the privacy budget, which controls the privacy level of M. For a small ε, the probability distributions of the outputs of M on d and d′ are extremely similar, and it is difficult for attackers to distinguish the two datasets. In addition, the parameter δ, which allows a small probability of violating ε-differential privacy, does not exist in the original definition of ε-differential privacy (Dwork et al., 2006a).

There are several common noise perturbation mechanisms for differential privacy that mask the original datasets or intermediate results during model training: the Laplace mechanism, the exponential mechanism, and the Gaussian mechanism. Phan et al. (2017) developed a novel mechanism that injects Laplace noise into the computation of Layer-Wise Relevance Propagation (LRP) to preserve differential privacy in deep learning. Chaudhuri et al. (2011, 2013) adopted the exponential mechanism as a privacy-preserving tuning method by training classifiers with different parameters on disjoint subsets of the data and then randomizing the selection of which classifier to release. In Yin and Liu (2017), numerical evaluations of the Gaussian cumulative density function are used to obtain the optimal variance and improve the utility of output-perturbation Gaussian mechanisms for differential privacy.

To add less noise, the gradient computation of the loss functions samples Gaussian noise instead of Laplacian noise, since the tail of the Gaussian distribution diminishes far more rapidly than that of the Laplacian distribution. A general paradigm for approximating a deterministic real-valued function f: M → ℝ with a differential privacy mechanism is via additive noise calibrated to f's sensitivity Sf, defined as the maximum of the absolute distance |f(d) − f(d′)| over adjacent datasets d and d′. For instance, the Gaussian noise mechanism is defined by

M(d) ≜ f(d) + N(0, Sf²·σ²),

where N(0, Sf²·σ²) is the normal (Gaussian) distribution with mean 0 and standard deviation Sf·σ.

2.2. Multi-Task Learning

MTL is an interesting and promising area of machine learning that aims to improve the performance of multiple related learning tasks by transferring useful information among them. Based on the assumption that all of the tasks, or at least a subset of them, are related, jointly learning multiple tasks has been found empirically and theoretically to lead to better performance than learning them independently. MTL has recently become increasingly popular in many applications, such as recommendation, natural language processing, and face detection. Yin and Liu (2017) proposed a pose-directed multi-task convolutional neural network (CNN) and, most importantly, an energy-based weight analysis method to explore how CNN-based multi-task learning works. However, multi-task learning algorithms may leak information across the models of different tasks: an attacker can participate in the multi-task learning process through one task and thereby acquire model information about another task. To address this problem, Liu et al. (2018) developed a provable privacy-preserving MTL protocol that incorporates a homomorphic encryption technique to achieve the best security guarantee. Xie et al. (2017) proposed a novel privacy-preserving distributed multi-task learning framework for asynchronous updates and privacy preservation. Previous methods always apply privacy preservation to the parameters of models.
In this paper, we combine HyperFace with a differential privacy mechanism for preserving the privacy of original datasets. | [
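As a concrete illustration of the Gaussian mechanism defined above, the following is a minimal sketch that releases a noisy query answer; the toy dataset, the bounded-mean query, and the noise multiplier σ are illustrative assumptions, and in DP-SGD the same perturbation is applied to clipped per-example gradients rather than to a scalar query.

```python
import numpy as np

def gaussian_mechanism(f_value, sensitivity, sigma, rng=None):
    """Release f(d) + N(0, (sensitivity * sigma)**2), i.e., noise with standard deviation sensitivity * sigma."""
    rng = rng if rng is not None else np.random.default_rng()
    return f_value + rng.normal(loc=0.0, scale=sensitivity * sigma)

# Illustrative query: the mean of an attribute clipped to [0, 1] on a toy dataset.
# Under the replace-one notion of adjacency, changing one record moves this mean by at most 1/n.
data = np.clip(np.array([0.2, 0.9, 0.5, 0.7]), 0.0, 1.0)
sensitivity = 1.0 / len(data)
print(gaussian_mechanism(data.mean(), sensitivity, sigma=2.0))
```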
"21892342",
"28809673",
"29990235"
] | [
{
"pmid": "21892342",
"title": "Differentially Private Empirical Risk Minimization.",
"abstract": "Privacy-preserving machine learning algorithms are crucial for the increasingly common setting in which personal data, such as medical or financial records, are analyzed. We provide general techniques to produce privacy-preserving approximations of classifiers learned via (regularized) empirical risk minimization (ERM). These algorithms are private under the ε-differential privacy definition due to Dwork et al. (2006). First we apply the output perturbation ideas of Dwork et al. (2006), to ERM classification. Then we propose a new method, objective perturbation, for privacy-preserving machine learning algorithm design. This method entails perturbing the objective function before optimizing over classifiers. If the loss and regularizer satisfy certain convexity and differentiability criteria, we prove theoretical results showing that our algorithms preserve privacy, and provide generalization bounds for linear and nonlinear kernels. We further present a privacy-preserving technique for tuning the parameters in general machine learning algorithms, thereby providing end-to-end privacy guarantees for the training process. We apply these results to produce privacy-preserving analogues of regularized logistic regression and support vector machines. We obtain encouraging results from evaluating their performance on real demographic and benchmark data sets. Our results show that both theoretically and empirically, objective perturbation is superior to the previous state-of-the-art, output perturbation, in managing the inherent tradeoff between privacy and learning performance."
},
{
"pmid": "28809673",
"title": "Heterogeneous Face Attribute Estimation: A Deep Multi-Task Learning Approach.",
"abstract": "Face attribute estimation has many potential applications in video surveillance, face retrieval, and social media. While a number of methods have been proposed for face attribute estimation, most of them did not explicitly consider the attribute correlation and heterogeneity (e.g., ordinal versus nominal and holistic versus local) during feature representation learning. In this paper, we present a Deep Multi-Task Learning (DMTL) approach to jointly estimate multiple heterogeneous attributes from a single face image. In DMTL, we tackle attribute correlation and heterogeneity with convolutional neural networks (CNNs) consisting of shared feature learning for all the attributes, and category-specific feature learning for heterogeneous attributes. We also introduce an unconstrained face database (LFW+), an extension of public-domain LFW, with heterogeneous demographic attributes (age, gender, and race) obtained via crowdsourcing. Experimental results on benchmarks with multiple face attributes (MORPH II, LFW+, CelebA, LFWA, and FotW) show that the proposed approach has superior performance compared to state of the art. Finally, evaluations on a public-domain face database (LAP) with a single attribute show that the proposed approach has excellent generalization ability."
},
{
"pmid": "29990235",
"title": "HyperFace: A Deep Multi-Task Learning Framework for Face Detection, Landmark Localization, Pose Estimation, and Gender Recognition.",
"abstract": "We present an algorithm for simultaneous face detection, landmarks localization, pose estimation and gender recognition using deep convolutional neural networks (CNN). The proposed method called, HyperFace, fuses the intermediate layers of a deep CNN using a separate CNN followed by a multi-task learning algorithm that operates on the fused features. It exploits the synergy among the tasks which boosts up their individual performances. Additionally, we propose two variants of HyperFace: (1) HyperFace-ResNet that builds on the ResNet-101 model and achieves significant improvement in performance, and (2) Fast-HyperFace that uses a high recall fast face detector for generating region proposals to improve the speed of the algorithm. Extensive experiments show that the proposed models are able to capture both global and local information in faces and performs significantly better than many competitive algorithms for each of these four tasks."
}
] |
GigaScience | 31972019 | PMC6977584 | 10.1093/gigascience/giz163 | Telescope: an interactive tool for managing large-scale analysis from mobile devices | AbstractBackgroundIn today's world of big data, computational analysis has become a key driver of biomedical research. High-performance computational facilities are capable of processing considerable volumes of data, yet often lack an easy-to-use interface to guide the user in supervising and adjusting bioinformatics analysis via a tablet or smartphone.ResultsTo address this gap we proposed Telescope, a novel tool that interfaces with high-performance computational clusters to deliver an intuitive user interface for controlling and monitoring bioinformatics analyses in real-time. By leveraging last generation technology now ubiquitous to most researchers (such as smartphones), Telescope delivers a friendly user experience and manages conectivity and encryption under the hood.ConclusionsTelescope helps to mitigate the digital divide between wet and computational laboratories in contemporary biology. By delivering convenience and ease of use through a user experience not relying on expertise with computational clusters, Telescope can help researchers close the feedback loop between bioinformatics and experimental work with minimal impact on the performance of computational tools. Telescope is freely available at https://github.com/Mangul-Lab-USC/telescope. | Related WorkSeveral tools exist that provide management and monitoring of bioinformatics analysis tasks, but they offer limited functionality and deployment when compared to Telescope. PHPQstat [18] and GE Web Application [19] are open-source PHP applications that provide web interfaces that allow users to monitor the status of jobs managed by SGE, a commonly used high-throughput cluster system. PHPQstat and GE Web Application are limited to use with SGE and display only details of the jobs currently running on the cluster. (Telescope includes in the display for each job additional functionalities, such as job submission and tracking history.) Virtual Desktop (VDI) [20] provides users a web-based user interface to interact with the Faculty of Arts and Sciences, Research Computing (FASRC) Cluster at Harvard University. Among other functionalities, VDI allows users to check the status of a job, edit an existing job, and submit new jobs. However, VDI is proprietary software that is limited to deployment on the FASRC Cluster; implementation details are not publicly available.Applications of distributed processing frameworks, such as Apache Spark [21] and Hadoop MapReduce [22], can be monitored via the framework's web-based user interfaces. These tools display detailed information about each job, including the worker nodes, statuses of job stages, and memory usage. Applications such as Apache Spark, Hadoop, and MapReduce are specifically designed for each framework and are incapable of working with commonly used scheduling systems like SGE or individual cluster systems managed by universities.Several existing tools can be used to create and monitor jobs using a web-based interface, but they support only specific programming languages or processing pipeline formats. For example, Luigi [23] is a Python module that can be used to manage jobs via the Internet. Airflow [24] allows the creation of directed acyclic graphs (DAGs) that specify a pipeline for processing of tasks; it also provides a user interface that allows users to visualize the processing status of the jobs specified by the DAGs. 
Compared with these tools, Telescope is more general because its main objective is to leverage the scheduling systems (e.g., SGE) that are commonly already present on clusters. Thus, Telescope is neither designed for nor restricted to a specific programming language or processing pipeline format. Telescope was initially developed to work with SGE, but it is designed to be configurable for other scheduling systems.

Finally, several tools enable an interactive approach to building and executing bioinformatics analysis tasks but lack a function that allows the user to remotely monitor jobs. Jupyter Notebooks, an open-source web application that supports the creation and sharing of live code and data visualizations, allow users to connect to clusters and run jobs using web browsers [25, 26]. However, the Jupyter Notebooks system does not allow the user to monitor jobs from a mobile device.
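The scheduler polling that such web-based monitors perform on an SGE cluster can be sketched as follows; the header-line count and the column holding the job state are assumptions about typical qstat output and should be adjusted for a specific Grid Engine installation, whereas Telescope itself is written to be configurable rather than tied to one format.

```python
import subprocess
from collections import Counter

def summarize_qstat(user=None):
    """Run qstat and count jobs by state (e.g., 'r' running, 'qw' queued-waiting)."""
    cmd = ["qstat"] + (["-u", user] if user else [])
    out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
    states = Counter()
    for line in out.splitlines()[2:]:      # assumes two header lines, as in typical qstat output
        fields = line.split()
        if len(fields) >= 5:
            states[fields[4]] += 1         # assumes the 5th column holds the job state
        # column positions differ across scheduler versions and sites; adjust as needed
    return states

if __name__ == "__main__":
    print(summarize_qstat())
```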
"28278152",
"27153671",
"31439817",
"29158473",
"30289166",
"26937966",
"29548336",
"30949620",
"28720283",
"26819474",
"31220077"
] | [
{
"pmid": "28278152",
"title": "All biology is computational biology.",
"abstract": "Here, I argue that computational thinking and techniques are so central to the quest of understanding life that today all biology is computational biology. Computational biology brings order into our understanding of life, it makes biological concepts rigorous and testable, and it provides a reference map that holds together individual insights. The next modern synthesis in biology will be driven by mathematical, statistical, and computational methods being absorbed into mainstream biological training, turning biology into a quantitative science."
},
{
"pmid": "27153671",
"title": "Bioinformatics programs are 31-fold over-represented among the highest impact scientific papers of the past two decades.",
"abstract": "MOTIVATION\nTo analyze the relative proportion of bioinformatics papers and their non-bioinformatics counterparts in the top 20 most cited papers annually for the past two decades.\n\n\nRESULTS\nWhen defining bioinformatics papers as encompassing both those that provide software for data analysis or methods underlying data analysis software, we find that over the past two decades, more than a third (34%) of the most cited papers in science were bioinformatics papers, which is approximately a 31-fold enrichment relative to the total number of bioinformatics papers published. More than half of the most cited papers during this span were bioinformatics papers. Yet, the average 5-year JIF of top 20 bioinformatics papers was 7.7, whereas the average JIF for top 20 non-bioinformatics papers was 25.8, significantly higher (P < 4.5 × 10(-29)). The 20-year trend in the average JIF between the two groups suggests the gap does not appear to be significantly narrowing. For a sampling of the journals producing top papers, bioinformatics journals tended to have higher Gini coefficients, suggesting that development of novel bioinformatics resources may be somewhat 'hit or miss'. That is, relative to other fields, bioinformatics produces some programs that are extremely widely adopted and cited, yet there are fewer of intermediate success.\n\n\nCONTACT\[email protected]\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "31439817",
"title": "An in situ high-throughput screen identifies inhibitors of intracellular Burkholderia pseudomallei with therapeutic efficacy.",
"abstract": "Burkholderia pseudomallei (Bp) and Burkholderia mallei (Bm) are Tier-1 Select Agents that cause melioidosis and glanders, respectively. These are highly lethal human infections with limited therapeutic options. Intercellular spread is a hallmark of Burkholderia pathogenesis, and its prominent ties to virulence make it an attractive therapeutic target. We developed a high-throughput cell-based phenotypic assay and screened ∼220,000 small molecules for their ability to disrupt intercellular spread by Burkholderia thailandensis, a closely related BSL-2 surrogate. We identified 268 hits, and cross-species validation found 32 hits that also disrupt intercellular spread by Bp and/or Bm Among these were a fluoroquinolone analog, which we named burkfloxacin (BFX), which potently inhibits growth of intracellular Burkholderia, and flucytosine (5-FC), an FDA-approved antifungal drug. We found that 5-FC blocks the intracellular life cycle at the point of type VI secretion system 5 (T6SS-5)-mediated cell-cell spread. Bacterial conversion of 5-FC to 5-fluorouracil and subsequently to fluorouridine monophosphate is required for potent and selective activity against intracellular Burkholderia In a murine model of fulminant respiratory melioidosis, treatment with BFX or 5-FC was significantly more effective than ceftazidime, the current antibiotic of choice, for improving survival and decreasing bacterial counts in major organs. Our results demonstrate the utility of cell-based phenotypic screening for Select Agent drug discovery and warrant the advancement of BFX and 5-FC as candidate therapeutics for melioidosis in humans."
},
{
"pmid": "29158473",
"title": "NOTCH1 is a mechanosensor in adult arteries.",
"abstract": "Endothelial cells transduce mechanical forces from blood flow into intracellular signals required for vascular homeostasis. Here we show that endothelial NOTCH1 is responsive to shear stress, and is necessary for the maintenance of junctional integrity, cell elongation, and suppression of proliferation, phenotypes induced by laminar shear stress. NOTCH1 receptor localizes downstream of flow and canonical NOTCH signaling scales with the magnitude of fluid shear stress. Reduction of NOTCH1 destabilizes cellular junctions and triggers endothelial proliferation. NOTCH1 suppression results in changes in expression of genes involved in the regulation of intracellular calcium and proliferation, and preventing the increase of calcium signaling rescues the cell-cell junctional defects. Furthermore, loss of Notch1 in adult endothelium increases hypercholesterolemia-induced atherosclerosis in the descending aorta. We propose that NOTCH1 is atheroprotective and acts as a mechanosensor in adult arteries, where it integrates responses to laminar shear stress and regulates junctional integrity through modulation of calcium signaling."
},
{
"pmid": "30289166",
"title": "Individual differences in learning and biogenic amine levels influence the behavioural division between foraging honeybee scouts and recruits.",
"abstract": "Animals must effectively balance the time they spend exploring the environment for new resources and exploiting them. One way that social animals accomplish this balance is by allocating these two tasks to different individuals. In honeybees, foraging is divided between scouts, which tend to explore the landscape for novel resources, and recruits, which tend to exploit these resources. Exploring the variation in cognitive and physiological mechanisms of foraging behaviour will provide a deeper understanding of how the division of labour is regulated in social insect societies. Here, we uncover how honeybee foraging behaviour may be shaped by predispositions in performance of latent inhibition (LI), which is a form of non-associative learning by which individuals learn to ignore familiar information. We compared LI between scouts and recruits, hypothesizing that differences in learning would correlate with differences in foraging behaviour. Scouts seek out and encounter many new odours while locating novel resources, while recruits continuously forage from the same resource, even as its quality degrades. We found that scouts show stronger LI than recruits, possibly reflecting their need to discriminate forage quality. We also found that scouts have significantly elevated tyramine compared to recruits. Furthermore, after associative odour training, recruits have significantly diminished octopamine in their brains compared to scouts. These results suggest that individual variation in learning behaviour shapes the phenotypic behavioural differences between different types of honeybee foragers. These differences in turn have important consequences for how honeybee colonies interact with their environment. Uncovering the proximate mechanisms that influence individual variation in foraging behaviour is crucial for understanding the ecological context in which societies evolve."
},
{
"pmid": "26937966",
"title": "Reproducibility of Fluorescent Expression from Engineered Biological Constructs in E. coli.",
"abstract": "We present results of the first large-scale interlaboratory study carried out in synthetic biology, as part of the 2014 and 2015 International Genetically Engineered Machine (iGEM) competitions. Participants at 88 institutions around the world measured fluorescence from three engineered constitutive constructs in E. coli. Few participants were able to measure absolute fluorescence, so data was analyzed in terms of ratios. Precision was strongly related to fluorescent strength, ranging from 1.54-fold standard deviation for the ratio between strong promoters to 5.75-fold for the ratio between the strongest and weakest promoter, and while host strain did not affect expression ratios, choice of instrument did. This result shows that high quantitative precision and reproducibility of results is possible, while at the same time indicating areas needing improved laboratory practices."
},
{
"pmid": "29548336",
"title": "ROP: dumpster diving in RNA-sequencing to find the source of 1 trillion reads across diverse adult human tissues.",
"abstract": "High-throughput RNA-sequencing (RNA-seq) technologies provide an unprecedented opportunity to explore the individual transcriptome. Unmapped reads are a large and often overlooked output of standard RNA-seq analyses. Here, we present Read Origin Protocol (ROP), a tool for discovering the source of all reads originating from complex RNA molecules. We apply ROP to samples across 2630 individuals from 54 diverse human tissues. Our approach can account for 99.9% of 1 trillion reads of various read length. Additionally, we use ROP to investigate the functional mechanisms underlying connections between the immune system, microbiome, and disease. ROP is freely available at https://github.com/smangul1/rop/wiki ."
},
{
"pmid": "30949620",
"title": "Translating in vivo metabolomic analysis of succinate dehydrogenase deficient tumours into clinical utility.",
"abstract": "PURPOSE\nMutations in the mitochondrial enzyme succinate dehydrogenase (SDH) subunit genes are associated with a wide spectrum of tumours including phaeochromocytoma and paraganglioma (PPGL) 1, 2, gastrointestinal stromal tumours (GIST) 3, renal cell carcinoma (RCC) 4 and pituitary adenomas5. SDH-related tumorigenesis is believed to be secondary to accumulation of the oncometabolite succinate. Our aim was to investigate the potential clinical applications of MRI spectroscopy (1H-MRS) in a range of suspected SDH-related tumours.\n\n\nPATIENTS AND METHODS\nFifteen patients were recruited to this study. Respiratory-gated single-voxel 1H-MRS was performed at 3T to quantify the content of succinate at 2.4 ppm and choline at 3.22 ppm.\n\n\nRESULTS\nA succinate peak was seen in six patients, all of whom had a germline SDHx mutation or loss of SDHB by immunohistochemistry. A succinate peak was also detected in two patients with a metastatic wild-type GIST (wtGIST) and no detectable germline SDHx mutation but a somatic epimutation in SDHC. Three patients without a tumour succinate peak retained SDHB expression, consistent with SDH functionality. In six cases with a borderline or absent peak, technical difficulties such as motion artefact rendered 1H-MRS difficult to interpret. Sequential imaging in a patient with a metastatic abdominal paraganglioma demonstrated loss of the succinate peak after four cycles of [177Lu]-DOTATATE, with a corresponding biochemical response in normetanephrine.\n\n\nCONCLUSIONS\nThis study has demonstrated the translation into clinical practice of in vivo metabolomic analysis using 1H-MRS in patients with SDH-deficient tumours. Potential applications include non-invasive diagnosis and disease stratification, as well as monitoring of tumour response to targeted treatments."
},
{
"pmid": "28720283",
"title": "Addressing the Digital Divide in Contemporary Biology: Lessons from Teaching UNIX.",
"abstract": "Life and medical science researchers increasingly rely on applications that lack a graphical interface. Scientists who are not trained in computer science face an enormous challenge analyzing high-throughput data. We present a training model for use of command-line tools when the learner has little to no prior knowledge of UNIX."
},
{
"pmid": "26819474",
"title": "Galaxy Portal: interacting with the galaxy platform through mobile devices.",
"abstract": "UNLABELLED\n: We present Galaxy Portal app, an open source interface to the Galaxy system through smart phones and tablets. The Galaxy Portal provides convenient and efficient monitoring of job completion, as well as opportunities for inspection of results and execution history. In addition to being useful to the Galaxy community, we believe that the app also exemplifies a useful way of exploiting mobile interfaces for research/high-performance computing resources in general.\n\n\nAVAILABILITY AND IMPLEMENTATION\nThe source is freely available under a GPL license on GitHub, along with user documentation and pre-compiled binaries and instructions for several platforms: https://github.com/Tarostar/QMLGalaxyPortal It is available for iOS version 7 (and newer) through the Apple App Store, and for Android through Google Play for version 4.1 (API 16) or newer.\n\n\nCONTACT\[email protected]."
},
{
"pmid": "31220077",
"title": "Challenges and recommendations to improve the installability and archival stability of omics computational tools.",
"abstract": "Developing new software tools for analysis of large-scale biological data is a key component of advancing modern biomedical research. Scientific reproduction of published findings requires running computational tools on data generated by such studies, yet little attention is presently allocated to the installability and archival stability of computational software tools. Scientific journals require data and code sharing, but none currently require authors to guarantee the continuing functionality of newly published tools. We have estimated the archival stability of computational biology software tools by performing an empirical analysis of the internet presence for 36,702 omics software resources published from 2005 to 2017. We found that almost 28% of all resources are currently not accessible through uniform resource locators (URLs) published in the paper they first appeared in. Among the 98 software tools selected for our installability test, 51% were deemed \"easy to install,\" and 28% of the tools failed to be installed at all because of problems in the implementation. Moreover, for papers introducing new software, we found that the number of citations significantly increased when authors provided an easy installation process. We propose for incorporation into journal policy several practical solutions for increasing the widespread installability and archival stability of published bioinformatics software."
}
] |
IEEE Journal of Translational Engineering in Health and Medicine | 32520000 | PMC6984195 | 10.1109/JTEHM.2016.2597838 | The Utility of Cloud Computing in Analyzing GPU-Accelerated Deformable Image Registration of CT and CBCT Images in Head and Neck Cancer Radiation Therapy | The images generated during radiation oncology treatments provide a valuable resource to conduct analysis for personalized therapy, outcomes prediction, and treatment margin optimization. Deformable image registration (DIR) is an essential tool in analyzing these images. We are enhancing and examining DIR with the contributions of this paper: 1) implementing and investigating a cloud and graphic processing unit (GPU) accelerated DIR solution and 2) assessing the accuracy and flexibility of that solution on planning computed tomography (CT) with cone-beam CT (CBCT). Registering planning CTs and CBCTs aids in monitoring tumors, tracking body changes, and assuring that the treatment is executed as planned. This provides significant information not only on the level of a single patient, but also for an oncology department. However, traditional methods for DIR are usually time-consuming, and manual intervention is sometimes required even for a single registration. In this paper, we present a cloud-based solution in order to increase the data analysis throughput, so that treatment tracking results may be delivered at the time of care. We assess our solution in terms of accuracy and flexibility compared with a commercial tool registering CT with CBCT. The latency of a previously reported mutual information-based DIR algorithm was improved with GPUs for a single registration. This registration consists of rigid registration followed by volume subdivision-based nonrigid registration. In this paper, the throughput of the system was accelerated on the cloud for hundreds of data analysis pairs. Nine clinical cases of head and neck cancer patients were utilized to quantitatively evaluate the accuracy and throughput. Target registration error (TRE) and structural similarity index were utilized as evaluation metrics for registration accuracy. The total computation time consisting of preprocessing the data, running the registration, and analyzing the results was used to evaluate the system throughput. Evaluation showed that the average TRE for GPU-accelerated DIR for each of the nine patients was from 1.99 to 3.39 mm, which is lower than the voxel dimension. The total processing time for 282 pairs on an Amazon Web Services cloud consisting of 20 GPU enabled nodes took less than an hour. Beyond the original registration, the cloud resources also included automatic registration quality checks with minimal impact to timing. Clinical data were utilized in quantitative evaluations, and the results showed that the presented method holds great potential for many high-impact clinical applications in radiation oncology, including adaptive radio therapy, patient outcomes prediction, and treatment margin optimization. | II.Related WorkA comprehensive review of DIR methods in radiation therapy can be found in the literature [3]. There exist two principal DIR approaches: feature-based and intensity-based. Compared with feature-based DIR, intensity-based DIR, our preferred approach, does not rely on feature detection and feature matching. It searches for an optimal transformation between two images based on image intensities instead. 
Although capable of being fully automated and arguably more accurate, intensity-based DIR is usually slower than feature-based DIR. In the family of intensity-based DIR methods, traditional Demons and its variants use an optical flow-based strategy to deform one image to align with the other [4]. These methods have a range of applications in radiation therapy because of the Demons algorithm's speed and simplicity [5]–[7]. However, these methods modify CBCT intensity values to reduce intensity deviations between CT and CBCT, because the Demons algorithm assumes that the two images to be registered are of the same modality. Although based on the same physical principles as conventional CT, CBCT does not yield a consistent intensity value for a particular tissue type because the underlying Feldkamp reconstruction algorithm generates a high-quality image only at the central plane, and intensity degradation increases linearly with distance from it [8], [9]. In addition, noise and artifacts due to the small field of view and hardware limitations of CBCT also cause intensity deviations between CBCT and conventional CT [10].

Mutual information (MI) is the most effective currently known image similarity measure for multimodality DIR [11], [12]. A popular MI-based DIR algorithm based on free-form deformation was proposed by Rueckert et al. [13]. In this algorithm, B-splines are used to describe the smooth and continuous free-form deformation. This method is particularly suitable for recovering local deformations, but its accuracy relies strongly on the density of the control points, and the computation time increases exponentially with the number of control points.

CT-to-CBCT registration using MI- and B-spline-based DIR has been reported, an example of which is the method of Paquin et al. [14]. These investigators further used a multiscale approach to improve the method's efficiency for CT-CBCT registration. The radiation oncology-oriented commercial software Velocity Advanced Imaging (Velocity Medical Solutions, Atlanta, GA) offers B-spline-based DIR with MI as the image similarity measure as part of its fusion module. The Velocity DIR has been evaluated with both phantom and clinical images [15], [16]. In clinical validations, Lawson et al. focused on its applicability to CT-CBCT registration [16]. Both studies showed that the Velocity DIR algorithm was accurate and robust to noise in CBCT images. While accurate, commercial software such as Velocity Medical's grants users limited accessibility and flexibility for making application-specific customizations and for seamless integration into other software workflows.

Distributed computing has also been explored for image registration. Image registration was used as the driver of an investigation into a virtual computational cloud that integrates local computational environments and public cloud services on-the-fly and supports image registration requests from different distributed research groups [17]. Grid computing has also been applied to image registration by viewing the problem as a mesh and decomposing it [18]. We have also previously investigated the use of cluster computing in the context of image registration.
By controlling network topology and fine-grain scheduling, subtasks of a single registration may be farmed out to local nodes [19].
In this article, we improve upon the previous work by using a graphics processing unit (GPU)-accelerated hierarchical volume subdivision (VS)-based DIR method in a GPU-enabled cloud environment. In contrast to the previous work on distributed computing, a cloud computing image registration solution exists on a spectrum between a grid and a cluster. Resources of a grid can be spread across geography and even different owners, while clouds are typically centrally managed and located, providing an opportunity to overcome the transmission latency limitations reported in [18]. At the other end of the distributed spectrum, clusters typically operate on a local area network capable of providing fine-grain control of scheduling. Clouds, by contrast, use loosely coupled nodes. By decoupling the computation for a cloud implementation, there is an opportunity to provide more scalability and less complexity than our previous cluster-based solution [19]. The core algorithm was developed by our team and has been reported and extensively validated previously [20]–[23]. This implementation is referred to as “VS-GPU” henceforth in this article. We further present a quantitative evaluation of applying the VS-GPU algorithm to the registration of clinical CT-CBCT images of head and neck cancer patients and demonstrate that both its speed and accuracy are acceptable for use in radiation oncology studies.
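To make the MI similarity measure discussed above concrete, the following minimal NumPy sketch (an illustration only, not the VS-GPU implementation; the function name, bin count, and toy data are assumptions) estimates MI from the joint intensity histogram of two images. MI remains comparatively high when one image undergoes a monotone intensity remapping plus noise, which is why MI, unlike a plain intensity-difference measure, tolerates the CT/CBCT intensity mismatch described above.

```python
# Illustrative sketch (not the VS-GPU implementation): mutual information
# between two equally shaped images, estimated from a joint intensity histogram.
import numpy as np

def mutual_information(fixed, moving, bins=64):
    joint_hist, _, _ = np.histogram2d(fixed.ravel(), moving.ravel(), bins=bins)
    pxy = joint_hist / joint_hist.sum()      # joint probability of intensity pairs
    px = pxy.sum(axis=1, keepdims=True)      # marginal of the fixed image
    py = pxy.sum(axis=0, keepdims=True)      # marginal of the moving image
    nonzero = pxy > 0                        # avoid log(0)
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))

# Toy usage: a CT-like image versus a "CBCT-like" version with remapped intensities.
rng = np.random.default_rng(0)
ct = rng.integers(0, 256, size=(64, 64)).astype(float)
cbct_like = 0.5 * ct + 30.0 + rng.normal(0.0, 5.0, ct.shape)
print(mutual_information(ct, ct), mutual_information(ct, cbct_like))
```

In an intensity-based registration loop, a measure of this kind would be re-evaluated for each candidate transformation of the moving image and maximized by the optimizer.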
"9873902",
"21626913",
"23032638",
"21626941",
"17587058",
"19235367",
"17079184",
"16157532",
"23851666",
"15376593",
"17881799",
"20004493"
] | [
{
"pmid": "9873902",
"title": "Image matching as a diffusion process: an analogy with Maxwell's demons.",
"abstract": "In this paper, we present the concept of diffusing models to perform image-to-image matching. Having two images to match, the main idea is to consider the objects boundaries in one image as semi-permeable membranes and to let the other image, considered as a deformable grid model, diffuse through these interfaces, by the action of effectors situated within the membranes. We illustrate this concept by an analogy with Maxwell's demons. We show that this concept relates to more traditional ones, based on attraction, with an intermediate step being optical flow techniques. We use the concept of diffusing models to derive three different non-rigid matching algorithms, one using all the intensity levels in the static image, one using only contour points, and a last one operating on already segmented images. Finally, we present results with synthesized deformations and real medical images, with applications to heart motion tracking and three-dimensional inter-patients matching."
},
{
"pmid": "21626913",
"title": "Demons deformable registration of CT and cone-beam CT using an iterative intensity matching approach.",
"abstract": "PURPOSE\nA method of intensity-based deformable registration of CT and cone-beam CT (CBCT) images is described, in which intensity correction occurs simultaneously within the iterative registration process. The method preserves the speed and simplicity of the popular Demons algorithm while providing robustness and accuracy in the presence of large mismatch between CT and CBCT voxel values (\"intensity\").\n\n\nMETHODS\nA variant of the Demons algorithm was developed in which an estimate of the relationship between CT and CBCT intensity values for specific materials in the image is computed at each iteration based on the set of currently overlapping voxels. This tissue-specific intensity correction is then used to estimate the registration output for that iteration and the process is repeated. The robustness of the method was tested in CBCT images of a cadaveric head exhibiting a broad range of simulated intensity variations associated with x-ray scatter, object truncation, and/or errors in the reconstruction algorithm. The accuracy of CT-CBCT registration was also measured in six real cases, exhibiting deformations ranging from simple to complex during surgery or radiotherapy guided by a CBCT-capable C-arm or linear accelerator, respectively.\n\n\nRESULTS\nThe iterative intensity matching approach was robust against all levels of intensity variation examined, including spatially varying errors in voxel value of a factor of 2 or more, as can be encountered in cases of high x-ray scatter. Registration accuracy without intensity matching degraded severely with increasing magnitude of intensity error and introduced image distortion. A single histogram match performed prior to registration alleviated some of these effects but was also prone to image distortion and was quantifiably less robust and accurate than the iterative approach. Within the six case registration accuracy study, iterative intensity matching Demons reduced mean TRE to (2.5 +/- 2.8) mm compared to (3.5 +/- 3.0) mm with rigid registration.\n\n\nCONCLUSIONS\nA method was developed to iteratively correct CT-CBCT intensity disparity during Demons registration, enabling fast, intensity-based registration in CBCT-guided procedures such as surgery and radiotherapy, in which CBCT voxel values may be inaccurate. Accurate CT-CBCT registration in turn facilitates registration of multimodality preoperative image and planning data to intraoperative CBCT by way of the preoperative CT, thereby linking the intraoperative frame of reference to a wealth of preoperative information that could improve interventional guidance."
},
{
"pmid": "23032638",
"title": "CT to cone-beam CT deformable registration with simultaneous intensity correction.",
"abstract": "Computed tomography (CT) to cone-beam CT (CBCT) deformable image registration (DIR) is a crucial step in adaptive radiation therapy. Current intensity-based registration algorithms, such as demons, may fail in the context of CT-CBCT DIR because of inconsistent intensities between the two modalities. In this paper, we propose a variant of demons, called deformation with intensity simultaneously corrected (DISC), to deal with CT-CBCT DIR. DISC distinguishes itself from the original demons algorithm by performing an adaptive intensity correction step on the CBCT image at every iteration step of the demons registration. Specifically, the intensity correction of a voxel in CBCT is achieved by matching the first and the second moments of the voxel intensities inside a patch around the voxel with those on the CT image. It is expected that such a strategy can remove artifacts in the CBCT image, as well as ensuring the intensity consistency between the two modalities. DISC is implemented on computer graphics processing units in compute unified device architecture (CUDA) programming environment. The performance of DISC is evaluated on a simulated patient case and six clinical head-and-neck cancer patient data. It is found that DISC is robust against the CBCT artifacts and intensity inconsistency and significantly improves the registration accuracy when compared with the original demons."
},
{
"pmid": "21626941",
"title": "Deformable planning CT to cone-beam CT image registration in head-and-neck cancer.",
"abstract": "PURPOSE\nThe purpose of this work was to implement and validate a deformable CT to cone-beam computed tomography (CBCT) image registration method in head-and-neck cancer to eventually facilitate automatic target delineation on CBCT.\n\n\nMETHODS\nTwelve head-and-neck cancer patients underwent a planning CT and weekly CBCT during the 5-7 week treatment period. The 12 planning CT images (moving images) of these patients were registered to their weekly CBCT images (fixed images) via the symmetric force Demons algorithm and using a multiresolution scheme. Histogram matching was used to compensate for the intensity difference between the two types of images. Using nine known anatomic points as registration targets, the accuracy of the registration was evaluated using the target registration error (TRE). In addition, region-of-interest (ROI) contours drawn on the planning CT were morphed to the CBCT images and the volume overlap index (VOI) between registered contours and manually delineated contours was evaluated.\n\n\nRESULTS\nThe mean TRE value of the nine target points was less than 3.0 mm, the slice thickness of the planning CT. Of the 369 target points evaluated for registration accuracy, the average TRE value was 2.6 +/- 0.6 mm. The mean TRE for bony tissue targets was 2.4 +/- 0.2 mm, while the mean TRE for soft tissue targets was 2.8 +/- 0.2 mm. The average VOI between the registered and manually delineated ROI contours was 76.2 +/- 4.6%, which is consistent with that reported in previous studies.\n\n\nCONCLUSIONS\nThe authors have implemented and validated a deformable image registration method to register planning CT images to weekly CBCT images in head-and-neck cancer cases. The accuracy of the TRE values suggests that they can be used as a promising tool for automatic target delineation on CBCT."
},
{
"pmid": "17587058",
"title": "Flat-detector computed tomography (FD-CT).",
"abstract": "Flat-panel detectors or, synonymously, flat detectors (FDs) have been developed for use in radiography and fluoroscopy with the defined goal to replace standard X-ray film, film-screen combinations and image intensifiers by an advanced sensor system. FD technology in comparison to X-ray film and image intensifiers offers higher dynamic range, dose reduction, fast digital readout and the possibility for dynamic acquisitions of image series, yet keeping to a compact design. It appeared logical to employ FD designs also for computed tomography (CT) imaging. Respective efforts date back a few years only, but FD-CT has meanwhile become widely accepted for interventional and intra-operative imaging using C-arm systems. FD-CT provides a very efficient way of combining two-dimensional (2D) radiographic or fluoroscopic and 3D CT imaging. In addition, FD technology made its way into a number of dedicated CT scanner developments, such as scanners for the maxillo-facial region or for micro-CT applications. This review focuses on technical and performance issues of FD technology and its full range of applications for CT imaging. A comparison with standard clinical CT is of primary interest. It reveals that FD-CT provides higher spatial resolution, but encompasses a number of disadvantages, such as lower dose efficiency, smaller field of view and lower temporal resolution. FD-CT is not aimed at challenging standard clinical CT as regards to the typical diagnostic examinations; but it has already proven unique for a number of dedicated CT applications, offering distinct practical advantages, above all the availability of immediate CT imaging in the interventional suite or the operating room."
},
{
"pmid": "19235367",
"title": "Multiscale registration of planning CT and daily cone beam CT images for adaptive radiation therapy.",
"abstract": "Adaptive radiation therapy (ART) is the incorporation of daily images in the radiotherapy treatment process so that the treatment plan can be evaluated and modified to maximize the amount of radiation dose to the tumor while minimizing the amount of radiation delivered to healthy tissue. Registration of planning images with daily images is thus an important component of ART. In this article, the authors report their research on multiscale registration of planning computed tomography (CT) images with daily cone beam CT (CBCT) images. The multiscale algorithm is based on the hierarchical multiscale image decomposition of E. Tadmor, S. Nezzar, and L. Vese [Multiscale Model. Simul. 2(4), pp. 554-579 (2004)]. Registration is achieved by decomposing the images to be registered into a series of scales using the (BV, L2) decomposition and initially registering the coarsest scales of the image using a landmark-based registration algorithm. The resulting transformation is then used as a starting point to deformably register the next coarse scales with one another. This procedure is iterated at each stage using the transformation computed by the previous scale registration as the starting point for the current registration. The authors present the results of studies of rectum, head-neck, and prostate CT-CBCT registration, and validate their registration method quantitatively using synthetic results in which the exact transformations our known, and qualitatively using clinical deformations in which the exact results are not known."
},
{
"pmid": "17079184",
"title": "Automatic elastic image registration by interpolation of 3D rotations and translations from discrete rigid-body transformations.",
"abstract": "We present an algorithm for automatic elastic registration of three-dimensional (3D) medical images. Our algorithm initially recovers the global spatial mismatch between the reference and floating images, followed by hierarchical octree-based subdivision of the reference image and independent registration of the floating image with the individual subvolumes of the reference image at each hierarchical level. Global as well as local registrations use the six-parameter full rigid-body transformation model and are based on maximization of normalized mutual information (NMI). To ensure robustness of the subvolume registration with low voxel counts, we calculate NMI using a combination of current and prior mutual histograms. To generate a smooth deformation field, we perform direct interpolation of six-parameter rigid-body subvolume transformations obtained at the last subdivision level. Our interpolation scheme involves scalar interpolation of the 3D translations and quaternion interpolation of the 3D rotational pose. We analyzed the performance of our algorithm through experiments involving registration of synthetically deformed computed tomography (CT) images. Our algorithm is general and can be applied to image pairs of any two modalities of most organs. We have demonstrated successful registration of clinical whole-body CT and positron emission tomography (PET) images using this algorithm. The registration accuracy for this application was evaluated, based on validation using expert-identified anatomical landmarks in 15 CT-PET image pairs. The algorithm's performance was comparable to the average accuracy observed for three expert-determined registrations in the same 15 image pairs."
},
{
"pmid": "16157532",
"title": "Automated 3-dimensional elastic registration of whole-body PET and CT from separate or combined scanners.",
"abstract": "UNLABELLED\nRegistration and fusion of whole-body functional PET and anatomic CT is significant for accurate differentiation of viable tumors from benign masses, radiotherapy planning and monitoring treatment response, and cancer staging. Whole-body PET and CT acquired on separate scanners are misregistered because of differences in patient positions and orientations, couch shapes, and breathing protocols. Although a combined PET/CT scanner removes many of these misalignments, breathing-related nonrigid mismatches still persist.\n\n\nMETHODS\nWe have developed a new, fully automated normalized mutual information-based 3-dimensional elastic image registration technique that can accurately align whole-body PET and CT images acquired on stand-alone scanners as well as a combined PET/CT scanner. The algorithm morphs the PET image to align spatially with the CT image by generating an elastic transformation field by interpolating quaternions and translations from multiple 6-parameter rigid-body registrations, each obtained for hierarchically subdivided image subvolumes. Fifteen whole-body (spanning thorax and abdomen) PET/CT image pairs acquired separately and 5 image pairs acquired on a combined scanner were registered. The cases were selected on the basis of the availability of both CT and PET images, without any other screening criteria, such as a specific clinical condition or prognosis. A rigorous quantitative validation was performed by evaluating algorithm performance in the context of variability among 3 clinical experts in the identification of up to 32 homologous anatomic landmarks.\n\n\nRESULTS\nThe average execution time was 75 and 45 min for images acquired using separate scanners and combined scanner, respectively. Visual inspection indicated improved matching of homologous structures in all cases. The mean registration accuracy (5.5 and 5.9 mm for images from separate scanners and combined scanner, respectively) was found comparable to the mean interexpert difference in landmark identification (5.6 +/- 2.4 and 6.6 +/- 3.4 mm, respectively). The variability in landmark identification did not show statistically significant changes on replacing any expert by the algorithm.\n\n\nCONCLUSION\nWe have presented a new and automated elastic registration algorithm to correct for nonrigid misalignments in whole-body PET/CT images as well as improve the \"mechanical\" registration of a combined PET/CT scanner. The algorithm performance was on par with the average opinion of 3 experts."
},
{
"pmid": "23851666",
"title": "FPGA-Accelerated Deformable Image Registration for Improved Target-Delineation During CT-Guided Interventions.",
"abstract": "Minimally invasive image-guided interventions (IGIs) are time and cost efficient, minimize unintended damage to healthy tissue, and lead to faster patient recovery. With the advent of multislice computed tomography (CT), many IGIs are now being performed under volumetric CT guidance. Registering pre-and intraprocedural images for improved intraprocedural target delineation is a fundamental need in the IGI workflow. Earlier approaches to meet this need primarily employed rigid body approximation, which may not be valid because of nonrigid tissue misalignment between these images. Intensity-based automatic deformable registration is a promising option to correct for this misalignment; however, the long execution times of these algorithms have prevented their use in clinical workflow. This article presents a field-programmable gate array-based architecture for accelerated implementation of mutual information (Ml)-based deformable registration. The reported implementation reduces the execution time of MI-based deformable registration from hours to a few minutes. This work also demonstrates successful registration of abdominal intraprocedural noncontrast CT (iCT) images with preprocedural contrast-enhanced CT (preCT) and positron emission tomography (PET) images using the reported solution. The registration accuracy for this application was evaluated using 5 iCT-preCT and 5 iCT-PET image pairs. The registration accuracy of the hardware implementation is comparable with that achieved using a software implementation and is on the order of a few millimeters. This registration accuracy, coupled with the execution speed and compact implementation of the reported solution, makes it suitable for integration in the IGI-workflow."
},
{
"pmid": "15376593",
"title": "Image quality assessment: from error visibility to structural similarity.",
"abstract": "Objective methods for assessing perceptual image quality traditionally attempted to quantify the visibility of errors (differences) between a distorted image and a reference image using a variety of known properties of the human visual system. Under the assumption that human visual perception is highly adapted for extracting structural information from a scene, we introduce an alternative complementary framework for quality assessment based on the degradation of structural information. As a specific example of this concept, we develop a Structural Similarity Index and demonstrate its promise through a set of intuitive examples, as well as comparison to both subjective ratings and state-of-the-art objective methods on a database of images compressed with JPEG and JPEG2000."
},
{
"pmid": "17881799",
"title": "GPU-based streaming architectures for fast cone-beam CT image reconstruction and demons deformable registration.",
"abstract": "This paper shows how to significantly accelerate cone-beam CT reconstruction and 3D deformable image registration using the stream-processing model. We describe data-parallel designs for the Feldkamp, Davis and Kress (FDK) reconstruction algorithm, and the demons deformable registration algorithm, suitable for use on a commodity graphics processing unit. The streaming versions of these algorithms are implemented using the Brook programming environment and executed on an NVidia 8800 GPU. Performance results using CT data of a preserved swine lung indicate that the GPU-based implementations of the FDK and demons algorithms achieve a substantial speedup--up to 80 times for FDK and 70 times for demons when compared to an optimized reference implementation on a 2.8 GHz Intel processor. In addition, the accuracy of the GPU-based implementations was found to be excellent. Compared with CPU-based implementations, the RMS differences were less than 0.1 Hounsfield unit for reconstruction and less than 0.1 mm for deformable registration."
},
{
"pmid": "20004493",
"title": "Parallel computation of mutual information on the GPU with application to real-time registration of 3D medical images.",
"abstract": "Due to processing constraints, automatic image-based registration of medical images has been largely used as a pre-operative tool. We propose a novel method named sort and count for efficient parallelization of mutual information (MI) computation designed for massively multi-processing architectures. Combined with a parallel transformation implementation and an improved optimization algorithm, our method achieves real-time (less than 1s) rigid registration of 3D medical images using a commodity graphics processing unit (GPU). This represents a more than 50-fold improvement over a standard implementation on a CPU. Real-time registration opens new possibilities for development of improved and interactive intraoperative tools that can be used for enhanced visualization and navigation during an intervention."
}
] |
Frontiers in Neurorobotics | 32038220 | PMC6985151 | 10.3389/fnbot.2019.00113 | Droplet-Transmitted Infection Risk Ranking Based on Close Proximity Interaction | We propose an automatic method to identify people who are potentially-infected by droplet-transmitted diseases. This high-risk group was previously identified by conducting large-scale visits/interviews or by manually screening large volumes of recorded surveillance videos. Both are time-intensive and likely to delay the control of communicable diseases like influenza. In this paper, we address this challenge by solving a multi-tasking problem from the captured surveillance videos. This multi-tasking framework aims to model the principle of Close Proximity Interaction and thus infer the infection risk of individuals. The complete workflow includes three essential sub-tasks: (1) person re-identification (REID), to identify the diagnosed patient and infected individuals across different cameras, (2) depth estimation, to provide spatial knowledge of the captured environment, and (3) pose estimation, to evaluate the distance between the diagnosed and potentially-infected subjects. Our method significantly reduces the time and labor costs. We demonstrate the high accuracy and efficiency of our method, which is expected to accelerate the process of identifying the potentially infected group and ultimately contribute to public health. | 2. Related Work
2.1. Infectious Disease Monitor
Monitoring the spread of infectious disease is critical for taking prompt actions to control its expansion. Close contact between an infectious individual and the population leads to the spread of respiratory infections (Leung et al., 2017). This paper investigates diseases transmitted via droplets.
Conventional methods started with social surveys, asking participants to report their contact patterns, including the number/duration of contacts and other demographic information (including age, gender, and household size) (Eames et al., 2012; Read et al., 2014; Dodd et al., 2015). Understanding the contact pattern allows us to build parameterized models and capture the transmission patterns. Leung et al. (2017) proposed a diary-based design, using both paper and online questionnaires, and found that paper questionnaires led to more reported contacts and longer reported contact durations than online questionnaires. However, conducting such social surveys and questionnaires requires a significant amount of time and effort.
Researchers also use wearable devices to analyze the contact patterns among a group of individuals. A recent study measured face-to-face proximity between family members within 16 households with infants younger than six months for 2-5 consecutive days of data collection (Ozella et al., 2018). Another study compared two methods, reporting with paper diaries and recording with wearable sensors, for monitoring contact patterns at a conference (Smieszek et al., 2016). It found that reporting was notably incomplete for contacts <5 min and that participants appeared to overestimate the duration of their contacts. The typical device is RFID-based and has proved useful in a variety of scenarios, including a pediatric hospital (Isella et al., 2011), a tertiary care hospital (Voirin et al., 2015), and a primary school (Stehlé et al., 2011).
The merit of using wearable devices is the high-resolution measurement of contact matrices between individuals carrying the device. However, this approach is not feasible in wide, dynamic, and unconstrained scenarios.
Different from existing methods, our work utilizes surveillance cameras as the capture devices and processes the video input with state-of-the-art computer vision techniques. Our method quantitatively models the principle of close proximity interaction and introduces a graph structure to represent the contact pattern.
2.2. Person Re-identification
Person re-identification is a long-standing and significant problem that has profound application value for a wide range of fields such as security, health care, and business. It aims at re-identifying the person of interest from a collection of images or videos taken by multiple non-overlapping cameras in a large distributed space over a prolonged period. Re-ID is fundamentally challenging due to three difficulties: (1) diverse visual appearance changes caused by variations in view angle, lighting, background clutter, and occlusion; (2) the difficulty of producing a discriminating feature representation invariant to background clutter; and (3) over-fitting due to the limited scale of labeled datasets.
Two types of solutions have been proposed to address these problems. One is to learn a more distinctive feature representation to make a trade-off between recognition accuracy and generalization ability. The other is to leverage Siamese neural networks and the triplet loss to minimize the distance between images of the same identity and maximize the distance between images of different identities. We briefly survey the person re-identification literature from these two aspects in this paper.
2.2.1. Improvements in Feature Representation
Improvements in feature representation are mainly achieved by leveraging local parts of the person. Representative methods applied part-informed features such as segmentation masks, pose, and gait. The pose-sensitive model proposed by Saquib Sarfraz et al. (2018) incorporates both fine- and coarse-grained pose information into a CNN to learn the feature representation without explicitly modeling body parts. The combined representation includes both the view captured by the camera and the joint locations, which ensures a discriminating embedding. Song et al. (2018) proposed a mask-guided contrastive attention model to learn features separately from the background and human bodies. Their work takes the binary body mask as input to remove the background at the pixel level and uses gait information as features. However, failure cases occur when discriminative body parts are missing. The Horizontal Pyramid Matching (HPM) approach proposed by Fu et al. (2018) addresses this problem by using partial feature representations at different horizontal pyramid scales and adopting average and max pooling to cope with inter-person variations. For similarity measurement, metric learning approaches are exploited, such as cross-view quadratic discriminant analysis (Liao et al., 2015), relative distance comparison optimization (the PRDC algorithm) (Zheng et al., 2011), and locally-adaptive decision functions (LADF) (Li et al., 2013).
2.2.2. Siamese Neural Network Architecture
Siamese neural network architectures are also adopted to tackle person re-identification by taking image pairs or triplets (Ding et al., 2015) as input. Siamese CNNs (S-CNN) for person re-identification were presented in Yi et al. (2014) and Li et al. (2014).
Improvements such as the Gated Siamese CNN (Varior et al., 2016) aim at capturing finer local patterns to enhance discriminative capacity. Cheng et al. (2016) proposed a Multi-Channel Parts-Based CNN with an improved triplet loss, consisting of multiple channels that jointly learn global full-body and local body-part features. The triplet loss is also widely used to learn fine-grained image similarity metrics (Wang et al., 2014). The quadruplet loss (Chen et al., 2017c) strengthens the generalization capability and leads the model to produce a larger inter-class variation and a smaller intra-class variation than the triplet loss.
2.3. Multi-Person Pose Estimation
Multi-person pose estimation aims at recognizing and locating key points on multiple persons in the image, which is the basis for resolving technical challenges such as human action recognition (HAR) and motion analysis. Single-person pose estimation is based on the assumption that the person dominates the image content. Deep learning methods perform well when this assumption is satisfied. However, for our specific problem in this paper, the case of a single person in one captured image seldom occurs. Thus, we focus our survey on the multi-person pose estimation problem. Cases such as occluded or invisible key points and background clutter lead to significant difficulties for multi-person pose estimation. State-of-the-art approaches built on CNNs can be mainly divided into two categories: bottom-up approaches and top-down approaches.
2.3.1. Bottom-Up Approaches
Bottom-up approaches (Insafutdinov et al., 2016; Pishchulin et al., 2016; Cao et al., 2017) mainly adopt the strategy of detecting all key points in the image first and then matching poses to individuals. Deepcut (Pishchulin et al., 2016) casts the problem in the form of an Integer Linear Program (ILP), and the proposed partitioning and labeling formulation jointly solves the tasks of detection and pose estimation. A follow-up work, Deepercut (Insafutdinov et al., 2016), achieves better results by adopting image-conditioned pairwise terms with a deeper ResNet (He et al., 2016). An open-source effort, Openpose (Cao et al., 2017), uses a non-parametric representation referred to as Part Affinity Fields (PAFs) for associating body parts with individuals, achieving real-time performance with high accuracy.
2.3.2. Top-Down Approaches
Top-down approaches (Fang et al., 2017; Huang et al., 2017; Papandreou et al., 2017; Chen et al., 2018) take the opposite strategy: they first locate and partition all persons in the image and then apply single-person pose estimation to each person individually. The Cascaded Pyramid Network (CPN) (Chen et al., 2018) takes two steps to cope with overlapping or obscured keypoints: a GlobalNet for easily recognized keypoints and a RefineNet for hard ones. Papandreou et al. (2017) leverage Faster RCNN (Ren et al., 2015) as the person detector and a fully convolutional ResNet to predict heatmaps and offsets. A recent work based on Mask-RCNN (He et al., 2017) extends Faster RCNN to predict human keypoints by combining the human bounding box and the corresponding feature map.
2.4. Multi-Tasking Intelligence
Multi-tasking refers to the capability of solving many tasks simultaneously. Current advances in artificial intelligence outperform human beings in effortlessly handling multiple tasks without switching costs.
There are a couple of mainstream techniques for solving multi-tasking problems.
One popular technique is to use evolutionary algorithms to tackle the problem of multi-tasking. This is referred to as evolutionary multi-tasking optimization. In classic EAs, different optimization problems are typically solved independently. Researchers have proposed a variety of techniques, such as the multi-factorial memetic algorithm (Chen et al., 2017a), opposition-based learning (Yu et al., 2019), cross-task search directions (Yin et al., 2019), explicit autoencoding (Feng et al., 2018), and the cooperative co-evolutionary memetic algorithm (Chen et al., 2017b), to solve the multi-tasking problem. Evolutionary multi-tasking algorithms share knowledge among individual tasks and accelerate the convergence of multiple optimization tasks (Liang et al., 2019).
Domains relevant to multi-tasking include transfer learning and multi-objective optimization. A linearized domain adaptation (LDA) strategy transforms the search space of a simple task into a search space similar to that of its constitutive complex task (Bali et al., 2017). Researchers have also explored the use of transfer learning to tackle dynamic multi-objective optimization (Jiang et al., 2018). This method can significantly speed up the evolutionary process by reusing past experience and generating an effective initial population pool. The multi-objective formulation makes it possible to exploit the underlying similarity between different optimization exercises and automates the information transfer, which improves convergence (Gupta et al., 2016).
Inspired by the methods mentioned above, our method solves a multi-tasking problem by effectively taking advantage of the information from a few building blocks. Our method applies directly to real-world scenarios to identify potentially-infected subjects. To date, we have found this problem to be under-explored.
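To make the close proximity interaction principle concrete, the following minimal Python sketch (a simplified illustration under assumed names and thresholds, not the authors' implementation) accumulates pairwise exposure time from per-frame 3D positions, such as those obtained by combining pose estimation with depth, and ranks contacts of a diagnosed index case. The 1.5 m radius and the per-frame time step are assumptions for the example.

```python
# Minimal sketch (assumed names and parameters, not the authors' code):
# accumulate a close-proximity-interaction (CPI) graph from per-frame 3D
# positions of re-identified people, then rank contacts of an index case.
import itertools
import math
from collections import defaultdict

PROXIMITY_THRESHOLD_M = 1.5   # assumed droplet-transmission radius, in metres
FRAME_DT_S = 1.0              # assumed time between processed frames, in seconds

def accumulate_cpi(frames, threshold=PROXIMITY_THRESHOLD_M, dt=FRAME_DT_S):
    """frames: list of {person_id: (x, y, z)} estimated from pose and depth.
    Returns {(id_a, id_b): seconds the pair spent within `threshold`}."""
    exposure = defaultdict(float)
    for positions in frames:
        for (a, pa), (b, pb) in itertools.combinations(sorted(positions.items()), 2):
            if math.dist(pa, pb) <= threshold:
                exposure[(a, b)] += dt
    return exposure

def rank_contacts(exposure, index_case):
    """Rank other individuals by cumulative exposure time to the diagnosed case."""
    scores = defaultdict(float)
    for (a, b), seconds in exposure.items():
        if index_case in (a, b):
            scores[b if a == index_case else a] += seconds
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy usage with three tracked identities over two frames.
frames = [{"patient": (0, 0, 0), "p1": (1.0, 0, 0), "p2": (4.0, 0, 0)},
          {"patient": (0, 0, 0), "p1": (1.2, 0, 0), "p2": (1.4, 0, 0)}]
print(rank_contacts(accumulate_cpi(frames), "patient"))  # p1 accumulates more exposure than p2
```

In the full pipeline described above, identities would come from the REID module and 3D positions from pose estimation combined with depth; the resulting exposure graph would then feed the infection risk ranking.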
"26646292",
"22412366",
"29994415",
"27164616",
"26424846",
"21386902",
"28801623",
"29879196",
"24789897",
"21149721",
"27449511",
"21858018",
"25695165",
"28700268",
"19447706"
] | [
{
"pmid": "26646292",
"title": "Age- and Sex-Specific Social Contact Patterns and Incidence of Mycobacterium tuberculosis Infection.",
"abstract": "We aimed to model the incidence of infection with Mycobacterium tuberculosis among adults using data on infection incidence in children, disease prevalence in adults, and social contact patterns. We conducted a cross-sectional face-to-face survey of adults in 2011, enumerating \"close\" (shared conversation) and \"casual\" (shared indoor space) social contacts in 16 Zambian communities and 8 South African communities. We modeled the incidence of M. tuberculosis infection in all age groups using these contact patterns, as well as the observed incidence of M. tuberculosis infection in children and the prevalence of tuberculosis disease in adults. A total of 3,528 adults participated in the study. The reported rates of close and casual contact were 4.9 per adult per day (95% confidence interval: 4.6, 5.2) and 10.4 per adult per day (95% confidence interval: 9.3, 11.6), respectively. Rates of close contact were higher for adults in larger households and rural areas. There was preferential mixing of close contacts within age groups and within sexes. The estimated incidence of M. tuberculosis infection in adults was 1.5-6 times higher (2.5%-10% per year) than that in children. More than 50% of infections in men, women, and children were estimated to be due to contact with adult men. We conclude that estimates of infection incidence based on surveys in children might underestimate incidence in adults. Most infections may be due to contact with adult men. Treatment and control of tuberculosis in men is critical to protecting men, women, and children from tuberculosis."
},
{
"pmid": "22412366",
"title": "Measured dynamic social contact patterns explain the spread of H1N1v influenza.",
"abstract": "Patterns of social mixing are key determinants of epidemic spread. Here we present the results of an internet-based social contact survey completed by a cohort of participants over 9,000 times between July 2009 and March 2010, during the 2009 H1N1v influenza epidemic. We quantify the changes in social contact patterns over time, finding that school children make 40% fewer contacts during holiday periods than during term time. We use these dynamically varying contact patterns to parameterise an age-structured model of influenza spread, capturing well the observed patterns of incidence; the changing contact patterns resulted in a fall of approximately 35% in the reproduction number of influenza during the holidays. This work illustrates the importance of including changing mixing patterns in epidemic models. We conclude that changes in contact patterns explain changes in disease incidence, and that the timing of school terms drove the 2009 H1N1v epidemic in the UK. Changes in social mixing patterns can be usefully measured through simple internet-based surveys."
},
{
"pmid": "29994415",
"title": "Evolutionary Multitasking via Explicit Autoencoding.",
"abstract": "Evolutionary multitasking (EMT) is an emerging research topic in the field of evolutionary computation. In contrast to the traditional single-task evolutionary search, EMT conducts evolutionary search on multiple tasks simultaneously. It aims to improve convergence characteristics across multiple optimization problems at once by seamlessly transferring knowledge among them. Due to the efficacy of EMT, it has attracted lots of research attentions and several EMT algorithms have been proposed in the literature. However, existing EMT algorithms are usually based on a common mode of knowledge transfer in the form of implicit genetic transfer through chromosomal crossover. This mode cannot make use of multiple biases embedded in different evolutionary search operators, which could give better search performance when properly harnessed. Keeping this in mind, this paper proposes an EMT algorithm with explicit genetic transfer across tasks, namely EMT via autoencoding, which allows the incorporation of multiple search mechanisms with different biases in the EMT paradigm. To confirm the efficacy of the proposed EMT algorithm with explicit autoencoding, comprehensive empirical studies have been conducted on both the single- and multi-objective multitask optimization problems."
},
{
"pmid": "27164616",
"title": "Multiobjective Multifactorial Optimization in Evolutionary Multitasking.",
"abstract": "In recent decades, the field of multiobjective optimization has attracted considerable interest among evolutionary computation researchers. One of the main features that makes evolutionary methods particularly appealing for multiobjective problems is the implicit parallelism offered by a population, which enables simultaneous convergence toward the entire Pareto front. While a plethora of related algorithms have been proposed till date, a common attribute among them is that they focus on efficiently solving only a single optimization problem at a time. Despite the known power of implicit parallelism, seldom has an attempt been made to multitask, i.e., to solve multiple optimization problems simultaneously. It is contended that the notion of evolutionary multitasking leads to the possibility of automated transfer of information across different optimization exercises that may share underlying similarities, thereby facilitating improved convergence characteristics. In particular, the potential for automated transfer is deemed invaluable from the standpoint of engineering design exercises where manual knowledge adaptation and reuse are routine. Accordingly, in this paper, we present a realization of the evolutionary multitasking paradigm within the domain of multiobjective optimization. The efficacy of the associated evolutionary algorithm is demonstrated on some benchmark test functions as well as on a real-world manufacturing process design problem from the composites industry."
},
{
"pmid": "26424846",
"title": "Social contacts, vaccination decisions and influenza in Japan.",
"abstract": "BACKGROUND\nContact patterns and vaccination decisions are fundamental to transmission dynamics of infectious diseases. We report on age-specific contact patterns in Japan and their effect on influenza vaccination behaviour.\n\n\nMETHODS\nJapanese adults (N=3146) were surveyed in Spring 2011 to assess the number of their social contacts within a 24 h period, defined as face-to-face conversations within 2 m, and gain insight into their influenza-related behaviour. We analysed the duration and location of contacts according to age. Additionally, we analysed the probability of vaccination and influenza infection in relation to the number of contacts controlling for individual's characteristics.\n\n\nRESULTS\nThe mean and median reported numbers of daily contacts were 15.3 and 12.0, respectively. School-aged children and young adults reported the greatest number of daily contacts, and individuals had the most contacts with those in the same age group. The age-specific contact patterns were different between men and women, and differed between weekdays and weekends. Children had fewer contacts between the same age groups during weekends than during weekdays, due to reduced contacts at school. The probability of vaccination increased with the number of contacts, controlling for age and household size. Influenza infection among unvaccinated individuals was higher than for those vaccinated, and increased with the number of contacts.\n\n\nCONCLUSIONS\nContact patterns in Japan are age and gender specific. These contact patterns, as well as their interplay with vaccination decisions and infection risks, can help inform the parameterisation of mathematical models of disease transmission and the design of public health policies, to control disease transmission."
},
{
"pmid": "21386902",
"title": "Close encounters in a pediatric ward: measuring face-to-face proximity and mixing patterns with wearable sensors.",
"abstract": "BACKGROUND\nNosocomial infections place a substantial burden on health care systems and represent one of the major issues in current public health, requiring notable efforts for its prevention. Understanding the dynamics of infection transmission in a hospital setting is essential for tailoring interventions and predicting the spread among individuals. Mathematical models need to be informed with accurate data on contacts among individuals.\n\n\nMETHODS AND FINDINGS\nWe used wearable active Radio-Frequency Identification Devices (RFID) to detect face-to-face contacts among individuals with a spatial resolution of about 1.5 meters, and a time resolution of 20 seconds. The study was conducted in a general pediatrics hospital ward, during a one-week period, and included 119 participants, with 51 health care workers, 37 patients, and 31 caregivers. Nearly 16,000 contacts were recorded during the study period, with a median of approximately 20 contacts per participants per day. Overall, 25% of the contacts involved a ward assistant, 23% a nurse, 22% a patient, 22% a caregiver, and 8% a physician. The majority of contacts were of brief duration, but long and frequent contacts especially between patients and caregivers were also found. In the setting under study, caregivers do not represent a significant potential for infection spread to a large number of individuals, as their interactions mainly involve the corresponding patient. Nurses would deserve priority in prevention strategies due to their central role in the potential propagation paths of infections.\n\n\nCONCLUSIONS\nOur study shows the feasibility of accurate and reproducible measures of the pattern of contacts in a hospital setting. The obtained results are particularly useful for the study of the spread of respiratory infections, for monitoring critical patterns, and for setting up tailored prevention strategies. Proximity-sensing technology should be considered as a valuable tool for measuring such patterns and evaluating nosocomial prevention strategies in specific settings."
},
{
"pmid": "28801623",
"title": "Social contact patterns relevant to the spread of respiratory infectious diseases in Hong Kong.",
"abstract": "The spread of many respiratory infections is determined by contact patterns between infectious and susceptible individuals in the population. There are no published data for quantifying social contact patterns relevant to the spread of respiratory infectious diseases in Hong Kong which is a hotspot for emerging infectious diseases due to its high population density and connectivity in the air transportation network. We adopted a commonly used diary-based design to conduct a social contact survey in Hong Kong in 2015/16 using both paper and online questionnaires. Participants using paper questionnaires reported more contacts and longer contact duration than those using online questionnaires. Participants reported 13 person-hours of contact and 8 contacts per day on average, which decreased over age but increased with household size, years of education and income level. Prolonged and frequent contacts, and contacts at home, school and work were more likely to involve physical contacts. Strong age-assortativity was observed in all age groups. We evaluated the characteristics of social contact patterns relevant to the spread of respiratory infectious diseases in Hong Kong. Our findings could help to improve the design of future social contact surveys, parameterize transmission models of respiratory infectious diseases, and inform intervention strategies based on model outputs."
},
{
"pmid": "29879196",
"title": "Close encounters between infants and household members measured through wearable proximity sensors.",
"abstract": "Describing and understanding close proximity interactions between infant and family members can provide key information on transmission opportunities of respiratory infections within households. Among respiratory infections, pertussis represents a public health priority. Pertussis infection can be particularly harmful to young, unvaccinated infants and for these patients, family members represent the main sources of transmission. Here, we report on the use of wearable proximity sensors based on RFID technology to measure face-to-face proximity between family members within 16 households with infants younger than 6 months for 2-5 consecutive days of data collection. The sensors were deployed over the course of approximately 1 year, in the context of a national research project aimed at the improvement of infant pertussis prevention strategies. We investigated differences in close-range interactions between family members and we assessed whether demographic variables or feeding practices affect contact patterns between parents and infants. A total of 5,958 contact events were recorded between 55 individuals: 16 infants, 4 siblings, 31 parents and 4 grandparents. The aggregated contact networks, obtained for each household, showed a heterogeneous distribution of the cumulative time spent in proximity with the infant by family members. Contact matrices defined by age and by family role showed that most of the contacts occurred between the infant and other family members (70%), while 30% of contacts was among family members (infants excluded). Many contacts were observed between infants and adults, in particular between infant and mother, followed by father, siblings and grandparents. A larger number of contacts and longer contact durations between infant and other family members were observed in families adopting exclusive breastfeeding, compared to families in which the infant receives artificial or mixed feeding. Our results demonstrate how a high-resolution measurement of contact matrices within infants' households is feasible using wearable proximity sensing devices. Moreover, our findings suggest the mother is responsible for the large majority of the infant's contact pattern, thus being the main potential source of infection for a transmissible disease. As the contribution to the infants' contact pattern by other family members is very variable, vaccination against pertussis during pregnancy is probably the best strategy to protect young, unvaccinated infants."
},
{
"pmid": "24789897",
"title": "Social mixing patterns in rural and urban areas of southern China.",
"abstract": "A dense population, global connectivity and frequent human-animal interaction give southern China an important role in the spread and emergence of infectious disease. However, patterns of person-to-person contact relevant to the spread of directly transmitted infections such as influenza remain poorly quantified in the region. We conducted a household-based survey of travel and contact patterns among urban and rural populations of Guangdong, China. We measured the character and distance from home of social encounters made by 1821 individuals. Most individuals reported 5-10 h of contact with around 10 individuals each day; however, both distributions have long tails. The distribution of distance from home at which contacts were made is similar: most were within a kilometre of the participant's home, while some occurred further than 500 km away. Compared with younger individuals, older individuals made fewer contacts which tended to be closer to home. There was strong assortativity in age-based contact rates. We found no difference between the total number or duration of contacts between urban and rural participants, but urban participants tended to make contacts closer to home. These results can improve mathematical models of infectious disease emergence, spread and control in southern China and throughout the region."
},
{
"pmid": "21149721",
"title": "A high-resolution human contact network for infectious disease transmission.",
"abstract": "The most frequent infectious diseases in humans--and those with the highest potential for rapid pandemic spread--are usually transmitted via droplets during close proximity interactions (CPIs). Despite the importance of this transmission route, very little is known about the dynamic patterns of CPIs. Using wireless sensor network technology, we obtained high-resolution data of CPIs during a typical day at an American high school, permitting the reconstruction of the social network relevant for infectious disease transmission. At 94% coverage, we collected 762,868 CPIs at a maximal distance of 3 m among 788 individuals. The data revealed a high-density network with typical small-world properties and a relatively homogeneous distribution of both interaction time and interaction partners among subjects. Computer simulations of the spread of an influenza-like disease on the weighted contact graph are in good agreement with absentee data during the most recent influenza season. Analysis of targeted immunization strategies suggested that contact network data are required to design strategies that are significantly more effective than random immunization. Immunization strategies based on contact network data were most effective at high vaccination coverage."
},
{
"pmid": "27449511",
"title": "Contact diaries versus wearable proximity sensors in measuring contact patterns at a conference: method comparison and participants' attitudes.",
"abstract": "BACKGROUND\nStudies measuring contact networks have helped to improve our understanding of infectious disease transmission. However, several methodological issues are still unresolved, such as which method of contact measurement is the most valid. Further, complete network analysis requires data from most, ideally all, members of a network and, to achieve this, acceptance of the measurement method. We aimed at investigating measurement error by comparing two methods of contact measurement - paper diaries vs. wearable proximity sensors - that were applied concurrently to the same population, and we measured acceptability.\n\n\nMETHODS\nWe investigated the contact network of one day of an epidemiology conference in September 2014. Seventy-six participants wore proximity sensors throughout the day while concurrently recording their contacts with other study participants in a paper-diary; they also reported on method acceptability.\n\n\nRESULTS\nThere were 329 contact reports in the paper diaries, corresponding to 199 contacts, of which 130 were noted by both parties. The sensors recorded 316 contacts, which would have resulted in 632 contact reports if there had been perfect concordance in recording. We estimated the probabilities that a contact was reported in a diary as: P = 72 % for <5 min contact duration (significantly lower than the following, p < 0.05), P = 86 % for 5-15 min, P = 89 % for 15-60 min, and P = 94 % for >60 min. The sets of sensor-measured and self-reported contacts had a large intersection, but neither was a subset of the other. Participants' aggregated contact duration was mostly substantially longer in the diary data than in the sensor data. Twenty percent of respondents (>1 reported contact) stated that filling in the diary was too much work, 25 % of respondents reported difficulties in remembering contacts, and 93 % were comfortable having their conference contacts measured by sensors.\n\n\nCONCLUSION\nReporting and recording were not complete; reporting was particularly incomplete for contacts <5 min. The types of contact that both methods are capable of detecting are partly different. Participants appear to have overestimated the duration of their contacts. Conducting a study with diaries or wearable sensors was acceptable to and mostly easily done by participants. Both methods can be applied meaningfully if their specific limitations are considered and incompleteness is accounted for."
},
{
"pmid": "21858018",
"title": "High-resolution measurements of face-to-face contact patterns in a primary school.",
"abstract": "BACKGROUND\nLittle quantitative information is available on the mixing patterns of children in school environments. Describing and understanding contacts between children at school would help quantify the transmission opportunities of respiratory infections and identify situations within schools where the risk of transmission is higher. We report on measurements carried out in a French school (6-12 years children), where we collected data on the time-resolved face-to-face proximity of children and teachers using a proximity-sensing infrastructure based on radio frequency identification devices.\n\n\nMETHODS AND FINDINGS\nData on face-to-face interactions were collected on Thursday, October 1(st) and Friday, October 2(nd) 2009. We recorded 77,602 contact events between 242 individuals (232 children and 10 teachers). In this setting, each child has on average 323 contacts per day with 47 other children, leading to an average daily interaction time of 176 minutes. Most contacts are brief, but long contacts are also observed. Contacts occur mostly within each class, and each child spends on average three times more time in contact with classmates than with children of other classes. We describe the temporal evolution of the contact network and the trajectories followed by the children in the school, which constrain the contact patterns. We determine an exposure matrix aimed at informing mathematical models. This matrix exhibits a class and age structure which is very different from the homogeneous mixing hypothesis.\n\n\nCONCLUSIONS\nWe report on important properties of the contact patterns between school children that are relevant for modeling the propagation of diseases and for evaluating control measures. We discuss public health implications related to the management of schools in case of epidemics and pandemics. Our results can help define a prioritization of control measures based on preventive measures, case isolation, classes and school closures, that could reduce the disruption to education during epidemics."
},
{
"pmid": "25695165",
"title": "Combining high-resolution contact data with virological data to investigate influenza transmission in a tertiary care hospital.",
"abstract": "OBJECTIVE\nContact patterns and microbiological data contribute to a detailed understanding of infectious disease transmission. We explored the automated collection of high-resolution contact data by wearable sensors combined with virological data to investigate influenza transmission among patients and healthcare workers in a geriatric unit.\n\n\nDESIGN\nProof-of-concept observational study. Detailed information on contact patterns were collected by wearable sensors over 12 days. Systematic nasopharyngeal swabs were taken, analyzed for influenza A and B viruses by real-time polymerase chain reaction, and cultured for phylogenetic analysis.\n\n\nSETTING\nAn acute-care geriatric unit in a tertiary care hospital.\n\n\nPARTICIPANTS\nPatients, nurses, and medical doctors.\n\n\nRESULTS\nA total of 18,765 contacts were recorded among 37 patients, 32 nurses, and 15 medical doctors. Most contacts occurred between nurses or between a nurse and a patient. Fifteen individuals had influenza A (H3N2). Among these, 11 study participants were positive at the beginning of the study or at admission, and 3 patients and 1 nurse acquired laboratory-confirmed influenza during the study. Infectious medical doctors and nurses were identified as potential sources of hospital-acquired influenza (HA-Flu) for patients, and infectious patients were identified as likely sources for nurses. Only 1 potential transmission between nurses was observed.\n\n\nCONCLUSIONS\nCombining high-resolution contact data and virological data allowed us to identify a potential transmission route in each possible case of HA-Flu. This promising method should be applied for longer periods in larger populations, with more complete use of phylogenetic analyses, for a better understanding of influenza transmission dynamics in a hospital setting."
},
{
"pmid": "28700268",
"title": "US healthcare costs attributable to type A and type B influenza.",
"abstract": "While the overall healthcare burden of seasonal influenza in the United States (US) has been well characterized, the proportion of influenza burden attributable to type A and type B illness warrants further elucidation. The aim of this study was to estimate numbers of healthcare encounters and healthcare costs attributable to influenza viral strains A and B in the US during the 2001/2002 - 2008/2009 seasons. Healthcare encounters and costs in the US during the 2001/2002 - 2008/2009 seasons for influenza type A and influenza type B were estimated separately and collectively, by season and age group, based on data from published literature and secondary sources for: rates of influenza-related encounters requiring formal healthcare, unit costs of influenza-related healthcare encounters, and estimates of population size. Across 8 seasons, projected annual numbers of influenza-related healthcare encounters ranged from 11.3-25.6 million, and healthcare costs, from $2.0-$5.8 billion. While the majority of influenza illness was attributable to type A strains, type B strains accounted for 37% of healthcare costs across all seasons, and as much as 66% in a single season. The outpatient burden of type B disease was considerable among persons aged 18-64 y while the hospital cost burden was highest in young children. Influenza viral strain B was associated with considerable health system burden each year during the period of interest. Increasing influenza vaccine coverage, especially with the recently approved quadrivalent products including an additional type B strain, could potentially reduce overall annual influenza burden in the US."
},
{
"pmid": "19447706",
"title": "A study on gait-based gender classification.",
"abstract": "Gender is an important cue in social activities. In this correspondence, we present a study and analysis of gender classification based on human gait. Psychological experiments were carried out. These experiments showed that humans can recognize gender based on gait information, and that contributions of different body components vary. The prior knowledge extracted from the psychological experiments can be combined with an automatic method to further improve classification accuracy. The proposed method which combines human knowledge achieves higher performance than some other methods, and is even more accurate than human observers. We also present a numerical analysis of the contributions of different human components, which shows that head and hair, back, chest and thigh are more discriminative than other components. We also did challenging cross-race experiments that used Asian gait data to classify the gender of Europeans, and vice versa. Encouraging results were obtained. All the above prove that gait-based gender classification is feasible in controlled environments. In real applications, it still suffers from many difficulties, such as view variation, clothing and shoes changes, or carrying objects. We analyze the difficulties and suggest some possible solutions."
}
] |
BMC Medical Informatics and Decision Making | 31992273 | PMC6986067 | 10.1186/s12911-020-1029-z | Usage of cloud storage facilities by medical students in a low-middle income country, Sri Lanka: a cross sectional study | Background: Cloud storage facilities (CSF) has become popular among the internet users. There is limited data on CSF usage among university students in low middle-income countries including Sri Lanka. In this study we present the CSF usage among medical students at the Faculty of Medicine, University of Kelaniya.
Methods: We undertook a cross sectional study at the Faculty of Medicine, University of Kelaniya, Sri Lanka. Stratified random sampling was used to recruit students representing all the batches. A self-administrated questionnaire was given.
Results: Of 261 (90.9%) respondents, 181 (69.3%) were females. CSF awareness was 56.5% (95%CI: 50.3–62.6%) and CSF usage was 50.8% (95%CI: 44.4–57.2%). Awareness was higher in males (P = 0.003) and was low in senior students. Of CSF aware students, 85% knew about Google Drive and 70.6% used it. 73.6 and 42.1% knew about Dropbox and OneDrive. 50.0 and 22.0% used them respectively. There was no association between CSF awareness and pre-university entrance or undergraduate examination performance. Inadequate knowledge, time, accessibility, security and privacy concerns limited CSF usage. 69.8% indicated that they would like to undergo training on CSF as an effective tool for education.
Conclusion: CSF awareness and usage among the students were 56.5 and 50.8%. Google drive is the most popular CSF. Lack of knowledge, accessibility, concerns on security and privacy limited CSF usage among students. Majority were interested to undergo training on CSF and undergraduate Information Communication Technology (ICT) curricula should introduce CSF as effective educational tools. | Related works
This section describes the previous literature in recent years related to the use of CSF by university students.
Several recent studies related to the use of CSF by university students have been carried out. One large-scale online survey [15] conducted by Meske et al. targeted more than 3000 participants, including students (72%) as well as employees (28%), at the University of Muenster in Germany. The analysis of the survey results indicated a high demand for cloud service solutions in the German higher education sector, where most of the students (85%) used at least one cloud service (employees: 73%). Students mainly used cloud services for educational purposes (project work: 83%; teaching material: 78%), whereas the employees' main use was to save work-related materials (78%). The most important reason for rejecting cloud storage services was security concerns (students: 64%; employees: 62%). The primary aim of that paper was to describe and present the main results of a preliminary large-scale survey on cloud services at the University of Muenster with more than 3000 participants, in order to identify how a cloud service should be designed to be attractive for the target audience.
Another study [19], by Ashtari & Eydgahi, examined how engineering students at Eastern Michigan University accept and use cloud services long after their adoption in the education process.
The researchers used the Technology Acceptance Model (TAM) and the Determinants of Perceived Ease of Use model to assess cloud computing (CC) adoption by students. 97.5% of the participants indicated that they used the cloud-based university class management application, which enabled direct student access to Google Drive and the rest of the Google cloud suite. The majority of the students (97.5%) were using at least two forms of cloud technology, and 87.5% were using three or more applications. The reasons students gave for using cloud applications were accessibility, the ability to share data, the low cost and the ability to back up files. The most common concerns were data privacy, fear of losing data, and difficulty of use. The researchers suggest that a combined model drawing on more aspects of internet technology would be more useful in further examinations of cloud computing adoption.
Stantchev et al. [20] used TAM to investigate the motivations that lead higher education students to replace several Learning Management System (LMS) services with cloud file hosting services for information sharing and collaboration. Their findings extended previous research that had investigated the use of Dropbox to cover certain weaknesses of LMS within the higher education setting. The results showed that Dropbox received better valuations than the LMS for the three constructs considered: attitude toward using, perceived ease of use and perceived usefulness.
Another study based on first-year medical students is the work of Peacock & Grande [21]. The main objective of their work was to present the results of effectively using a free Google cloud suite, including Google Drive, to manage and teach a first-year pathology course at Mayo Medical School in the USA. The results demonstrated that the Google cloud suite allowed faculty to build an efficient and effective classroom teaching and management system, and 87% of participants responded positively in favor of Google Drive as a storage location for course materials.
Ibrahim Arpaci et al. [12] investigated the adoption of cloud computing for knowledge management using TAM. The researchers examined the involvement of cloud services in knowledge creation and discovery, storage, sharing, and application among students, and concluded that integrating cloud computing services into educational settings may promote students' academic performance, effectiveness, and efficiency by facilitating knowledge management, mainly because cloud services enable students to access and synchronize their digital reference materials at any time, from anywhere, and using any device. Table 1 compares the similarities and differences between the current study and the other works discussed in this section.
Table 1. Comparison with previous works. Columns: Author; Similarities and differences compared to the current study.
Meske et al. in 2014 [15]: An online survey that included the whole student population and employees at the University of Muenster in Germany, whereas the current study used a printed questionnaire to collect data from a sample of medical students. Both surveys focused on the use of CSF.
Ashtari & Eydgahi in 2017 [19]: The study sample was selected by inviting 40 engineering students in a specified study setup, and the objective was to determine the students' use and acceptance of CC using TAM. The current study applied stratified random sampling to select the study sample from a medical faculty and specifically examined the use of CSF.
Stantchev et al. in 2014 [20]: TAM was used to report weaknesses of several LMS services relative to the Dropbox cloud hosting service, with a sample of 121 final-year and master's-level computer science students. Students' use of Dropbox as a CSF is the similarity between the two studies.
Peacock & Grande in 2016 [21]: This study involved first-year medical students in order to examine the possibility of effectively using a free Google cloud suite, including Google Drive, to manage and teach a first-year pathology course. Medical students were involved in both studies; the current study specifically examines CSF usage in the education process by all medical students.
Ibrahim et al. in 2017 [12]: 221 students in the Information Technology (IT) subject stream who followed a training course on knowledge management and CC were involved in the study, and the adoption of cloud services for knowledge management was examined using TAM. Both studies focused on cloud services but on different research aspects. | [
"25889846",
"24395632",
"25782601",
"28580694"
] | [
{
"pmid": "25889846",
"title": "Cultural adaptation of a shared decision making tool with Aboriginal women: a qualitative study.",
"abstract": "BACKGROUND\nShared decision making (SDM) may narrow health equity gaps experienced by Aboriginal women. SDM tools such as patient decision aids can facilitate SDM between the client and health care providers; SDM tools for use in Western health care settings have not yet been developed for and with Aboriginal populations. This study describes the adaptation and usability testing of a SDM tool, the Ottawa Personal Decision Guide (OPDG), to support decision making by Aboriginal women.\n\n\nMETHODS\nAn interpretive descriptive qualitative study was structured by the Ottawa Decision Support Framework and used a postcolonial theoretical lens. An advisory group was established with representation from the Aboriginal community and used a mutually agreed-upon ethical framework. Eligible participants were Aboriginal women at Minwaashin Lodge. First, the OPDG was discussed in focus groups using a semi-structured interview guide. Then, individual usability interviews were conducted using a semi-structured interview guide with decision coaching. Iterative adaptations to the OPDG were made during focus groups and usability interviews until saturation was reached. Transcripts were coded using thematic analysis and themes confirmed in collaboration with an advisory group.\n\n\nRESULTS\nAboriginal women 20 to 60 years of age and self-identifying as First Nations, Métis, or Inuit participated in two focus groups (n = 13) or usability interviews (n = 6). Seven themes were developed that either reflected or affirmed OPDG adaptions: 1) \"This paper makes it hard for me to show that I am capable of making decisions\"; 2) \"I am responsible for my decisions\"; 3) \"My past and current experiences affect the way I make decisions\"; 4) \"People need to talk with people\"; 5) \"I need to fully participate in making my decisions\"; 6) \"I need to explore my decision in a meaningful way\"; 7) \"I need respect for my traditional learning and communication style\".\n\n\nCONCLUSIONS\nAdaptations resulted in a culturally adapted version of the OPDG that better met the needs of Aboriginal women participants and was more accessible with respect to health literacy assumptions. Decision coaching was identified as required to enhance engagement in the decision making process and using the adapted OPDG as a talking guide."
},
{
"pmid": "24395632",
"title": "Integrating web 2.0 in clinical research education in a developing country.",
"abstract": "The use of Web 2.0 tools in education and health care has received heavy attention over the past years. Over two consecutive years, Children's Cancer Hospital - Egypt 57357 (CCHE 57357), in collaboration with Egyptian universities, student bodies, and NGOs, conducted a summer course that supports undergraduate medical students to cross the gap between clinical practice and clinical research. This time, there was a greater emphasis on reaching out to the students using social media and other Web 2.0 tools, which were heavily used in the course, including Google Drive, Facebook, Twitter, YouTube, Mendeley, Google Hangout, Live Streaming, Research Electronic Data Capture (REDCap), and Dropbox. We wanted to investigate the usefulness of integrating Web 2.0 technologies into formal educational courses and modules. The evaluation survey was filled in by 156 respondents, 134 of whom were course candidates (response rate = 94.4 %) and 22 of whom were course coordinators (response rate = 81.5 %). The course participants came from 14 different universities throughout Egypt. Students' feedback was positive and supported the integration of Web 2.0 tools in academic courses and modules. Google Drive, Facebook, and Dropbox were found to be most useful."
},
{
"pmid": "25782601",
"title": "An online app platform enhances collaborative medical student group learning and classroom management.",
"abstract": "PURPOSE\nThe authors presented their results in effectively using a free and widely-accessible online app platform to manage and teach a first-year pathology course at Mayo Medical School.\n\n\nMETHODS USED\nThe authors utilized the Google \"Blogger\", \"Forms\", \"Flubaroo\", \"Sheets\", \"Docs\", and \"Slides\" apps to effectively build a collaborative classroom teaching and management system. Students were surveyed on the use of the app platform in the classroom, and 44 (94%) students responded.\n\n\nRESULTS\nThirty-two (73%) of the students reported that \"Blogger\" was an effective place for online discussion of pathology topics and questions. 43 (98%) of the students reported that the \"Forms/Flubaroo\" grade-reporting system was helpful. 40 (91%) of the students used the remote, collaborative features of \"Slides\" to create team-based learning presentations, and 39 (89%) of the students found those collaborative features helpful. \"Docs\" helped teaching assistants to collaboratively create study guides or grading rubrics. Overall, 41 (93%) of the students found that the app platform was helpful in establishing a collaborative, online classroom environment.\n\n\nCONCLUSIONS\nThe online app platform allowed faculty to build an efficient and effective classroom teaching and management system. The ease of accessibility and opportunity for collaboration allowed for collaborative learning, grading, and teaching."
}
] |
Frontiers in Genetics | 32038712 | PMC6987458 | 10.3389/fgene.2019.01353 | Non-Negative Symmetric Low-Rank Representation Graph Regularized Method for Cancer Clustering Based on Score Function | As an important approach to cancer classification, cancer sample clustering is of particular importance for cancer research. For high dimensional gene expression data, examining approaches to selecting characteristic genes with high identification for cancer sample clustering is an important research area in the bioinformatics field. In this paper, we propose a novel integrated framework for cancer clustering known as the non-negative symmetric low-rank representation with graph regularization based on score function (NSLRG-S). First, a lowest rank matrix is obtained after NSLRG decomposition. The lowest rank matrix preserves the local data manifold information and the global data structure information of the gene expression data. Second, we construct the Score function based on the lowest rank matrix to weight all of the features of the gene expression data and calculate the score of each feature. Third, we rank the features according to their scores and select the feature genes for cancer sample clustering. Finally, based on selected feature genes, we use the K-means method to cluster the cancer samples. The experiments are conducted on The Cancer Genome Atlas (TCGA) data. Comparative experiments demonstrate that the NSLRG-S framework can significantly improve the clustering performance. | Related Work
In this section, we briefly introduce the original Low-Rank Representation (LRR) method (Liu et al., 2010), related variants based on the original LRR method, and the Laplacian Score method (He et al., 2006).
Low-Rank Representation
Original LRR Method
The Low-Rank Representation (LRR) method is an efficient method for exploring observed data and for subspace clustering. The main idea is that each data sample can be represented as a linear combination of the dictionary data. In general, the matrix $X = [x_1, x_2, \ldots, x_n] \in \mathbb{R}^{m \times n}$ represents the observed data, of which each column is a data sample. Therefore, the matrix $X$ contains $n$ data samples drawn from independent subspaces. The matrix $D = [d_1, d_2, \ldots, d_k] \in \mathbb{R}^{m \times k}$ represents the dictionary data and is overcomplete. The general model of the LRR method is formulated as follows:

(1) $\min_{Z} \operatorname{rank}(Z) \quad \text{s.t.} \quad X = DZ,$

where the matrix $Z \in \mathbb{R}^{k \times n}$ is the coefficient matrix. The aim of this model is to learn a lowest-rank matrix $Z^{*}$ to represent the observed data $X$. In practical applications, the matrix $X$ itself usually replaces $D$ as the dictionary (Liu et al., 2010; Liu et al., 2013). Therefore, $Z$ becomes a square matrix, $Z \in \mathbb{R}^{n \times n}$. The element $z_{ij} \in Z^{*}_{n \times n}$ can denote the confidence that samples $i$ and $j$ lie in the same subspace (Wang et al., 2019b). Hence, the matrix $Z^{*}$ can be used in subspace clustering, which groups the data samples into several sets, with each set corresponding to a subspace.
The problem $\min_{Z} \operatorname{rank}(Z)$ involves a rank function, which is difficult to optimize and NP-hard in nature. To mitigate this problem, the best alternative is a convex relaxation of problem (1), written as follows:

(2) $\min_{Z} \|Z\|_{*} \quad \text{s.t.} \quad X = XZ,$

where $\|\cdot\|_{*}$ is the nuclear norm, defined as $\|Z\|_{*} = \sum_{i=1}^{n} \delta_i$ with $\delta_i$ the $i$-th singular value of the matrix $Z \in \mathbb{R}^{n \times n}$. It has been confirmed in the literature (Cai et al., 2010) that the matrix $Z$ of the LRR can capture the global structure of the raw data through the nuclear norm term. Furthermore, to handle real data contaminated by noise and outliers, a more practical formulation is obtained after adjustment:

(3) $\min_{Z,E} \|Z\|_{*} + \lambda \|E\|_{P} \quad \text{s.t.} \quad X = XZ + E,$

where $\|E\|_{P}$ is the error term, with $P$ chosen to model specific noise or outliers according to prior information about the errors, such as the $\ell_1$-norm ($\|E\|_1$) or the $\ell_{2,1}$-norm ($\|E\|_{2,1}$) (Chen and Yang, 2014), and $\lambda > 0$ is the parameter that trades off the effect of the error term.
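Problems (2) and (3) above are usually solved with inexact augmented-Lagrange-multiplier (ALM/ADMM) schemes whose core building block is the proximal operator of the nuclear norm, namely singular value thresholding. The following Python/NumPy sketch shows only that building block; it is an illustrative assumption about how such solvers are commonly structured, not the NSLRG-S authors' code, and the surrounding ALM iterations, penalty updates and stopping rules are omitted.

```python
import numpy as np

def singular_value_thresholding(M, tau):
    """Proximal operator of tau * ||.||_*: soft-threshold the singular values.

    Inexact-ALM/ADMM solvers for nuclear-norm problems such as (2) and (3)
    apply this step repeatedly, alternating it with updates of the error
    term E and of the Lagrange multipliers.
    """
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)   # soft-thresholding of the singular values
    return (U * s_shrunk) @ Vt            # same as U @ diag(s_shrunk) @ Vt

# Example: shrinking a random matrix pushes it towards low rank.
Z = np.random.randn(20, 20)
Z_low_rank = singular_value_thresholding(Z, tau=2.0)
print(np.linalg.matrix_rank(Z_low_rank), "<=", np.linalg.matrix_rank(Z))
```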
Many researchers have proposed variants based on the original LRR method. The main idea is to introduce constraint terms to optimize or improve the existing methods. For example, the original LRR method has been improved by considering the geometric structure within the data, including the graph-regularization method (Lu et al., 2013) and the k-nearest-neighbour graph method (Yin et al., 2016). Different norm terms have also been used to improve the robustness of the original LRR method (Wang et al., 2018), among others.
LRR With Graph Regularization
Under certain conditions, the geometric structure within the data is crucial for the result that we desire. To address this issue, researchers introduced graph regularization into the LRR method, yielding the graph-regularized low-rank representation (GLRR) method (Lu et al., 2013). The GLRR problem is written as follows:

(4) $\min_{Z,E} \|Z\|_{*} + \lambda_{1}\operatorname{tr}(ZLZ^{T}) + \lambda_{2}\|E\|_{2,1} \quad \text{s.t.} \quad X = XZ + E,$

where the error term uses the $\ell_{2,1}$-norm, $\|E\|_{2,1} = \sum_{j=1}^{n} \sqrt{\sum_{i=1}^{m} ([E]_{ij})^{2}}$, $\operatorname{tr}(\cdot)$ is the trace of a matrix, $L$ is the graph Laplacian, and $\lambda_{1}$ and $\lambda_{2}$ are two parameters used to balance the graph-regularization term and the error term. Based on manifold learning, the graph-regularization term ensures that the representation points $z_i$ and $z_j$ preserve the relationship of the data points $x_i$ and $x_j$ of $X$ that are close on the intrinsic manifold. Therefore, the inherent geometric structure of the raw data is preserved in the low-rank matrix $Z$.
Non-Negative LRR With Sparsity
The non-negativity constraint ensures that every data point lies in the convex hull of its neighbours, and the sparsity constraint ensures that each sample is associated with only a few other samples. The non-negative and sparse low-rank matrix thus supplies well-discriminated weights for the subspaces and information groups.
Inspired by these insights, Zhuang et al. proposed the non-negative low-rank and sparse graph (NNLRS) method (Zhuang et al., 2012), formulated as follows:

(5) $\min_{Z,E} \|Z\|_{*} + \lambda_{1}\|Z\|_{1} + \lambda_{2}\|E\|_{2,1} \quad \text{s.t.} \quad X = XZ + E,\ Z \geq 0,$

where $\|Z\|_{1}$ is the $\ell_{1}$-norm, which guarantees the sparsity of the coefficient matrix. In real-world applications, the sparse and non-negative matrix $Z$ obtained by the NNLRS method can offer a basis for semi-supervised learning by constructing a discriminative and informative graph (You et al., 2016).
Laplacian Score Method
Following Laplacian eigenmaps (Belkin and Niyogi, 2001) and locality preserving projection (He and Niyogi, 2005), the aim of the Laplacian Score (LS) method is to evaluate features based on their locality-preserving power (He et al., 2006). The LS is defined as follows:

(6) $LS(r) = \frac{\sum_{ij} (x_{ri} - x_{rj})^{2} S_{ij}}{\operatorname{Var}(x_{r,:})}, \quad 1 \leq r \leq m,\ 1 \leq i \leq j \leq n,$

where the heat-kernel function $S_{ij} = e^{-\frac{\|x_i - x_j\|^{2}}{t}}$ is used to obtain the weight matrix $S$, and $t$ is a suitable constant that is set empirically. The matrix $S$ models the local structure of the raw data space. $\operatorname{Var}(x_{r,:})$ is the estimated variance of the $r$-th feature over all data points; the larger $\operatorname{Var}(x_{r,:})$, the more information is held by the $r$-th feature. The term $\sum_{ij} (x_{ri} - x_{rj})^{2}$ is the sum of the differences in the expression of the $r$-th feature between all samples. For larger values of $S_{ij}$ and smaller values of $\sum_{ij} (x_{ri} - x_{rj})^{2}$, the value of $LS(r)$ tends to be smaller, meaning that the importance of the feature is higher. Therefore, the important features are selected according to $LS(r)$.
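To make the scoring step concrete, here is a small Python/NumPy sketch of the Laplacian Score exactly as written in equation (6) above (heat-kernel weights over pairwise feature differences, divided by the per-feature variance; smaller scores indicate more important features). It follows the simplified form given in this section rather than the full formulation of He et al. (2006), and the heat-kernel width t is an assumed, user-chosen constant.

```python
import numpy as np

def laplacian_score(X, t=1.0):
    """Score each feature of X (n samples x m features) using eq. (6).

    Returns an array of m scores; smaller scores indicate more important
    (more locality-preserving) features.
    """
    n, m = X.shape
    # pairwise squared Euclidean distances between samples
    G = X @ X.T
    sq = np.diag(G)[:, None] + np.diag(G)[None, :] - 2.0 * G
    S = np.exp(-sq / t)                    # heat-kernel weight matrix S_ij
    scores = np.empty(m)
    for r in range(m):
        xr = X[:, r]
        diff2 = (xr[:, None] - xr[None, :]) ** 2
        num = np.sum(S * diff2)            # sum_ij (x_ri - x_rj)^2 * S_ij
        var = xr.var()
        scores[r] = num / var if var > 0 else np.inf
    return scores

# Rank features: smallest Laplacian Score first.
X = np.random.rand(50, 200)                # toy data: 50 samples, 200 features
order = np.argsort(laplacian_score(X))
top_features = order[:20]
```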
| [
"21173440",
"24196982",
"23527177",
"22487984",
"30295871",
"14528274",
"27693191",
"30528509",
"30821315",
"30984238",
"27046494"
] | [
{
"pmid": "21173440",
"title": "Graph Regularized Nonnegative Matrix Factorization for Data Representation.",
"abstract": "Matrix factorization techniques have been frequently applied in information retrieval, computer vision, and pattern recognition. Among them, Nonnegative Matrix Factorization (NMF) has received considerable attention due to its psychological and physiological interpretation of naturally occurring data whose representation may be parts based in the human brain. On the other hand, from the geometric perspective, the data is usually sampled from a low-dimensional manifold embedded in a high-dimensional ambient space. One then hopes to find a compact representation,which uncovers the hidden semantics and simultaneously respects the intrinsic geometric structure. In this paper, we propose a novel algorithm, called Graph Regularized Nonnegative Matrix Factorization (GNMF), for this purpose. In GNMF, an affinity graph is constructed to encode the geometrical information and we seek a matrix factorization, which respects the graph structure. Our empirical study shows encouraging results of the proposed algorithm in comparison to the state-of-the-art algorithms on real-world problems."
},
{
"pmid": "24196982",
"title": "Robust subspace segmentation via low-rank representation.",
"abstract": "Recently the low-rank representation (LRR) has been successfully used in exploring the multiple subspace structures of data. It assumes that the observed data is drawn from several low-rank subspaces and sometimes contaminated by outliers and occlusions. However, the noise (low-rank representation residual) is assumed to be sparse, which is generally characterized by minimizing the l1 -norm of the residual. This actually assumes that the residual follows the Laplacian distribution. The Laplacian assumption, however, may not be accurate enough to describe various noises in real scenarios. In this paper, we propose a new framework, termed robust low-rank representation, by considering the low-rank representation as a low-rank constrained estimation for the errors in the observed data. This framework aims to find the maximum likelihood estimation solution of the low-rank representation residuals. We present an efficient iteratively reweighted inexact augmented Lagrange multiplier algorithm to solve the new problem. Extensive experimental results show that our framework is more robust to various noises (illumination, occlusion, etc) than LRR, and also outperforms other state-of-the-art methods."
},
{
"pmid": "23527177",
"title": "Identifying subspace gene clusters from microarray data using low-rank representation.",
"abstract": "Identifying subspace gene clusters from the gene expression data is useful for discovering novel functional gene interactions. In this paper, we propose to use low-rank representation (LRR) to identify the subspace gene clusters from microarray data. LRR seeks the lowest-rank representation among all the candidates that can represent the genes as linear combinations of the bases in the dataset. The clusters can be extracted based on the block diagonal representation matrix obtained using LRR, and they can well capture the intrinsic patterns of genes with similar functions. Meanwhile, the parameter of LRR can balance the effect of noise so that the method is capable of extracting useful information from the data with high level of background noise. Compared with traditional methods, our approach can identify genes with similar functions yet without similar expression profiles. Also, it could assign one gene into different clusters. Moreover, our method is robust to the noise and can identify more biologically relevant gene clusters. When applied to three public datasets, the results show that the LRR based method is superior to existing methods for identifying subspace gene clusters."
},
{
"pmid": "22487984",
"title": "Robust recovery of subspace structures by low-rank representation.",
"abstract": "In this paper, we address the subspace clustering problem. Given a set of data samples (vectors) approximately drawn from a union of multiple subspaces, our goal is to cluster the samples into their respective subspaces and remove possible outliers as well. To this end, we propose a novel objective function named Low-Rank Representation (LRR), which seeks the lowest rank representation among all the candidates that can represent the data samples as linear combinations of the bases in a given dictionary. It is shown that the convex program associated with LRR solves the subspace clustering problem in the following sense: When the data is clean, we prove that LRR exactly recovers the true subspace structures; when the data are contaminated by outliers, we prove that under certain conditions LRR can exactly recover the row space of the original data and detect the outlier as well; for data corrupted by arbitrary sparse errors, LRR can also approximately recover the row space with theoretical guarantees. Since the subspace membership is provably determined by the row space, these further imply that LRR can perform robust subspace clustering and error correction in an efficient and effective way."
},
{
"pmid": "30295871",
"title": "Multi-omic and multi-view clustering algorithms: review and cancer benchmark.",
"abstract": "Recent high throughput experimental methods have been used to collect large biomedical omics datasets. Clustering of single omic datasets has proven invaluable for biological and medical research. The decreasing cost and development of additional high throughput methods now enable measurement of multi-omic data. Clustering multi-omic data has the potential to reveal further systems-level insights, but raises computational and biological challenges. Here, we review algorithms for multi-omics clustering, and discuss key issues in applying these algorithms. Our review covers methods developed specifically for omic data as well as generic multi-view methods developed in the machine learning community for joint clustering of multiple data types. In addition, using cancer data from TCGA, we perform an extensive benchmark spanning ten different cancer types, providing the first systematic comparison of leading multi-omics and multi-view clustering algorithms. The results highlight key issues regarding the use of single- versus multi-omics, the choice of clustering strategy, the power of generic multi-view methods and the use of approximated p-values for gauging solution quality. Due to the growing use of multi-omics data, we expect these issues to be important for future progress in the field."
},
{
"pmid": "14528274",
"title": "Advantages and limitations of microarray technology in human cancer.",
"abstract": "Cancer is a highly variable disease with multiple heterogeneous genetic and epigenetic changes. Functional studies are essential to understanding the complexity and polymorphisms of cancer. The final deciphering of the complete human genome, together with the improvement of high throughput technologies, is causing a fundamental transformation in cancer research. Microarray is a new powerful tool for studying the molecular basis of interactions on a scale that is impossible using conventional analysis. This technique makes it possible to examine the expression of thousands of genes simultaneously. This technology promises to lead to improvements in developing rational approaches to therapy as well as to improvements in cancer diagnosis and prognosis, assuring its entry into clinical practice in specialist centers and hospitals within the next few years. Predicting who will develop cancer and how this disease will behave and respond to therapy after diagnosis will be one of the potential benefits of this technology within the next decade. In this review, we highlight some of the recent developments and results in microarray technology in cancer research, discuss potentially problematic areas associated with it, describe the eventual use of microarray technology for clinical applications and comment on future trends and issues."
},
{
"pmid": "27693191",
"title": "Differentially expressed genes selection via Laplacian regularized low-rank representation method.",
"abstract": "With the rapid development of DNA microarray technology and next-generation technology, a large number of genomic data were generated. So how to extract more differentially expressed genes from genomic data has become a matter of urgency. Because Low-Rank Representation (LRR) has the high performance in studying low-dimensional subspace structures, it has attracted a chunk of attention in recent years. However, it does not take into consideration the intrinsic geometric structures in data. In this paper, a new method named Laplacian regularized Low-Rank Representation (LLRR) has been proposed and applied on genomic data, which introduces graph regularization into LRR. By taking full advantages of the graph regularization, LLRR method can capture the intrinsic non-linear geometric information among the data. The LLRR method can decomposes the observation matrix of genomic data into a low rank matrix and a sparse matrix through solving an optimization problem. Because the significant genes can be considered as sparse signals, the differentially expressed genes are viewed as the sparse perturbation signals. Therefore, the differentially expressed genes can be selected according to the sparse matrix. Finally, we use the GO tool to analyze the selected genes and compare the P-values with other methods. The results on the simulation data and two real genomic data illustrate that this method outperforms some other methods: in differentially expressed gene selection."
},
{
"pmid": "30528509",
"title": "Laplacian regularized low-rank representation for cancer samples clustering.",
"abstract": "Cancer samples clustering based on biomolecular data has been becoming an important tool for cancer classification. The recognition of cancer types is of great importance for cancer treatment. In this paper, in order to improve the accuracy of cancer recognition, we propose to use Laplacian regularized Low-Rank Representation (LLRR) to cluster the cancer samples based on genomic data. In LLRR method, the high-dimensional genomic data are approximately treated as samples extracted from a combination of several low-rank subspaces. The purpose of LLRR method is to seek the lowest-rank representation matrix based on a dictionary. Because a Laplacian regularization based on manifold is introduced into LLRR, compared to the Low-Rank Representation (LRR) method, besides capturing the global geometric structure, LLRR can capture the intrinsic local structure of high-dimensional observation data well. And what is more, in LLRR, the original data themselves are selected as a dictionary, so the lowest-rank representation is actually a similar expression between the samples. Therefore, corresponding to the low-rank representation matrix, the samples with high similarity are considered to come from the same subspace and are grouped into a class. The experiment results on real genomic data illustrate that LLRR method, compared with LRR and MLLRR, is more robust to noise and has a better ability to learn the inherent subspace structure of data, and achieves remarkable performance in the clustering of cancer samples."
},
{
"pmid": "30821315",
"title": "SinNLRR: a robust subspace clustering method for cell type detection by non-negative and low-rank representation.",
"abstract": "MOTIVATION\nThe development of single-cell RNA-sequencing (scRNA-seq) provides a new perspective to study biological problems at the single-cell level. One of the key issues in scRNA-seq analysis is to resolve the heterogeneity and diversity of cells, which is to cluster the cells into several groups. However, many existing clustering methods are designed to analyze bulk RNA-seq data, it is urgent to develop the new scRNA-seq clustering methods. Moreover, the high noise in scRNA-seq data also brings a lot of challenges to computational methods.\n\n\nRESULTS\nIn this study, we propose a novel scRNA-seq cell type detection method based on similarity learning, called SinNLRR. The method is motivated by the self-expression of the cells with the same group. Specifically, we impose the non-negative and low rank structure on the similarity matrix. We apply alternating direction method of multipliers to solve the optimization problem and propose an adaptive penalty selection method to avoid the sensitivity to the parameters. The learned similarity matrix could be incorporated with spectral clustering, t-distributed stochastic neighbor embedding for visualization and Laplace score for prioritizing gene markers. In contrast to other scRNA-seq clustering methods, our method achieves more robust and accurate results on different datasets.\n\n\nAVAILABILITY AND IMPLEMENTATION\nOur MATLAB implementation of SinNLRR is available at, https://github.com/zrq0123/SinNLRR.\n\n\nSUPPLEMENTARY INFORMATION\nSupplementary data are available at Bioinformatics online."
},
{
"pmid": "30984238",
"title": "Simultaneous Interrogation of Cancer Omics to Identify Subtypes With Significant Clinical Differences.",
"abstract": "Recent advances in high-throughput sequencing have accelerated the accumulation of omics data on the same tumor tissue from multiple sources. Intensive study of multi-omics integration on tumor samples can stimulate progress in precision medicine and is promising in detecting potential biomarkers. However, current methods are restricted owing to highly unbalanced dimensions of omics data or difficulty in assigning weights between different data sources. Therefore, the appropriate approximation and constraints of integrated targets remain a major challenge. In this paper, we proposed an omics data integration method, named high-order path elucidated similarity (HOPES). HOPES fuses the similarities derived from various omics data sources to solve the dimensional discrepancy, and progressively elucidate the similarities from each type of omics data into an integrated similarity with various high-order connected paths. Through a series of incremental constraints for commonality, HOPES can take both specificity of single data and consistency between different data types into consideration. The fused similarity matrix gives global insight into patients' correlation and efficiently distinguishes subgroups. We tested the performance of HOPES on both a simulated dataset and several empirical tumor datasets. The test datasets contain three omics types including gene expression, DNA methylation, and microRNA data for five different TCGA cancer projects. Our method was shown to achieve superior accuracy and high robustness compared with several benchmark methods on simulated data. Further experiments on five cancer datasets demonstrated that HOPES achieved superior performances in cancer classification. The stratified subgroups were shown to have statistically significant differences in survival. We further located and identified the key genes, methylation sites, and microRNAs within each subgroup. They were shown to achieve high potential prognostic value and were enriched in many cancer-related biological processes or pathways."
},
{
"pmid": "27046494",
"title": "Laplacian Regularized Low-Rank Representation and Its Applications.",
"abstract": "Low-rank representation (LRR) has recently attracted a great deal of attention due to its pleasing efficacy in exploring low-dimensional subspace structures embedded in data. For a given set of observed data corrupted with sparse errors, LRR aims at learning a lowest-rank representation of all data jointly. LRR has broad applications in pattern recognition, computer vision and signal processing. In the real world, data often reside on low-dimensional manifolds embedded in a high-dimensional ambient space. However, the LRR method does not take into account the non-linear geometric structures within data, thus the locality and similarity information among data may be missing in the learning process. To improve LRR in this regard, we propose a general Laplacian regularized low-rank representation framework for data representation where a hypergraph Laplacian regularizer can be readily introduced into, i.e., a Non-negative Sparse Hyper-Laplacian regularized LRR model (NSHLRR). By taking advantage of the graph regularizer, our proposed method not only can represent the global low-dimensional structures, but also capture the intrinsic non-linear geometric information in data. The extensive experimental results on image clustering, semi-supervised image classification and dimensionality reduction tasks demonstrate the effectiveness of the proposed method."
}
] |
Frontiers in Neuroscience | 32038151 | PMC6989613 | 10.3389/fnins.2020.00001 | Multi-Task Network Representation Learning | Networks, such as social networks, biochemical networks, and protein-protein interaction networks are ubiquitous in the real world. Network representation learning aims to embed nodes in a network as low-dimensional, dense, real-valued vectors, and facilitate downstream network analysis. The existing embedding methods commonly endeavor to capture structure information in a network, but lack of consideration of subsequent tasks and synergies between these tasks, which are of equal importance for learning desirable network representations. To address this issue, we propose a novel multi-task network representation learning (MTNRL) framework, which is end-to-end and more effective for underlying tasks. The original network and the incomplete network share a unified embedding layer followed by node classification and link prediction tasks that simultaneously perform on the embedding vectors. By optimizing the multi-task loss function, our framework jointly learns task-oriented embedding representations for each node. Besides, our framework is suitable for all network embedding methods, and the experiment results on several benchmark datasets demonstrate the effectiveness of the proposed framework compared with state-of-the-art methods. | 2. Related Work and Motivation
2.1. Network Representation Learning
Recently, network representation learning has attracted increasing research attention in various fields. Existing network representation learning techniques can roughly be divided into unsupervised and semi-supervised approaches. Given a complex network in which all nodes are unlabeled, unsupervised methods learn node representations by optimizing a carefully designed objective that captures proximities and topology in the network graph, which can facilitate identifying the class labels of the nodes. DeepWalk (Perozzi et al., 2014) regards the sequences of nodes generated by random walks (Tong et al., 2006) as sentences and the nodes in a sequence as words in a text, and obtains node representations by optimizing the Skip-Gram model (Lazaridou et al., 2015). LINE (Tang et al., 2015) characterizes the first-order proximity observed from the connections among nodes, and preserves the second-order proximity by counting the common neighbors of two nodes that have no direct connection. Node2vec (Grover and Leskovec, 2016) extends the DeepWalk algorithm by introducing a pair of hyper-parameters that add flexibility in exploring neighborhoods, and generates random walk sequences using breadth-first search (Beamer et al., 2013) and depth-first search (Barták, 2004). Unsupervised learning begins with clustering and then characterization, whereas supervised learning carries out classification and characterization simultaneously. Semi-supervised learning is a classic machine learning paradigm between supervised and unsupervised learning: a small amount of labeled data and a large number of unlabeled data are used to train the learning model. In practice, it is arduous to obtain a great deal of labeled data, and semi-supervised learning can improve the performance of purely supervised learning algorithms by modeling the distribution of the unlabeled data. Therefore, semi-supervised learning has received considerable attention in recent years.
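As a concrete illustration of the random-walk-based embedding methods described above (DeepWalk and node2vec), the following sketch generates uniform random-walk "sentences" from an adjacency list. This is an assumed minimal example rather than code from the cited papers; the cited methods differ in their sampling strategies (for instance node2vec's biased walks), and the resulting walks would subsequently be fed to a Skip-Gram (word2vec-style) model to obtain node embeddings.

```python
import random

def generate_walks(adj, num_walks=10, walk_length=40, seed=0):
    """Uniform random walks over an adjacency list {node: [neighbours]}.

    Each walk is returned as a list of node ids (as strings) so that it can
    be treated as a 'sentence' by any Skip-Gram / word2vec implementation.
    """
    rng = random.Random(seed)
    nodes = list(adj)
    walks = []
    for _ in range(num_walks):
        rng.shuffle(nodes)                 # a different node order per pass
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                neighbours = adj[walk[-1]]
                if not neighbours:
                    break
                walk.append(rng.choice(neighbours))
            walks.append([str(v) for v in walk])
    return walks

# Toy graph: a 4-cycle.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
walks = generate_walks(adj, num_walks=2, walk_length=5)
```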
Semi-supervised learning methods use a small set of labeled nodes, together with the remaining unlabeled nodes, to learn high-quality node representations under the supervision of the labeled nodes. For example, graph convolutional networks (GCN) (Kipf and Welling, 2017) generalize the original convolutional neural networks from grid-like images to non-grid graphs by considering a localized first-order approximation of spectral graph convolutions to encode the graph structure, and optimize the cross-entropy loss over the labeled node examples for semi-supervised node classification. Given a graph composed of instance nodes, Planetoid (Yang et al., 2016) presents a semi-supervised learning framework based on graph embeddings that trains an embedding for each instance to jointly predict the class label and the neighborhood context in the graph. This method has both transductive and inductive variants; in the inductive variant, the embeddings are defined as a parametric function of the feature vectors, so predictions can be made on instances not seen during training. GraphSAGE (Hamilton et al., 2017) is an inductive network representation learning framework that learns an embedding function for generating node representations by sampling a fixed-size set of neighbors of each node and then applying a specific aggregator over the neighboring nodes (such as the mean of all the sampled neighbors' feature vectors, or the result of feeding them through a recurrent neural network). Graph attention networks (GAT) (Veličković et al., 2018) operate on graph-structured data, leveraging masked self-attentional layers (Zhang et al., 2018) to address the shortcomings of prior methods based on graph convolutions. These methods are all implemented as a single task, but multi-task learning can be used to improve the performance of multiple tasks simultaneously.
2.2. Multi-Task Learning
Multi-task learning is a promising area of machine learning that leverages the useful information contained in multiple learning tasks to help learn each task more accurately. Multi-task learning can learn more than one task simultaneously, because each task can take advantage of the knowledge of other related tasks. Traditional multi-task learning methods (Doersch and Zisserman, 2017) can be classified into several kinds, including multi-task supervised learning, multi-task unsupervised learning (Kim et al., 2017), and multi-task semi-supervised learning (Zhuang et al., 2015). Multi-task supervised learning implies that each task in multi-task learning is a supervised learning task, which models the function mapping from examples to labels. Different from multi-task supervised learning with labeled examples, the training set of multi-task unsupervised learning consists only of unlabeled examples, from which the information contained in the dataset is mined.
2.3. Motivation
In many practical applications, there is usually only a small amount of labeled graph data, because manual annotation costs considerable labor and time (Navon and Goldschmidt, 2003). For example, in biology, the structural and functional analysis of a protein network may take a long time, while large amounts of unlabeled data are easily available. Hence, semi-supervised learning methods are widely used to improve the learning performance of graph analysis.
Unfortunately, all of the aforementioned semi-supervised learning methods applied on graphs, such as GCN, GraphSAGE, and GAT, only learn the latent node representations in a single-task-oriented manner and lack consideration of the synergy among subsequent graph analytic tasks. In reality, node classification and link prediction tasks usually share some common characteristics and can be conducted simultaneously so that they facilitate each other. As far as we know, the only existing work is the local neighborhood graph autoencoder (LoNGAE, αLoNGAE), which implements multi-task network representation learning based on a densely connected symmetrical autoencoder and is model-dependent. The model utilizes parameter sharing between the encoder and decoder to learn expressive non-linear latent node representations from local graph neighborhoods. Motivated by this, we propose a general multi-task network representation learning (MTNRL) framework, which is model-agnostic and can be applied to arbitrary network representation models. It jointly optimizes the losses of the two tasks to learn desirable node representations, with node classification and link prediction performed on the shared embedding vectors.
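The joint-loss idea motivating multi-task network representation learning can be sketched as follows. This is a minimal, assumed PyTorch illustration (a shared embedding table, a node-classification head and an inner-product link-prediction decoder, combined with an assumed trade-off weight alpha); it is not the MTNRL authors' actual architecture, loss weighting or training procedure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedEmbeddingMultiTask(nn.Module):
    """Shared node embeddings with two task heads:
    node classification (linear layer) and link prediction (inner product)."""
    def __init__(self, num_nodes, dim, num_classes):
        super().__init__()
        self.emb = nn.Embedding(num_nodes, dim)   # shared representation layer
        self.clf = nn.Linear(dim, num_classes)    # node-classification head

    def forward(self, labelled_nodes, edge_pairs):
        logits = self.clf(self.emb(labelled_nodes))          # class logits
        zi = self.emb(edge_pairs[:, 0])
        zj = self.emb(edge_pairs[:, 1])
        link_scores = (zi * zj).sum(dim=-1)                  # edge scores
        return logits, link_scores

def joint_loss(logits, labels, link_scores, link_targets, alpha=0.5):
    """Weighted sum of the two task losses; alpha is an assumed trade-off."""
    ce = F.cross_entropy(logits, labels)                     # node classification
    bce = F.binary_cross_entropy_with_logits(link_scores, link_targets)
    return alpha * ce + (1.0 - alpha) * bce

# Toy usage: 100 nodes, 16-dimensional embeddings, 3 classes.
model = SharedEmbeddingMultiTask(100, 16, 3)
nodes = torch.arange(10)                                     # labelled nodes
labels = torch.randint(0, 3, (10,))
edges = torch.randint(0, 100, (20, 2))                       # candidate edges
targets = torch.randint(0, 2, (20,)).float()                 # 1 = edge exists
logits, scores = model(nodes, edges)
loss = joint_loss(logits, labels, scores, targets)
loss.backward()
```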
| [
"26017442",
"28778026"
] | [
{
"pmid": "26017442",
"title": "Deep learning.",
"abstract": "Deep learning allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction. These methods have dramatically improved the state-of-the-art in speech recognition, visual object recognition, object detection and many other domains such as drug discovery and genomics. Deep learning discovers intricate structure in large data sets by using the backpropagation algorithm to indicate how a machine should change its internal parameters that are used to compute the representation in each layer from the representation in the previous layer. Deep convolutional nets have brought about breakthroughs in processing images, video, speech and audio, whereas recurrent nets have shone light on sequential data such as text and speech."
},
{
"pmid": "28778026",
"title": "A survey on deep learning in medical image analysis.",
"abstract": "Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks. Concise overviews are provided of studies per application area: neuro, retinal, pulmonary, digital pathology, breast, cardiac, abdominal, musculoskeletal. We end with a summary of the current state-of-the-art, a critical discussion of open challenges and directions for future research."
}
] |
Scientific Reports | 31996726 | PMC6989685 | 10.1038/s41598-020-58208-y | Optimisation of monolithic nanocomposite and transparent ceramic scintillation detectors for positron emission tomography | High-resolution arrays of discrete monocrystalline scintillators used for gamma photon coincidence detection in PET are costly and complex to fabricate, and exhibit intrinsically non-uniform sensitivity with respect to emission angle. Nanocomposites and transparent ceramics are two alternative classes of scintillator materials which can be formed into large monolithic structures, and which, when coupled to optical photodetector arrays, may offer a pathway to low cost, high-sensitivity, high-resolution PET. However, due to their high optical attenuation and scattering relative to monocrystalline scintillators, these materials exhibit an inherent trade-off between detection sensitivity and the number of scintillation photons which reach the optical photodetectors. In this work, a method for optimising scintillator thickness to maximise the probability of locating the point of interaction of 511 keV photons in a monolithic scintillator within a specified error bound is proposed and evaluated for five nanocomposite materials (LaBr3:Ce-polystyrene, Gd2O3-polyvinyl toluene, LaF3:Ce-polystyrene, LaF3:Ce-oleic acid and YAG:Ce-polystyrene) and four ceramics (GAGG:Ce, GLuGAG:Ce, GYGAG:Ce and LuAG:Pr). LaF3:Ce-polystyrene and GLuGAG:Ce were the best-performing nanocomposite and ceramic materials, respectively, with maximum sensitivities of 48.8% and 67.8% for 5 mm localisation accuracy with scintillator thicknesses of 42.6 mm and 27.5 mm, respectively. | Related WorkNanocomposite scintillatorsNanocomposite scintillators consist of a mixture of nanometre-scale inorganic scintillating particles, uniformly mixed with a transparent polymer or other organic matrix. The matrix can either be non-scintillating, such as oleic acid (OA), or a polymer scintillator, such as polystyrene (PS) or polyvinyltoluene (PVT)23. Because the gaps between scintillator nanoparticles are filled with a material with a refractive index closer to that of the scintillator than air (or vacuum), scattering is limited and light transmission is improved compared to the use of a compressed bulk powder19. Nevertheless, no matrix material has a refractive index which perfectly matches that of the nanocrystal; therefore, scattering remains a performance-limiting factor, particularly in comparison to monocrystalline scintillator materials. For hygroscopic materials, the matrix offers the additional advantage of protecting the nanoparticles against moisture ingress.Accurately modelling the optical properties of a nanocomposite is considerably more complex than for a homogeneous material. Most useful nanoscintillator particles are sufficiently small that scattering is purely Rayleigh (i.e., satisfying the criteria \documentclass[12pt]{minimal}
\usepackage{amsmath}
\usepackage{wasysym}
$m\cdot x<1$, where $m=n_{p}/n_{m}$ is the ratio of the refractive index of the scintillator particles to that of the matrix, $x=2\pi r/\lambda$, and $\lambda=\lambda_{0}/n_{m}$, where $\lambda_{0}$ is the peak emission wavelength of the scintillator30). If this condition is satisfied, the transmission of scintillation photons may be calculated via Eq. (1)23–26:

$$T=\frac{I}{I_{0}}=\exp\left\{-4\pi^{4}\frac{d^{3}\bar{L}f_{p}n_{m}^{4}}{\lambda^{4}}\left(\frac{m^{2}-1}{m^{2}+2}\right)^{2}\right\}\qquad(1)$$

where $I/I_{0}$ is the ratio of the output light intensity ($I$) to the emission light intensity ($I_{0}$), $d$ is the diameter of the nanoparticle, $f_{p}$ is the volume fraction of particles, $\bar{L}$ is the average path length and $\lambda$ is the wavelength of the scintillation photons. This expression assumes that the nanoparticle diameter is uniform, that there is no agglomeration of particles in the matrix, and that the photons emitted during scintillation are all of exactly the same wavelength.

In practice, none of these assumptions is strictly accurate. Particle agglomeration degrades uniformity when the nanocomposite loading factor is high, creating additional scattering losses31. If there is some overlap between the absorption spectrum of the matrix and the luminescence spectrum of the nanoscintillator, self-absorption of scintillation photons may occur; some polymer matrix materials are known to exhibit this behaviour, especially in the UV region26,32. Despite these limitations, Eq. (1) provides a good approximation of the optical transmission of scintillation photons for the majority of nanocomposite materials.

The refractive index of the nanocomposite ($n_{c}$) is a function of the nanoparticle loading factor - in fact, much research has been conducted into the problem of using inorganic nanoparticles to effectively tune the refractive index of an organic material. For many nanocomposite scintillators, an effective medium model based on Maxwell-Garnett theory can be used to estimate the composite refractive index using an additive linear approximation, as shown in Eq. (2)33–35:

$$n_{c}=n_{m}f_{m}+n_{p}f_{p}\qquad(2)$$

where $n_{m}$ and $n_{p}$ are the refractive indices of the matrix and nanoparticles, respectively, and $f_{m}$ and $f_{p}$ represent the fractions of each material by volume. However, while a large number of nanocomposite scintillator materials are known to follow this linear relationship between volume fraction and overall refractive index, others do not. Instead, it is often necessary to employ empirical methods. For instance, a TiO2-polystyrene nanocomposite synthesised by Rao et al.36 is best described by a quadratic equation. The authors hypothesise that strong chemical bonding between the two materials changed the polarisability of the polymer, resulting in the observed behaviour. Other factors which can impact upon the refractive index include the diameter of the nanoparticles; for PbS-gelatine, there was a significant drop in refractive index compared to the bulk material when the mean diameter of the nanoparticle component was less than 25 nm35. Also, if a surfactant coating with a lower refractive index than the nanoparticle is applied to improve the uniformity of the nanocomposite distribution, it may significantly contribute to the overall volume37 and hence alter the refractive index.

Using Eqs. (1) and (2), it is possible to identify the desirable properties for suitable nanoscintillator and matrix materials. These include closely matching scintillator and matrix refractive indices, small nanoparticle size (<10 nm), long emission wavelength and non-overlapping emission and absorption spectra, each of which will help to maximise transmission of the scintillation light23,38. For most imaging applications, high light yield per MeV of gamma photon energy, good energy resolution, and a short time constant are also desirable9. In addition, the nanocomposite needs to exhibit a high linear attenuation coefficient so as to maximise its sensitivity to high-energy gamma radiation. This requires the nanocomposite to use a high-ρ, high-Zeff nanoscintillator material at a high loading factor. Unfortunately, a high loading factor also reduces the optical transmittivity of the nanocomposite. Supplementary Section 1 describes a heuristic approach to choosing a nanocomposite loading factor which balances the attenuation of the material with its optical transmittivity (see Supplementary Figs. 1 and 2).
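The trade-off captured by Eqs. (1) and (2) is easy to explore numerically. The short Python sketch below evaluates the Rayleigh-scattering transmission and the additive effective-medium refractive index for a hypothetical, closely index-matched composite; every parameter value in it (9 nm particles, a 1 cm average path length, 50% loading, and refractive indices of 1.59 and 1.60) is an illustrative assumption, not a value taken from the studies cited above.

```python
import math

def composite_refractive_index(n_m, n_p, f_p):
    """Additive effective-medium estimate, Eq. (2): n_c = n_m*f_m + n_p*f_p."""
    return n_m * (1.0 - f_p) + n_p * f_p

def rayleigh_transmission(d, L_bar, f_p, n_m, n_p, lambda_0):
    """Transmission of scintillation light through a nanocomposite slab, Eq. (1).

    d        -- nanoparticle diameter
    L_bar    -- average optical path length (same length unit as d and lambda_0)
    f_p      -- nanoparticle volume fraction
    n_m, n_p -- refractive indices of the matrix and the particles
    lambda_0 -- peak emission wavelength in vacuum
    """
    m = n_p / n_m                    # relative refractive index
    lam = lambda_0 / n_m             # wavelength inside the matrix
    x = 2.0 * math.pi * (d / 2.0) / lam
    if m * x >= 1.0:                 # validity condition m*x < 1
        raise ValueError("Rayleigh approximation is not valid for these parameters")
    exponent = (-4.0 * math.pi ** 4
                * d ** 3 * L_bar * f_p * n_m ** 4 / lam ** 4
                * ((m ** 2 - 1.0) / (m ** 2 + 2.0)) ** 2)
    return math.exp(exponent)

# Illustrative (assumed) parameters, in metres: 9 nm particles, a 1 cm slab,
# 50% loading and a closely index-matched particle/matrix pair.
d, L_bar, lambda_0 = 9e-9, 1e-2, 380e-9
n_m, n_p, f_p = 1.59, 1.60, 0.50

T = rayleigh_transmission(d, L_bar, f_p, n_m, n_p, lambda_0)
print("n_c =", composite_refractive_index(n_m, n_p, f_p))
print("T =", T, "  equivalent alpha (cm^-1) =", -math.log(T) / (L_bar * 100.0))
```

With well-matched indices the predicted loss over 1 cm is only a few per cent; repeating the calculation with a larger index mismatch, a larger particle diameter or a higher loading factor rapidly drives the transmission towards zero, which is the trade-off described above.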
In this work, several recent nanocomposites which may be feasible for fabrication into thick monolithic slabs are compared, with a particular focus on materials with a high nanoparticle loading factor. The material properties are summarised in Table 123,26,39–42. In all nanocomposites with the exception of Gd2O3-PVT, scintillation occurs directly within the nanoparticle. In the case of Gd2O3-PVT, the optical photons are emitted via a two-step process which involves the creation of excitons on the nanoparticle surface, which subsequently transfer energy to the surrounding PVT matrix; the PVT matrix then transfers energy to a fluor compound via fluorescence resonance energy transfer (FRET)26. Both interactions are extremely short-range (of the order of a few nanometres) and should not significantly impact the use of the material as a monolithic scintillator26,43.

Table 1. Physical and optical properties of a range of proposed nanocomposite materials.

Nanoparticle / Matrix   LaBr3:Ce PS   Gd2O3 PVT   LaF3:Ce OA   LaF3:Ce PS   YAG:Ce PS
Load (% Vol.)           19            4.6         34           50           50
Peak λ (nm)             380+          550         334          334          550+
Yield (ph/keV)          63+           22          4.5+         4.5+         20.3+
R (% @662 keV)          2.6+          11.4        16+          16+          11.1+
Decay (ns)              16+           17          30+          30+          87.9+
ρ (g/cm3)               1.81*         1.34*       2.59*        3.47*        2.81*
n (@peak λ)             1.69*         1.56*       1.52*        1.65*        1.72*
α (cm−1)                2.00*         0.09*       2.05*        0.15*        0.95*
Refs.                   39,42         26,39       23,39,40     23,39,40     39,41

Most properties listed are found from the literature; several parameters, marked with an *, have been estimated from the volume fractions listed, assuming a 1 cm thick slab with 9 nm diameter nanoparticles. The properties listed with a + were taken from the bulk crystalline equivalent of the nanoparticle. R is the energy resolution; ρ is the material density; α is the optical linear attenuation coefficient at the peak emission wavelength. PS is polystyrene; PVT is polyvinyl toluene; OA is oleic acid.

Transparent ceramic scintillators

Monocrystalline doped synthetic garnets, such as yttrium aluminium garnet (YAG) and lutetium aluminium garnet (LuAG), have been used for decades in photonics applications such as optically pumped lasers44–47. Monocrystalline synthetic garnet scintillators have also been proposed for use in medical imaging applications such as PET due to their high density, high scintillation yield and good optical transparency9,48. Recent progress in fabrication techniques has enabled the synthesis of certain garnets as optically transparent polycrystalline ceramics49. Such ceramics retain many of the properties of the equivalent monocrystalline material (in particular, the high density and linear attenuation coefficient), and may offer a higher scintillation yield in some cases27. The principal benefit of multicrystalline transparent ceramics is easier and more cost-effective fabrication, and the flexibility of being able to form the precursor powder into arbitrary shapes prior to sintering - a near-impossibility with monocrystalline materials.

The optical and scintillation properties of transparent ceramics are heavily influenced by the specific manufacturing process used. Usually, the ceramic is formed by sintering, whereby a precursor material is calcinated and milled into nanoparticles (∼100–200 nm) before being pressed and sintered at around 1500–1700 °C50,51. The resulting solid may then undergo hot isostatic pressing (HIP) to reduce porosity, followed by annealing and polishing to create a highly transparent solid52,53. The atmosphere in which the sintering and annealing steps are performed has a significant effect on the transparency of the resulting ceramic, and its propensity to exhibit afterglow effects54.
Compared to the Czochralski, Kyropoulos or Bridgman-Stockbarger methods used to grow large single-crystal scintillators, the process of creating transparent ceramic scintillators is much more amenable to the production of complex geometries and to scaling up to large volumes29. Growth of high-quality, defect-free monocrystalline scintillators with consistent characteristics requires extremely high, uniform and well-regulated temperatures and atmospheric conditions28. Additionally, the maximum dimensions of monocrystalline scintillators are limited both by the maximum dimensions of the ampoules (in the Bridgman-Stockbarger method) and by the difficulty in maintaining a stable, uniform thermal gradient across a wide crystallisation zone55,56. The ability of the ceramics to be pressed into different moulded shapes eliminates most of these problems, and the lower temperatures involved mean that material losses can be more easily controlled, since there is no risk of evaporation of the melt.

A significant challenge in the production of ceramic scintillators is achieving high optical transparency in the finished product. Cherepy et al. proposed a method to reduce the residual porosity of the ceramic with hot isostatic pressing, greatly improving optical transparency57. Optical properties are also improved through the selection of high-purity precursor materials with cubic/isotropic crystal structures (i.e. no birefringence), which reduces optical scatter caused by grain boundaries and secondary phases57. The properties of several ceramic garnet scintillators are listed in Table 251–54,57–60.

Table 2. Properties of several transparent ceramic scintillator materials proposed for radiation detection applications.

Ceramic           GYGAG:Ce   GLuGAG:Ce   GAGG:Ce   LuAG:Pr
Peak λ (nm)       550        550         530       310
Yield (ph/keV)    50         48.2        70        21.8
R (% @662 keV)    4.9        7.1         4.9       4.6
1st decay (ns)    100        84          90        21.4
2nd decay (ns)    500        148         194       771
ρ (g/cm3)         5.8        6.9         6.63      6.73
n (@peak λ)       1.82       1.92        1.90*     2.03*
α (cm−1)          0.10       2.00        3.13*     2.86*
Refs.             53,54,57   52,58       59,60     51,60

Properties marked with * have either been calculated or were obtained from literature pertaining to the equivalent monocrystalline form of the material. R is the energy resolution; ρ is the material density; α is the optical linear attenuation coefficient at the peak emission wavelength.

Monolithic scintillator interaction localisation

Several methods have been proposed to estimate the point of interaction between a gamma photon and a monolithic scintillator using the distribution of optical photons exiting one or more surfaces of the monolithic slab10,15,16,61. Statistical algorithms such as maximum likelihood (ML)10,61, chi-squared (χ2)62 or k-nearest-neighbour (kNN)62–64 have been used to fit the optical photon distribution to a library of reference profiles, and thereby locate the point of interaction. However, generating the reference profiles requires either laborious experimental measurements or extensive Monte Carlo simulations, since a large number of events must be recorded over a range of known positions and angles of incidence - an effort which must be repeated for scintillators with different dimensions or material properties. An alternative approach is to fit an analytic model of the optical photon distribution using an optimisation algorithm which minimises the mean squared error between the observed distribution and the analytic prediction65,66. Where an exact analytic model is difficult or impractical to derive due to complex optical properties of the material, neural networks can be trained to automatically account for non-linearities in the relationship between the observed photon distribution and the point of interaction, particularly near slab boundaries67. However, the training procedure would need to be repeated if the material or slab dimensions change. Therefore, while the neural-network approach theoretically offers superior performance, in this work the simpler analytic model-fitting approach is adopted, since the retraining process can be avoided.

The optical photon distribution which will be observed depends on the nature of the interaction of the gamma photon with the scintillator. For cases where the photon is photoelectrically absorbed, all of the optical photons will be emitted from a single point, with the number of optical photons proportional to the energy of the gamma photon. A similar outcome will result from Compton interactions where the scattered photon escapes from the scintillator slab; in this case, the number of photons emitted at the point of interaction will be proportional to the difference in energy between the incident and scattered photon. These two cases will be the easiest to localise due to the simplicity of the interaction; the scintillation event may be treated as an isotropic point source66,67. In a nanocomposite or ceramic garnet scintillator, an attenuative factor is included to account for the imperfect transparency of the material68.
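To make the adopted model-fitting approach concrete, the following minimal Python sketch fits an isotropic point-source model of this kind to a simulated photon-count distribution on a pixelated photodetector using non-linear least squares (SciPy). The 8 x 8 geometry, 6 mm pixel pitch, depth range and Poisson noise model are assumptions made purely for illustration rather than parameters taken from the cited studies, and the attenuative factor mentioned above is omitted for brevity.

```python
import numpy as np
from scipy.optimize import least_squares

# Assumed geometry: an 8 x 8 array of 6 mm pixels on the exit face of the slab (z = 0),
# with the interaction point at a depth z0 above the photodetector plane.
PITCH = 6.0                                   # assumed pixel pitch (mm)
CENTRES = (np.arange(8) - 3.5) * PITCH        # pixel centre coordinates (mm)
PX, PY = np.meshgrid(CENTRES, CENTRES)

def expected_counts(params):
    """Isotropic point-source model: counts on each pixel are proportional to the
    solid angle the pixel subtends at the interaction point (x0, y0, z0)."""
    x0, y0, z0, amplitude = params
    r2 = (PX - x0) ** 2 + (PY - y0) ** 2 + z0 ** 2
    solid_angle = PITCH ** 2 * z0 / (4.0 * np.pi * r2 ** 1.5)
    return amplitude * solid_angle

def localise(measured):
    """Estimate (x, y, depth, amplitude) by least-squares fitting of the model."""
    start = [0.0, 0.0, 5.0, measured.sum()]
    fit = least_squares(lambda p: (expected_counts(p) - measured).ravel(), start,
                        bounds=([-24.0, -24.0, 0.5, 0.0], [24.0, 24.0, 10.0, np.inf]))
    return fit.x

# Toy demonstration: simulate an event 3 mm above the photodetector at (4.2, -7.5) mm,
# add Poisson counting noise, then recover the interaction position from the counts.
rng = np.random.default_rng(0)
truth = np.array([4.2, -7.5, 3.0, 20000.0])
measured = rng.poisson(expected_counts(truth)).astype(float)
print("estimated x, y, depth (mm):", np.round(localise(measured)[:3], 2))
```

A chi-squared or maximum-likelihood cost function can be substituted for the plain least-squares residual, and in an imperfectly transparent nanocomposite or ceramic the model prediction would additionally be weighted by the optical transmission term discussed earlier.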
Where multiple energy depositions occur within the scintillator (for example, a Compton interaction followed by photoelectric absorption of the scattered photon), the optical photon distribution becomes more complex, and accurately fitting an analytic model to the observed optical photon distribution is much harder. It is therefore expected that these interactions will result in a larger position estimation error compared to pure photoelectric events. | [
"10855634",
"26687853",
"20497121",
"29852942",
"19443953",
"18218398",
"19265203",
"21693789",
"12030568",
"20533832",
"12615908",
"17404465",
"20959686",
"25884464",
"15552416",
"21248393",
"20443512",
"20182005"
] | [
{
"pmid": "10855634",
"title": "Scintillation crystals for PET.",
"abstract": "In PET, inorganic scintillator crystals are used to record gamma-rays produced by the annihilation of positrons emitted by injected tracers. The ultimate performance of the camera is strongly tied to both the physical and scintillation properties of the crystals. For this reason, researchers have investigated virtually all known scintillator crystals for possible use in PET. Despite this massive research effort, only a few different scintillators have been found that have a suitable combination of characteristics, and only 2 (thallium-doped sodium iodide and bismuth germanate) have found widespread use. A recently developed scintillator crystal, cerium-doped lutetium oxyorthosilicate, appears to surpass all previously used materials in most respects and promises to be the basis for the next generation of PET cameras."
},
{
"pmid": "26687853",
"title": "Recent Advances and Future Progress in PET Instrumentation.",
"abstract": "PET is an important and growing imaging modality. PET instrumentation has undergone a steady evolution improving various aspects of imaging. In this review, we discuss recent and future software and hardware technologies for PET/CT. The improvements include new hardware, incorporating designs with digital photomultipliers, and fast electronics, allowing implementation of time-of-flight reconstruction. Manufacturers also improved PET sensitivity with a larger axial field of view and 3D imaging. On the CT side, faster scanners and multislice detectors allow implementation of advanced acquisition protocols such as 4D CT and coronary CT angiography. Significant advances have been also made in the reconstruction software, now integrating resolution recovery with advanced iterative techniques. New PET acquisition protocols have been enabled to include continuous bed motion. Efforts have been undertaken to compensate PET scans for respiratory and also for cardiac patient motion (for cardiac imaging) during PET imaging, which significantly improves overall image quality and resolution. Finally, simultaneous PET/MR systems have been recently deployed clinically and now offer even greater potential of image quality and enhanced clinical utility. PET/MR imaging allows for perfectly registered attenuation maps, clinically important complementary MR information, and potentially superior motion correction. These recent multifaceted advances allow PET to remain as one of the most exciting and relevant imaging technologies."
},
{
"pmid": "20497121",
"title": "Recent development in PET instrumentation.",
"abstract": "Positron emission tomography (PET) is used in the clinic and in vivo small animal research to study molecular processes associated with diseases such as cancer, heart disease, and neurological disorders, and to guide the discovery and development of new treatments. This paper reviews current challenges of advancing PET technology and some of newly developed PET detectors and systems. The paper focuses on four aspects of PET instrumentation: high photon detection sensitivity; improved spatial resolution; depth-of-interaction (DOI) resolution and time-of-flight (TOF). Improved system geometry, novel non-scintillator based detectors, and tapered scintillation crystal arrays are able to enhance the photon detection sensitivity of a PET system. Several challenges for achieving high resolution with standard scintillator-based PET detectors are discussed. Novel detectors with 3-D positioning capability have great potential to be deployed in PET for achieving spatial resolution better than 1 mm, such as cadmium-zinc-telluride (CZT) and position-sensitive avalanche photodiodes (PSAPDs). DOI capability enables a PET system to mitigate parallax error and achieve uniform spatial resolution across the field-of-view (FOV). Six common DOI designs, as well as advantages and limitations of each design, are discussed. The availability of fast scintillation crystals such as LaBr(3), and the silicon photomultiplier (SiPM) greatly advances TOF-PET development. Recent instrumentation and initial results of clinical trials are briefly presented. If successful, these technology advances, together with new probe molecules, will substantially enhance the molecular sensitivity of PET and thus increase its role in preclinical and clinical research as well as evaluating and managing disease in the clinic."
},
{
"pmid": "29852942",
"title": "Innovations in Instrumentation for Positron Emission Tomography.",
"abstract": "PET scanners are sophisticated and highly sensitive biomedical imaging devices that can produce highly quantitative images showing the 3-dimensional distribution of radiotracers inside the body. PET scanners are commonly integrated with x-ray CT or MRI scanners in hybrid devices that can provide both molecular imaging (PET) and anatomical imaging (CT or MRI). Despite decades of development, significant opportunities still exist to make major improvements in the performance of PET systems for a variety of clinical and research tasks. These opportunities stem from new ideas and concepts, as well as a range of enabling technologies and methodologies. In this paper, we review current state of the art in PET instrumentation, detectors and systems, describe the major limitations in PET as currently practiced, and offer our own personal insights into some of the recent and emerging technological innovations that we believe will impact the field. Our focus is on the technical aspects of PET imaging, specifically detectors and system design, and the opportunity and necessity to move closer to PET systems for diagnostic patient use and in vivo biomedical research that truly approach the physical performance limits while remaining mindful of imaging time, radiation dose, and cost. However, other key endeavors, which are not covered here, including innovations in reconstruction and modeling methodology, radiotracer development, and expanding the range of clinical and research applications, also will play an equally important, if not more important, role in defining the future of the field."
},
{
"pmid": "19443953",
"title": "A novel, SiPM-array-based, monolithic scintillator detector for PET.",
"abstract": "Silicon photomultipliers (SiPMs) are of great interest to positron emission tomography (PET), as they enable new detector geometries, for e.g., depth-of-interaction (DOI) determination, are MR compatible, and offer faster response and higher gain than other solid-state photosensors such as avalanche photodiodes. Here we present a novel detector design with DOI correction, in which a position-sensitive SiPM array is used to read out a monolithic scintillator. Initial characterization of a prototype detector consisting of a 4 x 4 SiPM array coupled to either the front or back surface of a 13.2 mm x 13.2 mm x 10 mm LYSO:Ce(3+) crystal shows that front-side readout results in significantly better performance than conventional back-side readout. Spatial resolutions <1.6 mm full-width-at-half-maximum (FWHM) were measured at the detector centre in response to an approximately 0.54 mm FWHM diameter test beam. Hardly any resolution losses were observed at angles of incidence up to 45 degrees , demonstrating excellent DOI correction. About 14% FWHM energy resolution was obtained. The timing resolution, measured in coincidence with a BaF(2) detector, equals 960 ps FWHM."
},
{
"pmid": "18218398",
"title": "Maximum likelihood positioning in the scintillation camera using depth of interaction.",
"abstract": "Specific effects of the depth of interaction (DOI) on the photomultiplier (PM) response in an Auger gamma camera were quantified. The method was implemented and tested on a Monte Carlo simulator with special care to the noise modeling. Two models were developed, one considering only the geometric aspects of the camera and used for comparison, and one describing a more realistic camera environment. In a typical camera configuration and 140-keV photons, the DOI alone can account for a 6.4-mm discrepancy in position and 12% in energy between two scintillations. Variation of the DOI can still bring additional distortions when photons do not enter the crystal perpendicularly such as in slant hole, cone beam and other focusing collimators. With a 0.95-cm crystal and a 30 degrees slant angle, the obliquity factor can be responsible for a 5.5-mm variation in the event position. Results indicate that both geometrical and stochastic effects of the DOI are definitely reducing the camera performances and should be included in the image formation process."
},
{
"pmid": "19265203",
"title": "Monolithic scintillator PET detectors with intrinsic depth-of-interaction correction.",
"abstract": "We developed positron emission tomography (PET) detectors based on monolithic scintillation crystals and position-sensitive light sensors. Intrinsic depth-of-interaction (DOI) correction is achieved by deriving the entry points of annihilation photons on the front surface of the crystal from the light sensor signals. Here we characterize the next generation of these detectors, consisting of a 20 mm thick rectangular or trapezoidal LYSO:Ce crystal read out on the front and the back (double-sided readout, DSR) by Hamamatsu S8550SPL avalanche photodiode (APD) arrays optimized for DSR. The full width at half maximum (FWHM) of the detector point-spread function (PSF) obtained with a rectangular crystal at normal incidence equals approximately 1.05 mm at the detector centre, after correction for the approximately 0.9 mm diameter test beam of annihilation photons. Resolution losses of several tenths of a mm occur near the crystal edges. Furthermore, trapezoidal crystals perform almost equally well as rectangular ones, while improving system sensitivity. Due to the highly accurate DOI correction of all detectors, the spatial resolution remains essentially constant for angles of incidence of up to at least 30 degrees . Energy resolutions of approximately 11% FWHM are measured, with a fraction of events of up to 75% in the full-energy peak. The coincidence timing resolution is estimated to be 2.8 ns FWHM. The good spatial, energy and timing resolutions, together with the excellent DOI correction and high detection efficiency of our detectors, are expected to facilitate high and uniform PET system resolution."
},
{
"pmid": "21693789",
"title": "A practical method for depth of interaction determination in monolithic scintillator PET detectors.",
"abstract": "Several new methods for determining the depth of interaction (DOI) of annihilation photons in monolithic scintillator detectors with single-sided, multi-pixel readout are investigated. The aim is to develop a DOI decoding method that allows for practical implementation in a positron emission tomography system. Specifically, calibration data, obtained with perpendicularly incident gamma photons only, are being used. Furthermore, neither detector modifications nor a priori knowledge of the light transport and/or signal variances is required. For this purpose, a clustering approach is utilized in combination with different parameters correlated with the DOI, such as the degree of similarity to a set of reference light distributions, the measured intensity on the sensor pixel(s) closest to the interaction position and the peak intensity of the measured light distribution. The proposed methods were tested experimentally on a detector comprised of a 20 mm × 20 mm × 12 mm polished LYSO:Ce crystal coupled to a 4 × 4 multi-anode photomultiplier. The method based on the linearly interpolated measured intensities on the sensor pixels closest to the estimated (x, y)-coordinate outperformed the other methods, yielding DOI resolutions between ∼1 and ∼4.5 mm FWHM depending on the DOI, the (x, y) resolution and the amount of reference data used."
},
{
"pmid": "12030568",
"title": "Inorganic scintillators in medical imaging.",
"abstract": "A review of medical diagnostic imaging methods utilizing x-rays or gamma rays and the application and development of inorganic scintillators is presented."
},
{
"pmid": "20533832",
"title": "Transparent infrared-emitting CeF3:Yb-Er polymer nanocomposites for optical applications.",
"abstract": "Bright infrared-emitting nanocomposites of unmodified CeF(3):Yb-Er with polymethyl-methacrylate (PMMA) and polystyrene (PS), which offer a vast range of potential applications, which include optical amplifiers, waveguides, laser materials, and implantable medical devices, were developed. For the optical application of these nanocomposites, it is critical to obtain highly transparent composites to minimize absorption and scattering losses. Preparation of transparent composites typically requires powder processing approaches that include sophisticated particle size control, deagglomeration, and dispersion stabilization methods leading to an increase in process complexity and processing steps. This work seeks to prepare transparent composites with high solids loading (>5 vol%) by matching the refractive index of the inorganic particle with low cost polymers like PMMA and PS, so as to circumvent the use of any complex processing techniques or particle surface modification. PS nanocomposites were found to exhibit better transparency than the PMMA nanocomposites, especially at high solids loading (>/=10 vol%). It was found that the optical transparency of PMMA nanocomposites was more significantly affected by the increase in solids loading and inorganic particle size because of the larger refractive index mismatch of the PMMA nanocomposites compared to that of PS nanocomposites. Rayleigh scattering theory was used to provide a theoretical estimate of the scattering losses in these ceramic-polymer nanocomposites."
},
{
"pmid": "12615908",
"title": "Fluorescence resonance energy transfer (FRET) microscopy imaging of live cell protein localizations.",
"abstract": "The current advances in fluorescence microscopy, coupled with the development of new fluorescent probes, make fluorescence resonance energy transfer (FRET) a powerful technique for studying molecular interactions inside living cells with improved spatial (angstrom) and temporal (nanosecond) resolution, distance range, and sensitivity and a broader range of biological applications."
},
{
"pmid": "17404465",
"title": "Depth of interaction decoding of a continuous crystal detector module.",
"abstract": "We present a clustering method to extract the depth of interaction (DOI) information from an 8 mm thick crystal version of our continuous miniature crystal element (cMiCE) small animal PET detector. This clustering method, based on the maximum-likelihood (ML) method, can effectively build look-up tables (LUT) for different DOI regions. Combined with our statistics-based positioning (SBP) method, which uses a LUT searching algorithm based on the ML method and two-dimensional mean-variance LUTs of light responses from each photomultiplier channel with respect to different gamma ray interaction positions, the position of interaction and DOI can be estimated simultaneously. Data simulated using DETECT2000 were used to help validate our approach. An experiment using our cMiCE detector was designed to evaluate the performance. Two and four DOI region clustering were applied to the simulated data. Two DOI regions were used for the experimental data. The misclassification rate for simulated data is about 3.5% for two DOI regions and 10.2% for four DOI regions. For the experimental data, the rate is estimated to be approximately 25%. By using multi-DOI LUTs, we also observed improvement of the detector spatial resolution, especially for the corner region of the crystal. These results show that our ML clustering method is a consistent and reliable way to characterize DOI in a continuous crystal detector without requiring any modifications to the crystal or detector front end electronics. The ability to characterize the depth-dependent light response function from measured data is a major step forward in developing practical detectors with DOI positioning capability."
},
{
"pmid": "20959686",
"title": "Nonlinear least-squares modeling of 3D interaction position in a monolithic scintillator block.",
"abstract": "This paper presents a study of possible models to describe the relation between the scintillation light point-of-origin and the measured photo detector pixel signals in monolithic scintillation crystals. From these models the X, Y and depth of interaction (DOI) coordinates can be estimated simultaneously by nonlinear least-square fitting. The method depends only on the information embedded in the signals of individual events, and therefore does not need any prior position training or calibration. Three possible distributions of the light sources were evaluated: an exact solid-angle-based distribution, an approximate solid-angle distribution and an extended approximate solid-angle-based distribution which includes internal reflection at side and bottom surfaces. The performance of the general model using these three distributions was studied using Monte Carlo simulated data of a 20 x 20 x 10 mm lutetium oxyorthosilicate (Lu₂SiO₅ or LSO) block read out by 2 Hamamatsu S8550 avalanche photo diode arrays. The approximate solid-angle-based model had the best compromise between resolution and simplicity. This model was also evaluated using experimental data by positioning a narrow 1.2 mm full width at half maximum (FWHM) beam of 511 keV photons at known positions on the 20 x 20 x 10 mm LSO block. An average intrinsic resolution in the X-direction of 1.4 mm FWHM was obtained for positions covering the complete block. The intrinsic DOI resolution was estimated at 2.6 mm FWHM."
},
{
"pmid": "25884464",
"title": "Simulation study of PET detector limitations using continuous crystals.",
"abstract": "Continuous crystals can potentially obtain better intrinsic detector spatial resolution compared to pixelated crystals, additionally providing depth of interaction (DoI) information from the light distribution. To achieve high performance sophisticated interaction position estimation algorithms are required. There are a number of algorithms in the literature applied to different crystal dimensions and different photodetectors. However, the different crystal properties and photodetector array geometries have an impact on the algorithm performance. In this work we analysed, through Monte Carlo simulations, different combinations of realistic crystals and photodetector parameters to better understand their influence on the interaction position estimation accuracy, with special emphasis on the DoI. We used an interaction position estimation based on an analytical model for the present work. Different photodetector granulation schemes were investigated. The impact of the number of crystal faces readout by photodetectors was studied by simulating scenarios with one and two photodetectors. In addition, crystals with different levels of reflection and aspect ratios (AR) were analysed. Results showed that the impact of photodetector granularity is mainly shown near the edges and specially in the corners of the crystal. The resulting intrinsic spatial resolution near the centre with a 12 × 12 × 10 mm(3) LYSO crystal was 0.7-0.9 mm, while the average spatial resolution calculated on the entire crystal was 0.77 ± 0.18 mm for all the simulated geometries with one and two photodetectors. Having front and back photodetectors reduced the DoI bias (Euclidean distance between estimated DoI and real DoI) and improved the transversal resolution near the corners. In scenarios with one photodetector, small AR resulted in DoI inaccuracies for absorbed events at the entrance of the crystal. These inaccuracies were slightly reduced either by increasing the AR or reducing the amount of reflected light, and highly mitigated using two photodetectors. Using one photodetector, we obtained a piecewise DoI error model with a DoI resolution of 0.4-0.9 mm for a 1.2 AR crystal, and we observed that including a second photodetector or reducing the amount of reflections reduced the DoI bias but did not significantly improve the DoI resolution. Translating the piecewise DoI error model obtained in this study to image reconstruction we obtained a spatial resolution variability of 0.39 mm using 85% of the FoV, compared to 2.59 mm and 1.87 mm without DoI correction or with a dual layer system, respectively."
},
{
"pmid": "15552416",
"title": "GATE: a simulation toolkit for PET and SPECT.",
"abstract": "Monte Carlo simulation is an essential tool in emission tomography that can assist in the design of new medical imaging devices, the optimization of acquisition protocols and the development or assessment of image reconstruction algorithms and correction techniques. GATE, the Geant4 Application for Tomographic Emission, encapsulates the Geant4 libraries to achieve a modular, versatile, scripted simulation toolkit adapted to the field of nuclear medicine. In particular, GATE allows the description of time-dependent phenomena such as source or detector movement, and source decay kinetics. This feature makes it possible to simulate time curves under realistic acquisition conditions and to test dynamic reconstruction algorithms. This paper gives a detailed description of the design and development of GATE by the OpenGATE collaboration, whose continuing objective is to improve, document and validate GATE by simulating commercially available imaging systems for PET and SPECT. Large effort is also invested in the ability and the flexibility to model novel detection systems or systems still under design. A public release of GATE licensed under the GNU Lesser General Public License can be downloaded at http:/www-lphe.epfl.ch/GATE/. Two benchmarks developed for PET and SPECT to test the installation of GATE and to serve as a tutorial for the users are presented. Extensive validation of the GATE simulation platform has been started, comparing simulations and measurements on commercially available acquisition systems. References to those results are listed. The future prospects towards the gridification of GATE and its extension to other domains such as dosimetry are also discussed."
},
{
"pmid": "21248393",
"title": "GATE V6: a major enhancement of the GATE simulation platform enabling modelling of CT and radiotherapy.",
"abstract": "GATE (Geant4 Application for Emission Tomography) is a Monte Carlo simulation platform developed by the OpenGATE collaboration since 2001 and first publicly released in 2004. Dedicated to the modelling of planar scintigraphy, single photon emission computed tomography (SPECT) and positron emission tomography (PET) acquisitions, this platform is widely used to assist PET and SPECT research. A recent extension of this platform, released by the OpenGATE collaboration as GATE V6, now also enables modelling of x-ray computed tomography and radiation therapy experiments. This paper presents an overview of the main additions and improvements implemented in GATE since the publication of the initial GATE paper (Jan et al 2004 Phys. Med. Biol. 49 4543-61). This includes new models available in GATE to simulate optical and hadronic processes, novelties in modelling tracer, organ or detector motion, new options for speeding up GATE simulations, examples illustrating the use of GATE V6 in radiotherapy applications and CT simulations, and preliminary results regarding the validation of GATE V6 for radiation therapy applications. Upon completion of extensive validation studies, GATE is expected to become a valuable tool for simulations involving both radiotherapy and imaging."
},
{
"pmid": "20443512",
"title": "Model of the point spread function of monolithic scintillator PET detectors for perpendicular incidence.",
"abstract": "PURPOSE\nPreviously, we demonstrated the potential of positron emission tomography detectors consisting of monolithic scintillation crystals read out by arrays of solid-state light sensors. We reported detector spatial resolutions of 1.1-1.3 mm full width at half maximum (FWHM) with no degradation for angles of incidence up to 30 degrees, energy resolutions of approximately 11% FWHM, and timing resolutions of approximately 2 ns FWHM, using monolithic LYSO:Ce3+ crystals coupled to avalanche photodiode (APD) arrays. Here we develop, validate, and demonstrate a simple model of the detector point spread function (PSF) of such monolithic scintillator detectors.\n\n\nMETHODS\nA PSF model was developed that essentially consists of two convolved components, one accounting for the spatial distribution of the energy deposited by annihilation photons within the crystal, and the other for the influences of statistical signal fluctuations and electronic noise. The model was validated through comparison with spatial resolution measurements on a detector consisting of an LYSO:Ce3+ crystal read out by two APD arrays.\n\n\nRESULTS\nThe model is shown to describe the measured detector spatial response well at the noise levels found in the experiments. In addition, it is demonstrated how the model can be used to correct the measured spatial response for the influence of the finite diameter of the annihilation photon beam used in the experiments, thus obtaining an estimate of the intrinsic detector PSF.\n\n\nCONCLUSIONS\nDespite its simplicity, the proposed model is an accurate tool for analyzing the detector PSF of monolithic scintillator detectors and can be used to estimate the intrinsic detector PSF from the measured one."
},
{
"pmid": "20182005",
"title": "Optical simulation of monolithic scintillator detectors using GATE/GEANT4.",
"abstract": "Much research is being conducted on position-sensitive scintillation detectors for medical imaging, particularly for emission tomography. Monte Carlo simulations play an essential role in many of these research activities. As the scintillation process, the transport of scintillation photons through the crystal(s), and the conversion of these photons into electronic signals each have a major influence on the detector performance; all of these processes may need to be incorporated in the model to obtain accurate results. In this work the optical and scintillation models of the GEANT4 simulation toolkit are validated by comparing simulations and measurements on monolithic scintillator detectors for high-resolution positron emission tomography (PET). We have furthermore made the GEANT4 optical models available within the user-friendly GATE simulation platform (as of version 3.0). It is shown how the necessary optical input parameters can be determined with sufficient accuracy. The results show that the optical physics models of GATE/GEANT4 enable accurate prediction of the spatial and energy resolution of monolithic scintillator PET detectors."
}
] |
BMC Medical Informatics and Decision Making | 32000770 | PMC6993314 | 10.1186/s12911-020-1026-2 | Customization scenarios for de-identification of clinical notes | BackgroundAutomated machine-learning systems are able to de-identify electronic medical records, including free-text clinical notes. Use of such systems would greatly boost the amount of data available to researchers, yet their deployment has been limited due to uncertainty about their performance when applied to new datasets.ObjectiveWe present practical options for clinical note de-identification, assessing performance of machine learning systems ranging from off-the-shelf to fully customized.MethodsWe implement a state-of-the-art machine learning de-identification system, training and testing on pairs of datasets that match the deployment scenarios. We use clinical notes from two i2b2 competition corpora, the Physionet Gold Standard corpus, and parts of the MIMIC-III dataset.ResultsFully customized systems remove 97–99% of personally identifying information. Performance of off-the-shelf systems varies by dataset, with performance mostly above 90%. Providing a small labeled dataset or large unlabeled dataset allows for fine-tuning that improves performance over off-the-shelf systems.ConclusionHealth organizations should be aware of the levels of customization available when selecting a de-identification deployment solution, in order to choose the one that best matches their resources and target performance level. | Related workThe first automated approaches for medical free-text de-identification were proposed in the late 1990s and were mainly rule-based [10, 11]. Subsequent work applied machine-learning algorithms and statistical methods such as decision trees [12] and support vector machines [13–15]. These methods required substantial feature-engineering efforts. In the last few years, techniques have shifted towards artificial neural networks and in particular deep neural networks; Yogarajan et al. review current trends [16]. Dernoncourt et al. [5] were the first to use artificial neural networks directly for de-identification of medical texts, showing improved performance. Recently, artificial neural networks were used in several studies, often in combination with rule-based heuristics [6, 17, 18]. Although in practice heuristics are recommended [19], in our work we choose not to use them in order to isolate the contribution of the machine learning model.Our partial customization scenario with labeled examples is an example of semi-supervised transfer learning/domain adaptation; we build on the work of Lee JY et al. in neural networks [20]. Lee H-J et al. compare 3 transfer learning techniques for de-identification [21]. Kim et al. study questions similar to ours but for concept extraction from notes, also concluding that transfer learning improves performance of a general model [22]. Our partial customization scenario using unlabeled examples falls under unsupervised domain adaptation, techniques for which include domain-specific embeddings [23] and propensity score weighting [24]. Our off-the-shelf scenario serves as a baseline for both adaptation scenarios. | [
"29589569",
"26293868",
"24502938",
"28040687",
"28579533",
"18652655",
"22692265",
"26225918",
"14983930",
"18053696",
"28602904",
"29854172",
"29854175",
"26958209",
"17600094",
"26319540",
"10851218",
"30217670"
] | [
{
"pmid": "29589569",
"title": "A bibliometric analysis of natural language processing in medical research.",
"abstract": "BACKGROUND\nNatural language processing (NLP) has become an increasingly significant role in advancing medicine. Rich research achievements of NLP methods and applications for medical information processing are available. It is of great significance to conduct a deep analysis to understand the recent development of NLP-empowered medical research field. However, limited study examining the research status of this field could be found. Therefore, this study aims to quantitatively assess the academic output of NLP in medical research field.\n\n\nMETHODS\nWe conducted a bibliometric analysis on NLP-empowered medical research publications retrieved from PubMed in the period 2007-2016. The analysis focused on three aspects. Firstly, the literature distribution characteristics were obtained with a statistics analysis method. Secondly, a network analysis method was used to reveal scientific collaboration relations. Finally, thematic discovery and evolution was reflected using an affinity propagation clustering method.\n\n\nRESULTS\nThere were 1405 NLP-empowered medical research publications published during the 10 years with an average annual growth rate of 18.39%. 10 most productive publication sources together contributed more than 50% of the total publications. The USA had the highest number of publications. A moderately significant correlation between country's publications and GDP per capita was revealed. Denny, Joshua C was the most productive author. Mayo Clinic was the most productive affiliation. The annual co-affiliation and co-country rates reached 64.04% and 15.79% in 2016, respectively. 10 main great thematic areas were identified including Computational biology, Terminology mining, Information extraction, Text classification, Social medium as data source, Information retrieval, etc. CONCLUSIONS: A bibliometric analysis of NLP-empowered medical research publications for uncovering the recent research status is presented. The results can assist relevant researchers, especially newcomers in understanding the research development systematically, seeking scientific cooperation partners, optimizing research topic choices and monitoring new scientific or technological activities."
},
{
"pmid": "26293868",
"title": "Clinical Natural Language Processing in 2014: Foundational Methods Supporting Efficient Healthcare.",
"abstract": "OBJECTIVE\nTo summarize recent research and present a selection of the best papers published in 2014 in the field of clinical Natural Language Processing (NLP).\n\n\nMETHOD\nA systematic review of the literature was performed by the two section editors of the IMIA Yearbook NLP section by searching bibliographic databases with a focus on NLP efforts applied to clinical texts or aimed at a clinical outcome. A shortlist of candidate best papers was first selected by the section editors before being peer-reviewed by independent external reviewers.\n\n\nRESULTS\nThe clinical NLP best paper selection shows that the field is tackling text analysis methods of increasing depth. The full review process highlighted five papers addressing foundational methods in clinical NLP using clinically relevant texts from online forums or encyclopedias, clinical texts from Electronic Health Records, and included studies specifically aiming at a practical clinical outcome. The increased access to clinical data that was made possible with the recent progress of de-identification paved the way for the scientific community to address complex NLP problems such as word sense disambiguation, negation, temporal analysis and specific information nugget extraction. These advances in turn allowed for efficient application of NLP to clinical problems such as cancer patient triage. Another line of research investigates online clinically relevant texts and brings interesting insight on communication strategies to convey health-related information.\n\n\nCONCLUSIONS\nThe field of clinical NLP is thriving through the contributions of both NLP researchers and healthcare professionals interested in applying NLP techniques for concrete healthcare purposes. Clinical NLP is becoming mature for practical applications with a significant clinical impact."
},
{
"pmid": "24502938",
"title": "Text de-identification for privacy protection: a study of its impact on clinical text information content.",
"abstract": "As more and more electronic clinical information is becoming easier to access for secondary uses such as clinical research, approaches that enable faster and more collaborative research while protecting patient privacy and confidentiality are becoming more important. Clinical text de-identification offers such advantages but is typically a tedious manual process. Automated Natural Language Processing (NLP) methods can alleviate this process, but their impact on subsequent uses of the automatically de-identified clinical narratives has only barely been investigated. In the context of a larger project to develop and investigate automated text de-identification for Veterans Health Administration (VHA) clinical notes, we studied the impact of automated text de-identification on clinical information in a stepwise manner. Our approach started with a high-level assessment of clinical notes informativeness and formatting, and ended with a detailed study of the overlap of select clinical information types and Protected Health Information (PHI). To investigate the informativeness (i.e., document type information, select clinical data types, and interpretation or conclusion) of VHA clinical notes, we used five different existing text de-identification systems. The informativeness was only minimally altered by these systems while formatting was only modified by one system. To examine the impact of de-identification on clinical information extraction, we compared counts of SNOMED-CT concepts found by an open source information extraction application in the original (i.e., not de-identified) version of a corpus of VHA clinical notes, and in the same corpus after de-identification. Only about 1.2-3% less SNOMED-CT concepts were found in de-identified versions of our corpus, and many of these concepts were PHI that was erroneously identified as clinical information. To study this impact in more details and assess how generalizable our findings were, we examined the overlap between select clinical information annotated in the 2010 i2b2 NLP challenge corpus and automatic PHI annotations from our best-of-breed VHA clinical text de-identification system (nicknamed 'BoB'). Overall, only 0.81% of the clinical information exactly overlapped with PHI, and 1.78% partly overlapped. We conclude that automated text de-identification's impact on clinical information is small, but not negligible, and that improved clinical acronyms and eponyms disambiguation could significantly reduce this impact."
},
{
"pmid": "28040687",
"title": "De-identification of patient notes with recurrent neural networks.",
"abstract": "OBJECTIVE\nPatient notes in electronic health records (EHRs) may contain critical information for medical investigations. However, the vast majority of medical investigators can only access de-identified notes, in order to protect the confidentiality of patients. In the United States, the Health Insurance Portability and Accountability Act (HIPAA) defines 18 types of protected health information that needs to be removed to de-identify patient notes. Manual de-identification is impractical given the size of electronic health record databases, the limited number of researchers with access to non-de-identified notes, and the frequent mistakes of human annotators. A reliable automated de-identification system would consequently be of high value.\n\n\nMATERIALS AND METHODS\nWe introduce the first de-identification system based on artificial neural networks (ANNs), which requires no handcrafted features or rules, unlike existing systems. We compare the performance of the system with state-of-the-art systems on two datasets: the i2b2 2014 de-identification challenge dataset, which is the largest publicly available de-identification dataset, and the MIMIC de-identification dataset, which we assembled and is twice as large as the i2b2 2014 dataset.\n\n\nRESULTS\nOur ANN model outperforms the state-of-the-art systems. It yields an F1-score of 97.85 on the i2b2 2014 dataset, with a recall of 97.38 and a precision of 98.32, and an F1-score of 99.23 on the MIMIC de-identification dataset, with a recall of 99.25 and a precision of 99.21.\n\n\nCONCLUSION\nOur findings support the use of ANNs for de-identification of patient notes, as they show better performance than previously published systems while requiring no manual feature engineering."
},
{
"pmid": "28579533",
"title": "De-identification of clinical notes via recurrent neural network and conditional random field.",
"abstract": "De-identification, identifying information from data, such as protected health information (PHI) present in clinical data, is a critical step to enable data to be shared or published. The 2016 Centers of Excellence in Genomic Science (CEGS) Neuropsychiatric Genome-scale and RDOC Individualized Domains (N-GRID) clinical natural language processing (NLP) challenge contains a de-identification track in de-identifying electronic medical records (EMRs) (i.e., track 1). The challenge organizers provide 1000 annotated mental health records for this track, 600 out of which are used as a training set and 400 as a test set. We develop a hybrid system for the de-identification task on the training set. Firstly, four individual subsystems, that is, a subsystem based on bidirectional LSTM (long-short term memory, a variant of recurrent neural network), a subsystem-based on bidirectional LSTM with features, a subsystem based on conditional random field (CRF) and a rule-based subsystem, are used to identify PHI instances. Then, an ensemble learning-based classifiers is deployed to combine all PHI instances predicted by above three machine learning-based subsystems. Finally, the results of the ensemble learning-based classifier and the rule-based subsystem are merged together. Experiments conducted on the official test set show that our system achieves the highest micro F1-scores of 93.07%, 91.43% and 95.23% under the \"token\", \"strict\" and \"binary token\" criteria respectively, ranking first in the 2016 CEGS N-GRID NLP challenge. In addition, on the dataset of 2014 i2b2 NLP challenge, our system achieves the highest micro F1-scores of 96.98%, 95.11% and 98.28% under the \"token\", \"strict\" and \"binary token\" criteria respectively, outperforming other state-of-the-art systems. All these experiments prove the effectiveness of our proposed method."
},
{
"pmid": "18652655",
"title": "Automated de-identification of free-text medical records.",
"abstract": "BACKGROUND\nText-based patient medical records are a vital resource in medical research. In order to preserve patient confidentiality, however, the U.S. Health Insurance Portability and Accountability Act (HIPAA) requires that protected health information (PHI) be removed from medical records before they can be disseminated. Manual de-identification of large medical record databases is prohibitively expensive, time-consuming and prone to error, necessitating automatic methods for large-scale, automated de-identification.\n\n\nMETHODS\nWe describe an automated Perl-based de-identification software package that is generally usable on most free-text medical records, e.g., nursing notes, discharge summaries, X-ray reports, etc. The software uses lexical look-up tables, regular expressions, and simple heuristics to locate both HIPAA PHI, and an extended PHI set that includes doctors' names and years of dates. To develop the de-identification approach, we assembled a gold standard corpus of re-identified nursing notes with real PHI replaced by realistic surrogate information. This corpus consists of 2,434 nursing notes containing 334,000 words and a total of 1,779 instances of PHI taken from 163 randomly selected patient records. This gold standard corpus was used to refine the algorithm and measure its sensitivity. To test the algorithm on data not used in its development, we constructed a second test corpus of 1,836 nursing notes containing 296,400 words. The algorithm's false negative rate was evaluated using this test corpus.\n\n\nRESULTS\nPerformance evaluation of the de-identification software on the development corpus yielded an overall recall of 0.967, precision value of 0.749, and fallout value of approximately 0.002. On the test corpus, a total of 90 instances of false negatives were found, or 27 per 100,000 word count, with an estimated recall of 0.943. Only one full date and one age over 89 were missed. No patient names were missed in either corpus.\n\n\nCONCLUSION\nWe have developed a pattern-matching de-identification system based on dictionary look-ups, regular expressions, and heuristics. Evaluation based on two different sets of nursing notes collected from a U.S. hospital suggests that, in terms of recall, the software out-performs a single human de-identifier (0.81) and performs at least as well as a consensus of two human de-identifiers (0.94). The system is currently tuned to de-identify PHI in nursing notes and discharge summaries but is sufficiently generalized and can be customized to handle text files of any format. Although the accuracy of the algorithm is high, it is probably insufficient to be used to publicly disseminate medical data. The open-source de-identification software and the gold standard re-identified corpus of medical records have therefore been made available to researchers via the PhysioNet website to encourage improvements in the algorithm."
},
{
"pmid": "22692265",
"title": "Strategies for de-identification and anonymization of electronic health record data for use in multicenter research studies.",
"abstract": "BACKGROUND\nDe-identification and anonymization are strategies that are used to remove patient identifiers in electronic health record data. The use of these strategies in multicenter research studies is paramount in importance, given the need to share electronic health record data across multiple environments and institutions while safeguarding patient privacy.\n\n\nMETHODS\nSystematic literature search using keywords of de-identify, deidentify, de-identification, deidentification, anonymize, anonymization, data scrubbing, and text scrubbing. Search was conducted up to June 30, 2011 and involved 6 different common literature databases. A total of 1798 prospective citations were identified, and 94 full-text articles met the criteria for review and the corresponding articles were obtained. Search results were supplemented by review of 26 additional full-text articles; a total of 120 full-text articles were reviewed.\n\n\nRESULTS\nA final sample of 45 articles met inclusion criteria for review and discussion. Articles were grouped into text, images, and biological sample categories. For text-based strategies, the approaches were segregated into heuristic, lexical, and pattern-based systems versus statistical learning-based systems. For images, approaches that de-identified photographic facial images and magnetic resonance image data were described. For biological samples, approaches that managed the identifiers linked with these samples were discussed, particularly with respect to meeting the anonymization requirements needed for Institutional Review Board exemption under the Common Rule.\n\n\nCONCLUSIONS\nCurrent de-identification strategies have their limitations, and statistical learning-based systems have distinct advantages over other approaches for the de-identification of free text. True anonymization is challenging, and further work is needed in the areas of de-identification of datasets and protection of genetic information."
},
{
"pmid": "26225918",
"title": "Automated systems for the de-identification of longitudinal clinical narratives: Overview of 2014 i2b2/UTHealth shared task Track 1.",
"abstract": "The 2014 i2b2/UTHealth Natural Language Processing (NLP) shared task featured four tracks. The first of these was the de-identification track focused on identifying protected health information (PHI) in longitudinal clinical narratives. The longitudinal nature of clinical narratives calls particular attention to details of information that, while benign on their own in separate records, can lead to identification of patients in combination in longitudinal records. Accordingly, the 2014 de-identification track addressed a broader set of entities and PHI than covered by the Health Insurance Portability and Accountability Act - the focus of the de-identification shared task that was organized in 2006. Ten teams tackled the 2014 de-identification task and submitted 22 system outputs for evaluation. Each team was evaluated on their best performing system output. Three of the 10 systems achieved F1 scores over .90, and seven of the top 10 scored over .75. The most successful systems combined conditional random fields and hand-written rules. Our findings indicate that automated systems can be very effective for this task, but that de-identification is not yet a solved problem."
},
{
"pmid": "14983930",
"title": "Evaluation of a deidentification (De-Id) software engine to share pathology reports and clinical documents for research.",
"abstract": "We evaluated a comprehensive deidentification engine at the University of Pittsburgh Medical Center (UPMC), Pittsburgh, PA, that uses a complex set of rules, dictionaries, pattern-matching algorithms, and the Unified Medical Language System to identify and replace identifying text in clinical reports while preserving medical information for sharing in research. In our initial data set of 967 surgical pathology reports, the software did not suppress outside (103), UPMC (47), and non-UPMC (56) accession numbers; dates (7); names (9) or initials (25) of case pathologists; or hospital or laboratory names (46). In 150 reports, some clinical information was suppressed inadvertently (overmarking). The engine retained eponymic patient names, eg, Barrett and Gleason. In the second evaluation (1,000 reports), the software did not suppress outside (90) or UPMC (6) accession numbers or names (4) or initials (2) of case pathologists. In the third evaluation, the software removed names of patients, hospitals (297/300), pathologists (297/300), transcriptionists, residents and physicians, dates of procedures, and accession numbers (298/300). By the end of the evaluation, the system was reliably and specifically removing safe-harbor identifiers and producing highly readable deidentified text without removing important clinical information. Collaboration between pathology domain experts and system developers and continuous quality assurance are needed to optimize ongoing deidentification processes."
},
{
"pmid": "18053696",
"title": "A de-identifier for medical discharge summaries.",
"abstract": "OBJECTIVE\nClinical records contain significant medical information that can be useful to researchers in various disciplines. However, these records also contain personal health information (PHI) whose presence limits the use of the records outside of hospitals. The goal of de-identification is to remove all PHI from clinical records. This is a challenging task because many records contain foreign and misspelled PHI; they also contain PHI that are ambiguous with non-PHI. These complications are compounded by the linguistic characteristics of clinical records. For example, medical discharge summaries, which are studied in this paper, are characterized by fragmented, incomplete utterances and domain-specific language; they cannot be fully processed by tools designed for lay language.\n\n\nMETHODS AND RESULTS\nIn this paper, we show that we can de-identify medical discharge summaries using a de-identifier, Stat De-id, based on support vector machines and local context (F-measure=97% on PHI). Our representation of local context aids de-identification even when PHI include out-of-vocabulary words and even when PHI are ambiguous with non-PHI within the same corpus. Comparison of Stat De-id with a rule-based approach shows that local context contributes more to de-identification than dictionaries combined with hand-tailored heuristics (F-measure=85%). Comparison with two well-known named entity recognition (NER) systems, SNoW (F-measure=94%) and IdentiFinder (F-measure=36%), on five representative corpora show that when the language of documents is fragmented, a system with a relatively thorough representation of local context can be a more effective de-identifier than systems that combine (relatively simpler) local context with global context. Comparison with a Conditional Random Field De-identifier (CRFD), which utilizes global context in addition to the local context of Stat De-id, confirms this finding (F-measure=88%) and establishes that strengthening the representation of local context may be more beneficial for de-identification than complementing local with global context."
},
{
"pmid": "28602904",
"title": "A hybrid approach to automatic de-identification of psychiatric notes.",
"abstract": "De-identification, or identifying and removing protected health information (PHI) from clinical data, is a critical step in making clinical data available for clinical applications and research. This paper presents a natural language processing system for automatic de-identification of psychiatric notes, which was designed to participate in the 2016 CEGS N-GRID shared task Track 1. The system has a hybrid structure that combines machine leaning techniques and rule-based approaches. The rule-based components exploit the structure of the psychiatric notes as well as characteristic surface patterns of PHI mentions. The machine learning components utilize supervised learning with rich features. In addition, the system performance was boosted with integration of additional data to the training set through domain adaptation. The hybrid system showed overall micro-averaged F-score 90.74 on the test set, second-best among all the participants of the CEGS N-GRID task."
},
{
"pmid": "29854172",
"title": "Modes of De-identification.",
"abstract": "De-identification of protected health information is an essential method for protecting patient privacy. Most institutes require de-identification of patient data prior to conducting scientific studies; therefore, it is important for clinical scientists to be cognizant of all modes of de-identification and all services provided by their de-identification tools. In this article, we discuss eight different modes of de-identification that yield de-identified data at different levels of quality. Most of these modes can be used in combination to achieve the best performance."
},
{
"pmid": "29854175",
"title": "Leveraging existing corpora for de-identification of psychiatric notes using domain adaptation.",
"abstract": "De-identification of clinical notes is a special case of named entity recognition. Supervised machine-learning (ML) algorithms have achieved promising results for this task. However, ML-based de-identification systems often require annotating a large number of clinical notes of interest, which is costly. Domain adaptation (DA) is a technology that enables learning from annotated datasets from different sources, thereby reducing annotation cost required for ML training in the target domain. In this study, we investigate the use of DA methods for deidentification of psychiatric notes. Three state-of-the-art DA methods: instance pruning, instance weighting, and feature augmentation are applied to three source corpora of annotated hospital discharge summaries, outpatient notes, and a mixture of different note types written for diabetic patients. Our results show that DA can increase deidentification performance over the baselines, indicating that it can effectively reduce annotation cost for the target psychiatric notes. Feature augmentation is shown to increase performance the most among the three DA methods. Performance variation among the different types of clinical notes is also observed, showing that a mixture of different types of notes brings the biggest increase in performance."
},
{
"pmid": "26958209",
"title": "A Study of Concept Extraction Across Different Types of Clinical Notes.",
"abstract": "Our research investigates methods for creating effective concept extractors for specialty clinical notes. First, we present three new \"specialty area\" datasets consisting of Cardiology, Neurology, and Orthopedics clinical notes manually annotated with medical concepts. We analyze the medical concepts in each dataset and compare with the widely used i2b2 2010 corpus. Second, we create several types of concept extraction models and examine the effects of training supervised learners with specialty area data versus i2b2 data. We find substantial differences in performance across the datasets, and obtain the best results for all three specialty areas by training with both i2b2 and specialty data. Third, we explore strategies to improve concept extraction on specialty notes with ensemble methods. We compare two types of ensemble methods (Voting/Stacking) and a domain adaptation model, and show that a Stacked ensemble of classifiers trained with i2b2 and specialty data yields the best performance."
},
{
"pmid": "17600094",
"title": "Evaluating the state-of-the-art in automatic de-identification.",
"abstract": "To facilitate and survey studies in automatic de-identification, as a part of the i2b2 (Informatics for Integrating Biology to the Bedside) project, authors organized a Natural Language Processing (NLP) challenge on automatically removing private health information (PHI) from medical discharge records. This manuscript provides an overview of this de-identification challenge, describes the data and the annotation process, explains the evaluation metrics, discusses the nature of the systems that addressed the challenge, analyzes the results of received system runs, and identifies directions for future research. The de-indentification challenge data consisted of discharge summaries drawn from the Partners Healthcare system. Authors prepared this data for the challenge by replacing authentic PHI with synthesized surrogates. To focus the challenge on non-dictionary-based de-identification methods, the data was enriched with out-of-vocabulary PHI surrogates, i.e., made up names. The data also included some PHI surrogates that were ambiguous with medical non-PHI terms. A total of seven teams participated in the challenge. Each team submitted up to three system runs, for a total of sixteen submissions. The authors used precision, recall, and F-measure to evaluate the submitted system runs based on their token-level and instance-level performance on the ground truth. The systems with the best performance scored above 98% in F-measure for all categories of PHI. Most out-of-vocabulary PHI could be identified accurately. However, identifying ambiguous PHI proved challenging. The performance of systems on the test data set is encouraging. Future evaluations of these systems will involve larger data sets from more heterogeneous sources."
},
{
"pmid": "26319540",
"title": "Annotating longitudinal clinical narratives for de-identification: The 2014 i2b2/UTHealth corpus.",
"abstract": "The 2014 i2b2/UTHealth natural language processing shared task featured a track focused on the de-identification of longitudinal medical records. For this track, we de-identified a set of 1304 longitudinal medical records describing 296 patients. This corpus was de-identified under a broad interpretation of the HIPAA guidelines using double-annotation followed by arbitration, rounds of sanity checking, and proof reading. The average token-based F1 measure for the annotators compared to the gold standard was 0.927. The resulting annotations were used both to de-identify the data and to set the gold standard for the de-identification track of the 2014 i2b2/UTHealth shared task. All annotated private health information were replaced with realistic surrogates automatically and then read over and corrected manually. The resulting corpus is the first of its kind made available for de-identification research. This corpus was first used for the 2014 i2b2/UTHealth shared task, during which the systems achieved a mean F-measure of 0.872 and a maximum F-measure of 0.964 using entity-based micro-averaged evaluations."
},
{
"pmid": "10851218",
"title": "PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals.",
"abstract": "The newly inaugurated Research Resource for Complex Physiologic Signals, which was created under the auspices of the National Center for Research Resources of the National Institutes of Health, is intended to stimulate current research and new investigations in the study of cardiovascular and other complex biomedical signals. The resource has 3 interdependent components. PhysioBank is a large and growing archive of well-characterized digital recordings of physiological signals and related data for use by the biomedical research community. It currently includes databases of multiparameter cardiopulmonary, neural, and other biomedical signals from healthy subjects and from patients with a variety of conditions with major public health implications, including life-threatening arrhythmias, congestive heart failure, sleep apnea, neurological disorders, and aging. PhysioToolkit is a library of open-source software for physiological signal processing and analysis, the detection of physiologically significant events using both classic techniques and novel methods based on statistical physics and nonlinear dynamics, the interactive display and characterization of signals, the creation of new databases, the simulation of physiological and other signals, the quantitative evaluation and comparison of analysis methods, and the analysis of nonstationary processes. PhysioNet is an on-line forum for the dissemination and exchange of recorded biomedical signals and open-source software for analyzing them. It provides facilities for the cooperative analysis of data and the evaluation of proposed new algorithms. In addition to providing free electronic access to PhysioBank data and PhysioToolkit software via the World Wide Web (http://www.physionet. org), PhysioNet offers services and training via on-line tutorials to assist users with varying levels of expertise."
},
{
"pmid": "30217670",
"title": "A comparison of word embeddings for the biomedical natural language processing.",
"abstract": "BACKGROUND\nWord embeddings have been prevalently used in biomedical Natural Language Processing (NLP) applications due to the ability of the vector representations being able to capture useful semantic properties and linguistic relationships between words. Different textual resources (e.g., Wikipedia and biomedical literature corpus) have been utilized in biomedical NLP to train word embeddings and these word embeddings have been commonly leveraged as feature input to downstream machine learning models. However, there has been little work on evaluating the word embeddings trained from different textual resources.\n\n\nMETHODS\nIn this study, we empirically evaluated word embeddings trained from four different corpora, namely clinical notes, biomedical publications, Wikipedia, and news. For the former two resources, we trained word embeddings using unstructured electronic health record (EHR) data available at Mayo Clinic and articles (MedLit) from PubMed Central, respectively. For the latter two resources, we used publicly available pre-trained word embeddings, GloVe and Google News. The evaluation was done qualitatively and quantitatively. For the qualitative evaluation, we randomly selected medical terms from three categories (i.e., disorder, symptom, and drug), and manually inspected the five most similar words computed by embeddings for each term. We also analyzed the word embeddings through a 2-dimensional visualization plot of 377 medical terms. For the quantitative evaluation, we conducted both intrinsic and extrinsic evaluation. For the intrinsic evaluation, we evaluated the word embeddings' ability to capture medical semantics by measruing the semantic similarity between medical terms using four published datasets: Pedersen's dataset, Hliaoutakis's dataset, MayoSRS, and UMNSRS. For the extrinsic evaluation, we applied word embeddings to multiple downstream biomedical NLP applications, including clinical information extraction (IE), biomedical information retrieval (IR), and relation extraction (RE), with data from shared tasks.\n\n\nRESULTS\nThe qualitative evaluation shows that the word embeddings trained from EHR and MedLit can find more similar medical terms than those trained from GloVe and Google News. The intrinsic quantitative evaluation verifies that the semantic similarity captured by the word embeddings trained from EHR is closer to human experts' judgments on all four tested datasets. The extrinsic quantitative evaluation shows that the word embeddings trained on EHR achieved the best F1 score of 0.900 for the clinical IE task; no word embeddings improved the performance for the biomedical IR task; and the word embeddings trained on Google News had the best overall F1 score of 0.790 for the RE task.\n\n\nCONCLUSION\nBased on the evaluation results, we can draw the following conclusions. First, the word embeddings trained from EHR and MedLit can capture the semantics of medical terms better, and find semantically relevant medical terms closer to human experts' judgments than those trained from GloVe and Google News. Second, there does not exist a consistent global ranking of word embeddings for all downstream biomedical NLP applications. However, adding word embeddings as extra features will improve results on most downstream tasks. Finally, the word embeddings trained from the biomedical domain corpora do not necessarily have better performance than those trained from the general domain corpora for any downstream biomedical NLP task."
}
] |
Frontiers in Neurorobotics | 24860492 | PMC4030164 | 10.3389/fnbot.2014.00017 | Arousal regulation and affective adaptation to human responsiveness by a robot that explores and learns a novel environment | In the context of our work in developmental robotics regarding robot–human caregiver interactions, in this paper we investigate how a “baby” robot that explores and learns novel environments can adapt its affective regulatory behavior of soliciting help from a “caregiver” to the preferences shown by the caregiver in terms of varying responsiveness. We build on two strands of previous work that assessed independently (a) the differences between two “idealized” robot profiles—a “needy” and an “independent” robot—in terms of their use of a caregiver as a means to regulate the “stress” (arousal) produced by the exploration and learning of a novel environment, and (b) the effects on the robot behaviors of two caregiving profiles varying in their responsiveness—“responsive” and “non-responsive”—to the regulatory requests of the robot. Going beyond previous work, in this paper we (a) assess the effects that the varying regulatory behavior of the two robot profiles has on the exploratory and learning patterns of the robots; (b) bring together the two strands previously investigated in isolation and take a step further by endowing the robot with the capability to adapt its regulatory behavior along the “needy” and “independent” axis as a function of the varying responsiveness of the caregiver; and (c) analyze the effects that the varying regulatory behavior has on the exploratory and learning patterns of the adaptive robot. | 1.3. Related work in robotics1.3.1. ArousalArousal has been used in artificial and robotic systems for different purposes. It has for example been used as a parameter to control the emotional displays of a robot as a function that reflects the levels of external stimulation received by an agent (Breazeal and Scassellati, 1999; Breazeal, 2003). Ogino et al. (2013) propose a motivational model of early parent–infant communication. Their model is based on the need for relatedness and its relationship to the dynamics of the pleasure and arousal in face-to-face interactions. They tested their architecture using a virtual robot on a computer which interacted with a human playing the role of the parent. To that end, their model includes a two-dimensional vector of pleasure and arousal following the circumplex model of emotions introduced by Russel (1980). The arousal of the agent is computed with respect to measures of novelty, stress and the perceived arousal of the human. The pleasure varies proportionally to the pleasure perceived, the relatedness, and the expectancy of the perception of some emotion in the human. Their study intended to reproduce the phenomenology observed during mother–infant interactions and especially during still face episodes (Tronick et al., 1979; Adamson and Frick, 2003; Nadel et al., 2005). These episodes are characterized by a decrease in pleasure and positive emotions when the attachment figure stops responding to the infant's positive signals, such as gazing and smiling. The results they present show that this model reproduces the typical drop in positive affect following a still-face episode. Although the architecture based its novelty on a predictive system learning the likeliest next action the caregiver would produce, the interplay between the behavior of the caregiver and the exploratory behavior and learning of the robot were not studied.1.3.2. 
The attachment system. In the few studies trying to model the attachment system and its dynamics, the behaviors related to attachment and their occurrence are studied in isolation from other important facets of (infant) development. Typically, socio-cognitive development is left aside, the attachment subsystem is considered on its own, and the analysis is solely concerned with the success or failure of a coping strategy or a regulatory behavior. For instance, Petters (2006a) presents simulations of caregiver–infant interactions using several control architectures based on attachment theory. The main goal of these simulations of artificial agent interactions was to model the relationships between the goals and behaviors observed in young infants. The resulting architectures were tested in unsafe or safe (secure or insecure) scenarios. Depending on parameters relating to the sensitivity of the infant agent's caregiver, the behavior of the infant would vary. Specifically, the architectures comprised several main components inspired by the literature on Attachment theory. First, an Anxiety internal variable increases when the perceptual appraisal of the situation deems it unfamiliar or unsafe. A Warmth internal variable was introduced to evaluate the positive interactions with the caregiver, as hypothesized in the Secure Base paradigm. Based on these internal variables and the current perceptions, the action selection system assigns weights to the current goals, and a winner-take-all approach is used to trigger the behavior associated with the most active one. Several variations of this architecture have been tested to include learning and adaptation from previous interactions. This adaptation was based on the success or failure to regulate the internal variables, with dynamics similar to the Animat approach to motivational systems (Cañamero, 1997; Avila-Garcia and Cañamero, 2004; Cañamero and Avila-García, 2007). For instance, the agent tries to approach its caregiver when the Anxiety variable is high, and the responsiveness or sensitivity of the carer (a built-in constant in the simulation) determines whether the carer will provide Warmth and relieve the Anxiety. The reported results clearly show some emergent categories which are believed to correspond to the ones Ainsworth brought to light (Ainsworth et al., 1978). However, the attachment behavior itself is considered in isolation from exploration and its potential consequences on development. In contrast with the models developed and tested in simulations in various studies concerning the emergence of attachment patterns (Smith and Stevens, 2002; Petters, 2006b; Stevens and Zhang, 2009), we have studied the dynamics of the dyadic interactions in a robot-centric manner. Our main aim is to improve the adaptivity of autonomous robots in order to, on the one hand, support their autonomous learning as a function of their interactions with the physical and social environment, and on the other hand improve the affective experience of the human in human–robot dyadic social interactions. However, despite the differences with the other body of work that models attachment dynamics and arousal modulation, we share the common view of the basic interplay between caring styles and behavior variations in affective adaptation. Indeed, these simulation models attempt to have a specific pattern of behavior emerge across several interactions based on a stereotypical caregiving style. 
This style is based on a sensitivity and responsiveness formalism, similarly to our work. 1.3.3. Exploration, curiosity, and intrinsic motivation. A growing body of work in the robotics research community has focused on applying Berlyne's concept of curiosity as an intrinsic motivation for developing skills in robots. Following the encouraging results of the “playground experiment” by Oudeyer et al. (2007) and the advances in self-assessment measures related to novelty and learning progress (Şimşek and Barto, 2004), research has been devoted to improving the exploratory behavior and self-development of autonomous agents and robots. Most often, these architectures use some evaluation of the progress of the agent in terms of learning, computed as the decrease of the prediction error of the Learning System of the robot (Kaplan and Oudeyer, 2004). Typical architectures modeling curiosity aimed at guiding the exploration of a developing robot often focus on specific task-learning problems (Kaplan and Oudeyer, 2005; Luciw et al., 2011) and do not take advantage of the potential availability of humans. However, this principle has also been successfully applied to influence and help a robot in navigation tasks (Hasson and Gaussier, 2010; Jauffret et al., 2013). In this contribution, the authors use self-evaluation measures of success and failure for the robot to express its “frustration” and trigger help from a human when frustration is too high. They show how this strategy can help the robot subjectively identify deadlock situations and be assisted in solving a given problem with the help of a human. 1.3.4. Our previous work. Our previous experiments with Aibo robots examined the difference in regulatory behaviors used by the robot, and incidentally their effect on its exploration and learning patterns, when interacting with a responsive human and a non-responsive one. The results showed how a responsive human had a strong influence on the average values of the collative variables collected, to the point that the interventions of the human managed to remove the robot from locations high in novelty and complexity (Hiolle and Cañamero, 2008). Our results also suggested that neither of the extreme strategies of constant responsiveness and no responsiveness was ideal, since at the end of all our runs the robot had learned and classified all the encountered patterns, which kept its arousal always under the lowest threshold, with the effect of making the robot keep turning fast in the arena in a “bored state,” looking for new features to learn. Finding an appropriate trade-off between constant responsiveness and no responsiveness that could be suited to the environment in question thus required further investigation. In this paper we investigated how that trade-off could be achieved through the dynamics of mutual adaptation between the robot and the caregiver. In a second experimental setup (Hiolle et al., 2012), we tested the same architecture and embodiment with naive users. The subjects interacted with two robots having two different interaction profiles. These profiles differed, behaviorally, in the amount of human attention and help solicited by the robots as a strategy to regulate the duration of the effect of the comfort in the system. In the “needy” profile, the modulatory effect that the comfort provided by the human had on the level of arousal was short-lived. 
The results gathered in this second experiment demonstrated that the regulatory behavior produced by this robot led it to request and elicit human help more often. In the other profile, named “independent,” the modulatory effect that the comfort provided by the human had on the level of arousal lasted longer, leading the robot to explore the environment autonomously for longer following the caregiving responses from the humans during a short-term interaction. The self-rating data from the subjects also showed that, on average, the subjects preferred interacting with a “needy” profile, deemed more responsive. (A toy numerical sketch contrasting these two comfort-decay profiles is given after this record's reference information below.) Following up on these results, in this paper we endeavor to assess more precisely the influence that these regulatory profiles—“needy” and “non-needy”—have on the exploration of a new environment by the robot, assess how the responsiveness of the human can influence the interaction, and combine these two elements to endow the robot with the capability to adapt its affective arousal regulation strategies to the affective responsiveness of the human providing comfort. | [
"5490680",
"19330154",
"24410843",
"4643768",
"13190171",
"5260289",
"13610508",
"9306636",
"7984162",
"10617751",
"14395368",
"24115931",
"9287218",
"17292784",
"9579331",
"24062710",
"632477",
"19016474",
"10836570",
"8190826",
"12895666"
] | [
{
"pmid": "19330154",
"title": "The inverted \"u-shaped\" dose-effect relationships in learning and memory: modulation of arousal and consolidation.",
"abstract": "In the ample field of biological non-linear relationships there is also the inverted U-shaped dose-effect. In relation to cognitive functions, this phenomenon has been widely reported for many active compounds, in several learning paradigms, in several animal species and does not depend on either administration route (systemic or endocerebral) or administration time (before or after training). This review summarizes its most interesting aspects. The hypothesized mechanisms supporting it are reported and discussed, with particular emphasis on the participation of emotional arousal levels in the modulation of memory processes. Findings on the well documented relationship between stress, emotional arousal, peripheral epinephrine levels, cerebral norepinephrine levels and memory consolidation are reported. These are discussed and the need for further research is underlined."
},
{
"pmid": "24410843",
"title": "Emotional engagements predict and enhance social cognition in young chimpanzees.",
"abstract": "Social cognition in infancy is evident in coordinated triadic engagements, that is, infants attending jointly with social partners and objects. Current evolutionary theories of primate social cognition tend to highlight species differences in cognition based on human-unique cooperative motives. We consider a developmental model in which engagement experiences produce differential outcomes. We conducted a 10-year-long study in which two groups of laboratory-raised chimpanzee infants were given quantifiably different engagement experiences. Joint attention, cooperativeness, affect, and different levels of cognition were measured in 5- to 12-month-old chimpanzees, and compared to outcomes derived from a normative human database. We found that joint attention skills significantly improved across development for all infants, but by 12 months, the humans significantly surpassed the chimpanzees. We found that cooperativeness was stable in the humans, but by 12 months, the chimpanzee group given enriched engagement experiences significantly surpassed the humans. Past engagement experiences and concurrent affect were significant unique predictors of both joint attention and cooperativeness in 5- to 12-month-old chimpanzees. When engagement experiences and concurrent affect were statistically controlled, joint attention and cooperation were not associated. We explain differential social cognition outcomes in terms of the significant influences of previous engagement experiences and affect, in addition to cognition. Our study highlights developmental processes that underpin the emergence of social cognition in support of evolutionary continuity."
},
{
"pmid": "9306636",
"title": "Sensitivity and attachment: a meta-analysis on parental antecedents of infant attachment.",
"abstract": "This meta-analysis included 66 studies (N = 4,176) on parental antecedents of attachment security. The question addressed was whether maternal sensitivity is associated with infant attachment security, and what the strength of this relation is. It was hypothesized that studies more similar to Ainsworth's Baltimore study (Ainsworth, Blehar, Waters, & Wall, 1978) would show stronger associations than studies diverging from this pioneering study. To create conceptually homogeneous sets of studies, experts divided the studies into 9 groups with similar constructs and measures of parenting. For each domain, a meta-analysis was performed to describe the central tendency, variability, and relevant moderators. After correction for attenuation, the 21 studies (N = 1,099) in which the Strange Situation procedure in nonclinical samples was used, as well as preceding or concurrent observational sensitivity measures, showed a combined effect size of r(1,097) = .24. According to Cohen's (1988) conventional criteria, the association is moderately strong. It is concluded that in normal settings sensitivity is an important but not exclusive condition of attachment security. Several other dimensions of parenting are identified as playing an equally important role. In attachment theory, a move to the contextual level is required to interpret the complex transactions between context and sensitivity in less stable and more stressful settings, and to pay more attention to nonshared environmental influences."
},
{
"pmid": "7984162",
"title": "The effects of mother's physical and emotional unavailability on emotion regulation.",
"abstract": "In summary, emotion dysregulation can develop from brief or more prolonged separations from the mother as well as from the more disturbing effects of her emotional unavailability, such as occurs when she is depressed. Harmonious interaction with the mother or the primary caregiver (attunement) of the mother's physical unavailability were seen in studies of separations from the mother due to her hospitalization or to her conference trips. These separations affected the infants' play behaviors and sleep patterns. Comparisons between hospitalizations and conference trips, however, suggested that the infants' behaviors were more negatively affected by the hospitalizations than the conference trips. This probably related to these being hospitalizations for the birth of another baby--the infants no longer had the special, exclusive relationship with their mothers after the arrival of the new sibling. This finding highlights the critical importance of emotional availability. The mother had returned from the hospital, but, while she was no longer physically unavailable, she was now emotionally unavailable. Emotional unavailability was investigated in an acute form by comparing two laboratory situations, the still face paradigm and the momentary leave taking. The still face had more negative effects on the infants' interaction behaviors than the physical separation. The most extreme form of emotional unavailability, mother's depression, had the most negative effects. The disorganization or emotion dysregulation in this case is more prolonged. Changes in physiology (heart rate, vagal tone, and cortisol levels), in play behavior, affect, activity level, and sleep organization as well as other regulating functions such as eating and toileting, and even in the immune system persist for the duration of the mother's depression. My colleagues and I have suggested that these changes occur because the infant is being chronically deprived of an important external regulator of stimulation (the mother) and thus fails to develop emotion regulation or organized behavioral and physiological rhythms. Finally, individual differences were discussed, including those related to maturity (e.g., prematurity) and temperament/personality (e.g., uninhibited/inhibited or externalizing/internalizing) and those deriving from degree of mother-infant mismatch, such as dissimilar temperaments. Further investigations are needed to determine how long the effects of such early dysregulation endure, how they affect the infant's long-term development, how their effect differs across individuals and across development, and whether they can be modified by early intervention. Eventually, with increasing age, developing skills, and diversity of experience, infants develop individualized regulatory styles. That process, and how it is affected by the mother's physical and emotional unavailability, also requires further investigation."
},
{
"pmid": "10617751",
"title": "Skin-to-skin contact is analgesic in healthy newborns.",
"abstract": "OBJECTIVES\nTo determine whether skin-to-skin contact between mothers and their newborns will reduce the pain experienced by the infant during heel lance.\n\n\nDESIGN\nA prospective, randomized, controlled trial.\n\n\nSETTING\nBoston Medical Center, Boston, Massachusetts.\n\n\nPARTICIPANTS\nA total of 30 newborn infants were studied.\n\n\nINTERVENTIONS\nInfants were assigned randomly to either being held by their mothers in whole body, skin-to-skin contact or to no intervention (swaddled in crib) during a standard heel lance procedure.\n\n\nOUTCOME MEASURES\nThe effectiveness of the intervention was determined by comparing crying, grimacing, and heart rate differences between contact and control infants during and after blood collection.\n\n\nRESULTS\nCrying and grimacing were reduced by 82% and 65%, respectively, from control infant levels during the heel lance procedure. Heart rate also was reduced substantially by contact.\n\n\nCONCLUSION\nSkin-to-skin contact is a remarkably potent intervention against the pain experienced during heel stick in newborns."
},
{
"pmid": "24115931",
"title": "From self-assessment to frustration, a small step toward autonomy in robotic navigation.",
"abstract": "Autonomy and self-improvement capabilities are still challenging in the fields of robotics and machine learning. Allowing a robot to autonomously navigate in wide and unknown environments not only requires a repertoire of robust strategies to cope with miscellaneous situations, but also needs mechanisms of self-assessment for guiding learning and for monitoring strategies. Monitoring strategies requires feedbacks on the behavior's quality, from a given fitness system in order to take correct decisions. In this work, we focus on how a second-order controller can be used to (1) manage behaviors according to the situation and (2) seek for human interactions to improve skills. Following an incremental and constructivist approach, we present a generic neural architecture, based on an on-line novelty detection algorithm that may be able to self-evaluate any sensory-motor strategies. This architecture learns contingencies between sensations and actions, giving the expected sensation from the previous perception. Prediction error, coming from surprising events, provides a measure of the quality of the underlying sensory-motor contingencies. We show how a simple second-order controller (emotional system) based on the prediction progress allows the system to regulate its behavior to solve complex navigation tasks and also succeeds in asking for help if it detects dead-lock situations. We propose that this model could be a key structure toward self-assessment and autonomy. We made several experiments that can account for such properties for two different strategies (road following and place cells based navigation) in different situations."
},
{
"pmid": "9287218",
"title": "Maternal care, hippocampal glucocorticoid receptors, and hypothalamic-pituitary-adrenal responses to stress.",
"abstract": "Variations in maternal care affect the development of individual differences in neuroendocrine responses to stress in rats. As adults, the offspring of mothers that exhibited more licking and grooming of pups during the first 10 days of life showed reduced plasma adrenocorticotropic hormone and corticosterone responses to acute stress, increased hippocampal glucocorticoid receptor messenger RNA expression, enhanced glucocorticoid feedback sensitivity, and decreased levels of hypothalamic corticotropin-releasing hormone messenger RNA. Each measure was significantly correlated with the frequency of maternal licking and grooming (all r's > -0.6). These findings suggest that maternal behavior serves to \"program\" hypothalamic-pituitary-adrenal responses to stress in the offspring."
},
{
"pmid": "17292784",
"title": "Infant and parent factors associated with early maternal sensitivity: a caregiver-attachment systems approach.",
"abstract": "We examined variations in maternal sensitivity at 6 months of child age as a function of child negativity and maternal physiology. We expected maternal vagal withdrawal in response to infant negative affect to facilitate the maintenance of sensitivity, but only for mothers of securely attached children. One hundred and forty-eight infant-mother dyads were observed in multiple contexts at 6 months of child age, and associations among maternal and child variables were examined with respect to 12-month attachment quality. Mothers of later securely attached children were more sensitive than mothers of avoidant children. However, sensitivity decreased for all mothers at high levels of infant negative affect. Furthermore, for mothers of avoidant children, vagal withdrawal was associated with sensitivity to child distress. No association was found between vagal withdrawal and sensitivity for mothers of securely attached children. This suggests that mothers of avoidant children may be uniquely challenged by the affective demands of their infants."
},
{
"pmid": "9579331",
"title": "Brain substrates of infant-mother attachment: contributions of opioids, oxytocin, and norepinephrine.",
"abstract": "The aim of this paper is to review recent work concerning the psychobiological substrates of social bonding, focusing on the literature attributed to opioids, oxytocin and norepinephrine in rats. Existing evidence and thinking about the biological foundations of attachment in young mammalian species and the neurobiology of several other affiliative behaviors including maternal behavior, sexual behavior and social memory is reviewed. We postulate the existence of social motivation circuitry which is common to all mammals and consistent across development. Oxytocin, vasopressin, endogenous opioids and catecholamines appear to participate in a wide variety of affiliative behaviors and are likely to be important components in this circuitry. It is proposed that these same neurochemical and neuroanatomical patterns will emerge as key substrates in the neurobiology of infant attachments to their caregivers."
},
{
"pmid": "24062710",
"title": "A motivation model for interaction between parent and child based on the need for relatedness.",
"abstract": "In parent-child communication, emotions are evoked by various types of intrinsic and extrinsic motivation. Those emotions encourage actions that promote more interactions. We present a motivation model of infant-caregiver interactions, in which relatedness, one of the most important basic psychological needs, is a variable that increases with experiences of emotion sharing. Besides being an important factor of pleasure, relatedness is a meta-factor that affects other factors such as stress and emotional mirroring. The proposed model is implemented in an artificial agent equipped with a system to recognize gestures and facial expressions. The baby-like agent successfully interacts with an actual human and adversely reacts when the caregiver suddenly ceases facial expressions, similar to the \"still-face paradigm\" demonstrated by infants in psychological experiments. In the simulation experiment, two agents, each controlled by the proposed motivation model, show relatedness-dependent emotional communication that mimics actual human communication."
},
{
"pmid": "19016474",
"title": "Enhancement of attachment and cognitive development of young nursery-reared chimpanzees in responsive versus standard care.",
"abstract": "Forty-six nursery-reared chimpanzee infants (22 females and 24 males) receiving either standard care (n = 29) or responsive care (n = 17) at the Great Ape Nursery at Yerkes participated in this study. Standard care (ST) consisted primarily of peer-rearing, with humans providing essential health-related care. Responsive care (RC) consisted of an additional 4 hr of interaction 5 days a week with human caregivers who were specially trained to enhance species-typical chimpanzee socio-emotional and communicative development. At 9 months, ST and RC chimpanzees were examined with the Bayley Scales for Infant Development to assess their Mental Development Index (MDI). At 12 months, the chimpanzees were assessed with their human caregivers in the Ainsworth Strange Situation Procedure (SSP). In this first study to use the SSP in chimpanzees, nursery-reared chimpanzees exhibited the definite patterns of distress, proximity seeking, and exploration that underpin the SSP for human infants. In ST chimpanzees the attachment classification distribution was similar to that of human infants raised in Greek or Romanian orphanages. RC chimpanzees showed less disorganized attachment to their human caregivers, had a more advanced cognitive development, and displayed less object attachment compared to ST chimpanzees. Responsive care stimulates chimpanzees' cognitive and emotional development, and is an important factor in ameliorating some of the adverse effects of institutional care."
},
{
"pmid": "10836570",
"title": "A secure base from which to explore close relationships.",
"abstract": "The theory of attachment as a secure-base relationship integrates insights about affect, cognition, and behavior in close relationships across age and culture. Empirical successes based on this theory include important discoveries about the nature of infant-caregiver and adult-adult close relationships, the importance of early experience, and about stability and change in individual differences. The task now is to preserve these insights and successes and build on them. To accomplish this, we need to continually examine the logic and coherence of attachment theory and redress errors of emphasis and analysis. Views on attachment development, attachment representation, and attachment in family and cross-cultural perspective need to be updated in light of empirical research and advances in developmental theory, behavioral biology, and cognitive psychology. We also need to challenge the theory by formulating and testing hypotheses which, if not confirmed, would require significant changes to the theory. If we can accomplish these tasks, prospects for important developments in attachment theory and research are greater than ever, as are the prospects for integration with other disciplines."
},
{
"pmid": "8190826",
"title": "The development of attachment: from control system to working models.",
"abstract": "After two decades of theoretical and descriptive work, we know a great deal about the developmental course of early attachment relationships. We know considerably less about the mechanisms underlying consistency and change. Indeed, the most pressing issue in attachment theory is to explain well-replicated correlations between early care and subsequent patterns of secure base behavior, and between secure base behavior in infancy and subsequent behavior with parents and siblings, social competence, self-esteem, and behavior problems. As a step in this direction, we examine Bowlby's developmental outline, with an eye toward providing greater detail and incorporating traditional learning mechanisms into Bowlby's attachment theory."
},
{
"pmid": "12895666",
"title": "Emotion regulation and touch in infants: the role of cholecystokinin and opioids.",
"abstract": "Behavioral-pharmacological research in infant rats supports the role of cholecystokinin (CCK) and opioid peptides in mediating early learning of new associations with aspects of the nest and dam, such as maternal odor, milk, and contact. The current paper reviews research that examines the hypothesis that these neuropeptide systems are further involved in mediating emotion regulation in infants, thus playing a role in the emergence of stress-reactivity and other motivational systems. The beneficial effects of maternal proximity, handling, and touch on the development of emotion regulation have been demonstrated in both human and animal models. Interventions that promote tactile stimulation of the infant (\"touch therapy\") and infant-mother contact (\"skin-to-skin contact\" or \"kangaroo care\") have been shown to improve the infant's ability to self-regulate, and to moderate the effects of some risk factors. Theoretical perspectives and empirical findings regarding emotion regulation in infants are first discussed. This is followed by a review of work providing evidence in animal models (and suggestive evidence in humans) for the importance of CCK and opioid neuropeptides in affecting infant emotion regulation and the impact of touch-based interventions, in particular in the context of infant-mother attraction, contact, separation, and attachment."
}
] |
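The “needy”/“independent” contrast described in the record above reduces to how quickly the comforting effect of a caregiver response wears off relative to the arousal raised by novel stimuli. The following toy simulation is not the architecture used in that work; it is a minimal sketch, with made-up dynamics and constants, of how a fast comfort decay (“needy”) leads an agent to solicit help more often than a slow one (“independent”).

```python
import random

def simulate(comfort_decay, steps=200, arousal_threshold=0.8, seed=0):
    """Toy arousal loop: novelty raises arousal, caregiver comfort lowers it.

    comfort_decay is the fraction of the comfort signal lost per step; all
    values and update rules here are illustrative assumptions only.
    """
    rng = random.Random(seed)
    arousal, comfort, help_requests = 0.0, 0.0, 0
    for _ in range(steps):
        novelty = rng.random() * 0.2        # stress from exploring new stimuli
        comfort *= (1.0 - comfort_decay)    # comforting effect wears off
        arousal = max(0.0, min(1.0, arousal + novelty - comfort))
        if arousal > arousal_threshold:     # solicit the caregiver
            help_requests += 1
            comfort = 0.5                   # assume the caregiver responds
            arousal *= 0.5
    return help_requests

print("fast decay ('needy'):      ", simulate(comfort_decay=0.5))
print("slow decay ('independent'):", simulate(comfort_decay=0.05))
```

With the same novelty stream, the fast-decay agent crosses the threshold, and therefore requests help, noticeably more often, which is the behavioral signature the two robot profiles are reported to differ on.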
Frontiers in Neuroinformatics | 24904400 | PMC4033081 | 10.3389/fninf.2014.00054 | CBRAIN: a web-based, distributed computing platform for collaborative neuroimaging research | The Canadian Brain Imaging Research Platform (CBRAIN) is a web-based collaborative research platform developed in response to the challenges raised by data-heavy, compute-intensive neuroimaging research. CBRAIN offers transparent access to remote data sources, distributed computing sites, and an array of processing and visualization tools within a controlled, secure environment. Its web interface is accessible through any modern browser and uses graphical interface idioms to reduce the technical expertise required to perform large-scale computational analyses. CBRAIN's flexible meta-scheduling has allowed the incorporation of a wide range of heterogeneous computing sites, currently including nine national research High Performance Computing (HPC) centers in Canada, one in Korea, one in Germany, and several local research servers. CBRAIN leverages remote computing cycles and facilitates resource-interoperability in a transparent manner for the end-user. Compared with typical grid solutions available, our architecture was designed to be easily extendable and deployed on existing remote computing sites with no tool modification, administrative intervention, or special software/hardware configuration. As October 2013, CBRAIN serves over 200 users spread across 53 cities in 17 countries. The platform is built as a generic framework that can accept data and analysis tools from any discipline. However, its current focus is primarily on neuroimaging research and studies of neurological diseases such as Autism, Parkinson's and Alzheimer's diseases, Multiple Sclerosis as well as on normal brain structure and development. This technical report presents the CBRAIN Platform, its current deployment and usage and future direction. | Related workThe CBRAIN platform incorporates the key aspects of a grid middleware, namely security (Authentication, Authorization, Accounting—AAA), distributed file management, and job execution on multiple distributed sites. Grid middleware has received a lot of attention in the last 15 years (Foster and Kesselman, 2003), and resulting technologies and concepts are now used in large computing infrastructures such as the Open-Science Grid (Pordes et al., 2007), Teragrid (Catlett, 2002), and the European Grid Infrastructure (Kranzlmüller et al., 2010). CBRAIN is unique in the sense that it integrates all these functions in a single, consistent, lightweight, self-contained, independent framework that is therefore easily administrated and extended. For example, grid security usually relies on X509 certificates signed by trusted authorities, from which time-limited proxy certificates are generated, delegated to the services involved in the platform, and used to authenticate all user operations, for instance job execution and data transfers (Foster et al., 1998). In practice, this mechanism burdens users with the handling of certificates, restricts the range of usable technologies, generates user-specific errors, and complicates debugging. To avoid these issues, CBRAIN decouples user AAA from system AAA: users authenticate to the portal with straightforward login and password, while the portal handles data and computing authorizations, and then authenticates to the services using a single or a few group credentials. 
Such a decoupled approach is being adopted more broadly by portals using so-called robot X509 authentication to infrastructure services (Barbera et al., 2009). Distributed file management commonly consists of a logical layer providing a uniform view of physical storage distributed over the infrastructure. CBRAIN's file metadata contain similar information to that stored in grid file catalogs, for instance the LCG File Catalog (Baud et al., 2005) or the Globus RLS (Chervenak et al., 2009). However, CBRAIN's file transfer architecture notably differs from the main grid solutions: (i) its throttled data transfer model avoids overloading storage providers, a problem commonly observed in grid infrastructures and addressed in a similar way by the Advanced Resource Connector (Ellert et al., 2007); (ii) it caches files on the computing sites, a feature only provided by a few grid middleware systems and often implemented at the application level. Job execution on multiple distributed computing sites is performed either by a meta-scheduler which dispatches jobs to the different sites (Huedo et al., 2001; Andreetto et al., 2008) or by pilot-job approaches provisioning computing resources with generic agents that pull tasks from a central queue when they reach a computing node (Frey et al., 2002; Brook et al., 2003). In neuroimaging, however, due to variations in software and/or libraries, the execution site often has to be controlled by the users to guarantee the correctness and reproducibility of computations (Gronenschild et al., 2012). This is why CBRAIN usually delegates site selection to the users, providing them with historical information about queuing times. The matchmaking between tasks and resources, which involves elaborate resource descriptions when performed by a generic grid middleware (Andreetto et al., 2010), is done statically by CBRAIN administrators who map application versions to sites based on their knowledge of the infrastructure. The decision to develop SCIR as a streamlined meta-scheduler that abstracts scheduler differences away from the core platform was based on pragmatic cross-site deployment experience. Libraries with similar goals do exist, but they did not demonstrate enough agility and flexibility for the HPC landscape we faced. The DRMAA (Tröger et al., 2007) and SAGA (Jha et al., 2007) projects, from the Open Grid Forum Working Group, were just emerging standards at the time of the initial CBRAIN deployment. DRMAA is a universal scheduler API library that was used in earlier versions of CBRAIN. Unfortunately, in our experience, although the library defines a fairly complete low-level API, the modules that actually interact with the cluster job schedulers were found to leave certain scheduler versions unsupported and were not designed to be easily extended for interaction with in-house schedulers. Our objectives of a low footprint and flexibility run contrary to dictating scheduler requirements to a diverse array of computing sites, so we created a library suited to our specific needs.
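The scheduler-abstraction layer described above is essentially a thin adapter: the portal sees one uniform submit/status interface, while each subclass encapsulates the commands and limits of a particular site. The sketch below illustrates that pattern in Python; the class and method names are hypothetical and do not correspond to CBRAIN's actual SCIR API.

```python
from abc import ABC, abstractmethod
import subprocess

class SchedulerAdapter(ABC):
    """Uniform job-control interface; one subclass per computing site.

    Hypothetical sketch only -- not the actual SCIR library.
    """

    def __init__(self, max_queued=50, cache_dir="/tmp/task_cache"):
        # Per-site policies would be configured here.
        self.max_queued = max_queued
        self.cache_dir = cache_dir

    @abstractmethod
    def submit(self, script_path: str) -> str:
        """Submit a job script and return a site-specific job identifier."""

    @abstractmethod
    def status(self, job_id: str) -> str:
        """Return a normalized state such as 'running' or 'finished'."""

class LocalAdapter(SchedulerAdapter):
    """Runs tasks as local processes; a cluster site would instead wrap
    its batch scheduler's submission and query commands."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        self._procs = {}

    def submit(self, script_path):
        proc = subprocess.Popen(["bash", script_path])
        self._procs[str(proc.pid)] = proc
        return str(proc.pid)

    def status(self, job_id):
        return "running" if self._procs[job_id].poll() is None else "finished"
```

The core platform would then only ever call submit() and status(), so adding a new site amounts to writing one such subclass rather than touching the portal.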
Other similar leading platforms providing access to neuroimaging applications executed on distributed infrastructures are LONI (Dinov et al., 2010), neuGRID (Redolfi et al., 2009), and A-Brain (Antoniu et al., 2012). While sharing similar overall goals, each platform often uses radically different approaches and philosophies, allowing it to excel in specific niches. For example, LONI offers an advanced and flexible graphical workflow builder that has, to our knowledge, no equivalent in the field. Within CBRAIN, our team took the design decision of supporting only mature, validated workflows as needs arise from our community. CBRAIN users are free to launch any tools or pipelines they have access to, but cannot create and share an automated workflow using multiple tools, the way it would be done in LONI, without contacting the core team. This has the advantage of preventing failures and waste of resources and of enforcing staged validation and quality control; however, it does limit the rate of automated workflow integration and flexibility for the users. NeuGRID has a strong remote desktop component capable of providing remote users with native data visualization applications (centralized approach), whereas CBRAIN handles all visualization applications through web-based applications (decentralized approach). These two approaches to the same problem have different characteristics: while the centralized approach provides users with familiar applications in their native mode, supporting usage growth can require large infrastructure investments. The decentralized approach uses very light infrastructure to push modern HTML5 applications to large numbers of remote clients, respecting the CBRAIN scalability philosophy; however, these applications have to be web-compatible or developed anew. The A-Brain platform has done extensive work on low-latency data-intensive processing by building an optimized prototype MapReduce framework for Microsoft's Azure cloud platform on the basis of TomusBlobs (Costan et al., 2013). In comparison, CBRAIN focused on a lightweight, flexible, and low-footprint catalog and data grid mechanism that acts as a transparent interface for regular multi-site batch-type projects. While it is clear that the CBRAIN grid cannot move and process multi-terabyte studies with the same ease as A-Brain, our goal was to ensure that all user sites can securely integrate their own data repositories into our grid with a minimum of requirements. This leads to a mix of faster and slower storage segments, which CBRAIN manages asynchronously with its caching mechanism. Most of our large imaging projects, with thousands of subjects representing hundreds of gigabytes of data, can be processed as-is with the CBRAIN grid. Some multi-terabyte, data-intensive projects, such as our 3D histological reconstruction (Amunts et al., 2013), required special infrastructure for processing and visualization. The modular plugin approach used to develop many of CBRAIN's components makes the platform easily extensible. New data providers, execution servers, visualization tools and other components can be added to the platform with a minimal investment of time and effort. On a deeper level, a small investment in development time can extend the base data provider and SCIR APIs to allow compatibility with new types of storage and cluster management. As an example, our team has begun experimenting with the integration of Amazon's S3 cloud as a data provider. 
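As an illustration of the plugin-style, cache-aware data-provider concept described above, the following is a minimal sketch of how such an extensible interface could be structured. All class and method names (DataProvider, LocalPathProvider, fetch, sync_to_cache) are hypothetical and are not the actual CBRAIN data-provider API; a cloud back-end such as Amazon S3 is only indicated as a possible further subclass.

```python
# Hypothetical sketch of a plugin-style, cache-aware data-provider interface
# (illustrative only; not the actual CBRAIN data-provider API).
import shutil
from abc import ABC, abstractmethod
from pathlib import Path


class DataProvider(ABC):
    """Base class a new storage back-end subclasses to join the platform."""

    def __init__(self, cache_dir: str):
        self.cache_dir = Path(cache_dir)
        self.cache_dir.mkdir(parents=True, exist_ok=True)

    @abstractmethod
    def fetch(self, name: str, dest: Path) -> None:
        """Copy one remote file into 'dest' on the execution site."""

    def sync_to_cache(self, name: str) -> Path:
        """Return a locally cached copy, fetching it only on a cache miss."""
        cached = self.cache_dir / name
        if not cached.exists():
            self.fetch(name, cached)
        return cached


class LocalPathProvider(DataProvider):
    """Back-end for a storage area mounted on the local filesystem."""

    def __init__(self, root: str, cache_dir: str):
        super().__init__(cache_dir)
        self.root = Path(root)

    def fetch(self, name: str, dest: Path) -> None:
        shutil.copy2(self.root / name, dest)


# A cloud back-end (e.g. Amazon S3) would subclass DataProvider in the same
# way and implement fetch() with an object-storage client; omitted here.
```

The point of the sketch is that caching behavior lives in the base class, so every new storage type added as a plugin automatically benefits from the same site-local cache.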
CBRAIN as a meta-scheduler does more than provide a uniform API to the heterogeneous scheduling systems of the various sites; it handles maximum queue allocations, node vs. core scheduling, maximum load per node, specific environment variables, cache locations, and data transfer tools/protocols on a per-site basis. The platform excels at bridging the gap in common standards between existing cyber-infrastructures, providing transparent access to grids, public HPC sites, and private infrastructure through a single common framework. 
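The per-site tuning enumerated above can be pictured as a small configuration record that a meta-scheduler consults before each submission. The sketch below is only an assumed illustration of that idea; the field names and the throttling helper are hypothetical and do not correspond to the actual CBRAIN configuration schema.

```python
# Hypothetical per-site configuration record consulted before submitting a
# task (illustrative field names only; not the CBRAIN schema).
from dataclasses import dataclass, field
from typing import Dict


@dataclass
class SiteConfig:
    max_queued_jobs: int        # cap on jobs the portal keeps queued at this site
    schedule_by_node: bool      # True = whole-node scheduling, False = per-core
    max_tasks_per_node: int     # load ceiling when packing tasks onto one node
    cache_dir: str              # where the data-provider cache lives on that site
    transfer_tool: str          # e.g. "rsync+ssh" or "sftp"
    env: Dict[str, str] = field(default_factory=dict)  # site-specific environment


def can_submit(cfg: SiteConfig, currently_queued: int) -> bool:
    """Throttle submissions so a site never exceeds its queue allocation."""
    return currently_queued < cfg.max_queued_jobs


# Example: a site that schedules whole nodes, at most eight tasks per node.
site = SiteConfig(max_queued_jobs=200, schedule_by_node=True,
                  max_tasks_per_node=8, cache_dir="/scratch/cbrain_cache",
                  transfer_tool="rsync+ssh",
                  env={"MODULEPATH": "/opt/modules"})
assert can_submit(site, currently_queued=42)
```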
"23788795",
"19534747",
"19265007",
"15879210",
"23230161",
"22319489",
"20927408",
"19649168",
"21727938",
"23014715",
"21897815",
"22675527",
"21979382",
"21249532",
"23732878",
"15896981",
"16624590",
"18519166",
"24139651",
"20371412",
"22430496",
"12880830",
"24113873"
] | [
{
"pmid": "23788795",
"title": "BigBrain: an ultrahigh-resolution 3D human brain model.",
"abstract": "Reference brains are indispensable tools in human brain mapping, enabling integration of multimodal data into an anatomically realistic standard space. Available reference brains, however, are restricted to the macroscopic scale and do not provide information on the functionally important microscopic dimension. We created an ultrahigh-resolution three-dimensional (3D) model of a human brain at nearly cellular resolution of 20 micrometers, based on the reconstruction of 7404 histological sections. \"BigBrain\" is a free, publicly available tool that provides considerable neuroanatomical insight into the human brain, thereby allowing the extraction of microscopic data for modeling and simulation. BigBrain enables testing of hypotheses on optimal path lengths between interconnected cortical regions or on spatial organization of genetic patterning, redefining the traditional neuroanatomy maps such as those of Brodmann and von Economo."
},
{
"pmid": "19534747",
"title": "The GENIUS Grid Portal and robot certificates: a new tool for e-Science.",
"abstract": "BACKGROUND\nGrid technology is the computing model which allows users to share a wide pletora of distributed computational resources regardless of their geographical location. Up to now, the high security policy requested in order to access distributed computing resources has been a rather big limiting factor when trying to broaden the usage of Grids into a wide community of users. Grid security is indeed based on the Public Key Infrastructure (PKI) of X.509 certificates and the procedure to get and manage those certificates is unfortunately not straightforward. A first step to make Grids more appealing for new users has recently been achieved with the adoption of robot certificates.\n\n\nMETHODS\nRobot certificates have recently been introduced to perform automated tasks on Grids on behalf of users. They are extremely useful for instance to automate grid service monitoring, data processing production, distributed data collection systems. Basically these certificates can be used to identify a person responsible for an unattended service or process acting as client and/or server. Robot certificates can be installed on a smart card and used behind a portal by everyone interested in running the related applications in a Grid environment using a user-friendly graphic interface. In this work, the GENIUS Grid Portal, powered by EnginFrame, has been extended in order to support the new authentication based on the adoption of these robot certificates.\n\n\nRESULTS\nThe work carried out and reported in this manuscript is particularly relevant for all users who are not familiar with personal digital certificates and the technical aspects of the Grid Security Infrastructure (GSI). The valuable benefits introduced by robot certificates in e-Science can so be extended to users belonging to several scientific domains, providing an asset in raising Grid awareness to a wide number of potential users.\n\n\nCONCLUSION\nThe adoption of Grid portals extended with robot certificates, can really contribute to creating transparent access to computational resources of Grid Infrastructures, enhancing the spread of this new paradigm in researchers' working life to address new global scientific challenges. The evaluated solution can of course be extended to other portals, applications and scientific communities."
},
{
"pmid": "15879210",
"title": "Cyberinfrastructure: empowering a \"third way\" in biomedical research.",
"abstract": "Biomedicine has experienced explosive growth, fueled in parts by the substantial increase of government support, continued development of the biotechnology industry, and the increasing adoption of molecular-based medicine. At its core, it is composed of fiercely independent, innovative, entrepreneurial individuals, organizations, and institutions. The field has developed unprecedented capacity to characterize biologic systems at their most fundamental levels with the use of tools and technologies almost unimaginable a generation ago. Biomedicine is at the precipice of unlocking the very essence of biologic life and enabling a new generation of medicine. Development and deployment of cyberinfrastructure may prove to be on the critical path to obtaining these goals."
},
{
"pmid": "23230161",
"title": "Developing cloud applications using the e-Science Central platform.",
"abstract": "This paper describes the e-Science Central (e-SC) cloud data processing system and its application to a number of e-Science projects. e-SC provides both software as a service (SaaS) and platform as a service for scientific data management, analysis and collaboration. It is a portable system and can be deployed on both private (e.g. Eucalyptus) and public clouds (Amazon AWS and Microsoft Windows Azure). The SaaS application allows scientists to upload data, edit and run workflows and share results in the cloud, using only a Web browser. It is underpinned by a scalable cloud platform consisting of a set of components designed to support the needs of scientists. The platform is exposed to developers so that they can easily upload their own analysis services into the system and make these available to other users. A representational state transfer-based application programming interface (API) is also provided so that external applications can leverage the platform's functionality, making it easier to build scalable, secure cloud-based applications. This paper describes the design of e-SC, its API and its use in three different case studies: spectral data visualization, medical data capture and analysis, and chemical property prediction."
},
{
"pmid": "22319489",
"title": "LORIS: a web-based data management system for multi-center studies.",
"abstract": "Longitudinal Online Research and Imaging System (LORIS) is a modular and extensible web-based data management system that integrates all aspects of a multi-center study: from heterogeneous data acquisition (imaging, clinical, behavior, and genetics) to storage, processing, and ultimately dissemination. It provides a secure, user-friendly, and streamlined platform to automate the flow of clinical trials and complex multi-center studies. A subject-centric internal organization allows researchers to capture and subsequently extract all information, longitudinal or cross-sectional, from any subset of the study cohort. Extensive error-checking and quality control procedures, security, data management, data querying, and administrative functions provide LORIS with a triple capability (1) continuous project coordination and monitoring of data acquisition (2) data storage/cleaning/querying, (3) interface with arbitrary external data processing \"pipelines.\" LORIS is a complete solution that has been thoroughly tested through a full 10 year life cycle of a multi-center longitudinal project and is now supporting numerous international neurodevelopment and neurodegeneration research projects."
},
{
"pmid": "20927408",
"title": "Neuroimaging study designs, computational analyses and data provenance using the LONI pipeline.",
"abstract": "Modern computational neuroscience employs diverse software tools and multidisciplinary expertise to analyze heterogeneous brain data. The classical problems of gathering meaningful data, fitting specific models, and discovering appropriate analysis and visualization tools give way to a new class of computational challenges--management of large and incongruous data, integration and interoperability of computational resources, and data provenance. We designed, implemented and validated a new paradigm for addressing these challenges in the neuroimaging field. Our solution is based on the LONI Pipeline environment [3], [4], a graphical workflow environment for constructing and executing complex data processing protocols. We developed study-design, database and visual language programming functionalities within the LONI Pipeline that enable the construction of complete, elaborate and robust graphical workflows for analyzing neuroimaging and other data. These workflows facilitate open sharing and communication of data and metadata, concrete processing protocols, result validation, and study replication among different investigators and research groups. The LONI Pipeline features include distributed grid-enabled infrastructure, virtualized execution environment, efficient integration, data provenance, validation and distribution of new computational tools, automated data format conversion, and an intuitive graphical user interface. We demonstrate the new LONI Pipeline features using large scale neuroimaging studies based on data from the International Consortium for Brain Mapping [5] and the Alzheimer's Disease Neuroimaging Initiative [6]. User guides, forums, instructions and downloads of the LONI Pipeline environment are available at http://pipeline.loni.ucla.edu."
},
{
"pmid": "19649168",
"title": "Efficient, Distributed and Interactive Neuroimaging Data Analysis Using the LONI Pipeline.",
"abstract": "The LONI Pipeline is a graphical environment for construction, validation and execution of advanced neuroimaging data analysis protocols (Rex et al., 2003). It enables automated data format conversion, allows Grid utilization, facilitates data provenance, and provides a significant library of computational tools. There are two main advantages of the LONI Pipeline over other graphical analysis workflow architectures. It is built as a distributed Grid computing environment and permits efficient tool integration, protocol validation and broad resource distribution. To integrate existing data and computational tools within the LONI Pipeline environment, no modification of the resources themselves is required. The LONI Pipeline provides several types of process submissions based on the underlying server hardware infrastructure. Only workflow instructions and references to data, executable scripts and binary instructions are stored within the LONI Pipeline environment. This makes it portable, computationally efficient, distributed and independent of the individual binary processes involved in pipeline data-analysis workflows. We have expanded the LONI Pipeline (V.4.2) to include server-to-server (peer-to-peer) communication and a 3-tier failover infrastructure (Grid hardware, Sun Grid Engine/Distributed Resource Management Application API middleware, and the Pipeline server). Additionally, the LONI Pipeline provides three layers of background-server executions for all users/sites/systems. These new LONI Pipeline features facilitate resource-interoperability, decentralized computing, construction and validation of efficient and robust neuroimaging data-analysis workflows. Using brain imaging data from the Alzheimer's Disease Neuroimaging Initiative (Mueller et al., 2005), we demonstrate integration of disparate resources, graphical construction of complex neuroimaging analysis protocols and distributed parallel computing. The LONI Pipeline, its features, specifications, documentation and usage are available online (http://Pipeline.loni.ucla.edu)."
},
{
"pmid": "21727938",
"title": "Virtual imaging laboratories for marker discovery in neurodegenerative diseases.",
"abstract": "The unprecedented growth, availability and accessibility of imaging data from people with neurodegenerative conditions has led to the development of computational infrastructures, which offer scientists access to large image databases and e-Science services such as sophisticated image analysis algorithm pipelines and powerful computational resources, as well as three-dimensional visualization and statistical tools. Scientific e-infrastructures have been and are being developed in Europe and North America that offer a suite of services for computational neuroscientists. The convergence of these initiatives represents a worldwide infrastructure that will constitute a global virtual imaging laboratory. This will provide computational neuroscientists with a virtual space that is accessible through an ordinary web browser, where image data sets and related clinical variables, algorithm pipelines, computational resources, and statistical and visualization tools will be transparently accessible to users irrespective of their physical location. Such an experimental environment will be instrumental to the success of ambitious scientific initiatives with high societal impact, such as the prevention of Alzheimer disease. In this article, we provide an overview of the currently available e-infrastructures and consider how computational neuroscience in neurodegenerative disease might evolve in the future."
},
{
"pmid": "23014715",
"title": "A virtual imaging platform for multi-modality medical image simulation.",
"abstract": "This paper presents the Virtual Imaging Platform (VIP), a platform accessible at http://vip.creatis.insa-lyon.fr to facilitate the sharing of object models and medical image simulators, and to provide access to distributed computing and storage resources. A complete overview is presented, describing the ontologies designed to share models in a common repository, the workflow template used to integrate simulators, and the tools and strategies used to exploit computing and storage resources. Simulation results obtained in four image modalities and with different models show that VIP is versatile and robust enough to support large simulations. The platform currently has 200 registered users who consumed 33 years of CPU time in 2011."
},
{
"pmid": "21897815",
"title": "Nipype: a flexible, lightweight and extensible neuroimaging data processing framework in python.",
"abstract": "Current neuroimaging software offer users an incredible opportunity to analyze their data in different ways, with different underlying assumptions. Several sophisticated software packages (e.g., AFNI, BrainVoyager, FSL, FreeSurfer, Nipy, R, SPM) are used to process and analyze large and often diverse (highly multi-dimensional) data. However, this heterogeneous collection of specialized applications creates several issues that hinder replicable, efficient, and optimal use of neuroimaging analysis approaches: (1) No uniform access to neuroimaging analysis software and usage information; (2) No framework for comparative algorithm development and dissemination; (3) Personnel turnover in laboratories often limits methodological continuity and training new personnel takes time; (4) Neuroimaging software packages do not address computational efficiency; and (5) Methods sections in journal articles are inadequate for reproducing results. To address these issues, we present Nipype (Neuroimaging in Python: Pipelines and Interfaces; http://nipy.org/nipype), an open-source, community-developed, software package, and scriptable library. Nipype solves the issues by providing Interfaces to existing neuroimaging software with uniform usage semantics and by facilitating interaction between these packages using Workflows. Nipype provides an environment that encourages interactive exploration of algorithms, eases the design of Workflows within and between packages, allows rapid comparative development of algorithms and reduces the learning curve necessary to use different packages. Nipype supports both local and remote execution on multi-core machines and clusters, without additional scripting. Nipype is Berkeley Software Distribution licensed, allowing anyone unrestricted usage. An open, community-driven development philosophy allows the software to quickly adapt and address the varied needs of the evolving neuroimaging community, especially in the context of increasing demand for reproducible research."
},
{
"pmid": "22675527",
"title": "The effects of FreeSurfer version, workstation type, and Macintosh operating system version on anatomical volume and cortical thickness measurements.",
"abstract": "FreeSurfer is a popular software package to measure cortical thickness and volume of neuroanatomical structures. However, little if any is known about measurement reliability across various data processing conditions. Using a set of 30 anatomical T1-weighted 3T MRI scans, we investigated the effects of data processing variables such as FreeSurfer version (v4.3.1, v4.5.0, and v5.0.0), workstation (Macintosh and Hewlett-Packard), and Macintosh operating system version (OSX 10.5 and OSX 10.6). Significant differences were revealed between FreeSurfer version v5.0.0 and the two earlier versions. These differences were on average 8.8 ± 6.6% (range 1.3-64.0%) (volume) and 2.8 ± 1.3% (1.1-7.7%) (cortical thickness). About a factor two smaller differences were detected between Macintosh and Hewlett-Packard workstations and between OSX 10.5 and OSX 10.6. The observed differences are similar in magnitude as effect sizes reported in accuracy evaluations and neurodegenerative studies.The main conclusion is that in the context of an ongoing study, users are discouraged to update to a new major release of either FreeSurfer or operating system or to switch to a different type of workstation without repeating the analysis; results thus give a quantitative support to successive recommendations stated by FreeSurfer developers over the years. Moreover, in view of the large and significant cross-version differences, it is concluded that formal assessment of the accuracy of FreeSurfer is desirable."
},
{
"pmid": "21979382",
"title": "FSL.",
"abstract": "FSL (the FMRIB Software Library) is a comprehensive library of analysis tools for functional, structural and diffusion MRI brain imaging data, written mainly by members of the Analysis Group, FMRIB, Oxford. For this NeuroImage special issue on \"20 years of fMRI\" we have been asked to write about the history, developments and current status of FSL. We also include some descriptions of parts of FSL that are not well covered in the existing literature. We hope that some of this content might be of interest to users of FSL, and also maybe to new research groups considering creating, releasing and supporting new software packages for brain image analysis."
},
{
"pmid": "21249532",
"title": "Unified framework for development, deployment and robust testing of neuroimaging algorithms.",
"abstract": "Developing both graphical and command-line user interfaces for neuroimaging algorithms requires considerable effort. Neuroimaging algorithms can meet their potential only if they can be easily and frequently used by their intended users. Deployment of a large suite of such algorithms on multiple platforms requires consistency of user interface controls, consistent results across various platforms and thorough testing. We present the design and implementation of a novel object-oriented framework that allows for rapid development of complex image analysis algorithms with many reusable components and the ability to easily add graphical user interface controls. Our framework also allows for simplified yet robust nightly testing of the algorithms to ensure stability and cross platform interoperability. All of the functionality is encapsulated into a software object requiring no separate source code for user interfaces, testing or deployment. This formulation makes our framework ideal for developing novel, stable and easy-to-use algorithms for medical image analysis and computer assisted interventions. The framework has been both deployed at Yale and released for public use in the open source multi-platform image analysis software--BioImage Suite (bioimagesuite.org)."
},
{
"pmid": "23732878",
"title": "Childhood cognitive ability accounts for associations between cognitive ability and brain cortical thickness in old age.",
"abstract": "Associations between brain cortical tissue volume and cognitive function in old age are frequently interpreted as suggesting that preservation of cortical tissue is the foundation of successful cognitive aging. However, this association could also, in part, reflect a lifelong association between cognitive ability and cortical tissue. We analyzed data on 588 subjects from the Lothian Birth Cohort 1936 who had intelligence quotient (IQ) scores from the same cognitive test available at both 11 and 70 years of age as well as high-resolution brain magnetic resonance imaging data obtained at approximately 73 years of age. Cortical thickness was estimated at 81 924 sampling points across the cortex for each subject using an automated pipeline. Multiple regression was used to assess associations between cortical thickness and the IQ measures at 11 and 70 years. Childhood IQ accounted for more than two-third of the association between IQ at 70 years and cortical thickness measured at age 73 years. This warns against ascribing a causal interpretation to the association between cognitive ability and cortical tissue in old age based on assumptions about, and exclusive reference to, the aging process and any associated disease. Without early-life measures of cognitive ability, it would have been tempting to conclude that preservation of cortical thickness in old age is a foundation for successful cognitive aging when, instead, it is a lifelong association. This being said, results should not be construed as meaning that all studies on aging require direct measures of childhood IQ, but as suggesting that proxy measures of prior cognitive function can be useful to take into consideration."
},
{
"pmid": "15896981",
"title": "Automated 3-D extraction and evaluation of the inner and outer cortical surfaces using a Laplacian map and partial volume effect classification.",
"abstract": "Accurate reconstruction of the inner and outer cortical surfaces of the human cerebrum is a critical objective for a wide variety of neuroimaging analysis purposes, including visualization, morphometry, and brain mapping. The Anatomic Segmentation using Proximity (ASP) algorithm, previously developed by our group, provides a topology-preserving cortical surface deformation method that has been extensively used for the aforementioned purposes. However, constraints in the algorithm to ensure topology preservation occasionally produce incorrect thickness measurements due to a restriction in the range of allowable distances between the gray and white matter surfaces. This problem is particularly prominent in pediatric brain images with tightly folded gyri. This paper presents a novel method for improving the conventional ASP algorithm by making use of partial volume information through probabilistic classification in order to allow for topology preservation across a less restricted range of cortical thickness values. The new algorithm also corrects the classification of the insular cortex by masking out subcortical tissues. For 70 pediatric brains, validation experiments for the modified algorithm, Constrained Laplacian ASP (CLASP), were performed by three methods: (i) volume matching between surface-masked gray matter (GM) and conventional tissue-classified GM, (ii) surface matching between simulated and CLASP-extracted surfaces, and (iii) repeatability of the surface reconstruction among 16 MRI scans of the same subject. In the volume-based evaluation, the volume enclosed by the CLASP WM and GM surfaces matched the classified GM volume 13% more accurately than using conventional ASP. In the surface-based evaluation, using synthesized thick cortex, the average difference between simulated and extracted surfaces was 4.6 +/- 1.4 mm for conventional ASP and 0.5 +/- 0.4 mm for CLASP. In a repeatability study, CLASP produced a 30% lower RMS error for the GM surface and a 8% lower RMS error for the WM surface compared with ASP."
},
{
"pmid": "16624590",
"title": "Mapping anatomical correlations across cerebral cortex (MACACC) using cortical thickness from MRI.",
"abstract": "We introduce MACACC-Mapping Anatomical Correlations Across Cerebral Cortex-to study correlated changes within and across different cortical networks. The principal topic of investigation is whether the thickness of one area of the cortex changes in a statistically correlated fashion with changes in thickness of other cortical regions. We further extend these methods by introducing techniques to test whether different population groupings exhibit significantly varying MACACC patterns. The methods are described in detail and applied to a normal childhood development population (n = 292), and show that association cortices have the highest correlation strengths. Taking Brodmann Area (BA) 44 as a seed region revealed MACACC patterns strikingly similar to tractography maps obtained from diffusion tensor imaging. Furthermore, the MACACC map of BA 44 changed with age, older subjects featuring tighter correlations with BA 44 in the anterior portions of the superior temporal gyri. Lastly, IQ-dependent MACACC differences were investigated, revealing steeper correlations between BA 44 and multiple frontal and parietal regions for the higher IQ group, most significantly (t = 4.0) in the anterior cingulate."
},
{
"pmid": "18519166",
"title": "Provenance in neuroimaging.",
"abstract": "Provenance, the description of the history of a set of data, has grown more important with the proliferation of research consortia-related efforts in neuroimaging. Knowledge about the origin and history of an image is crucial for establishing data and results quality; detailed information about how it was processed, including the specific software routines and operating systems that were used, is necessary for proper interpretation, high fidelity replication and re-use. We have drafted a mechanism for describing provenance in a simple and easy to use environment, alleviating the burden of documentation from the user while still providing a rich description of an image's provenance. This combination of ease of use and highly descriptive metadata should greatly facilitate the collection of provenance and subsequent sharing of data."
},
{
"pmid": "24139651",
"title": "Seven challenges for neuroscience.",
"abstract": "Although twenty-first century neuroscience is a major scientific enterprise, advances in basic research have not yet translated into benefits for society. In this paper, I outline seven fundamental challenges that need to be overcome. First, neuroscience has to become \"big science\" - we need big teams with the resources and competences to tackle the big problems. Second, we need to create interlinked sets of data providing a complete picture of single areas of the brain at their different levels of organization with \"rungs\" linking the descriptions for humans and other species. Such \"data ladders\" will help us to meet the third challenge - the development of efficient predictive tools, enabling us to drastically increase the information we can extract from expensive experiments. The fourth challenge goes one step further: we have to develop novel hardware and software sufficiently powerful to simulate the brain. In the future, supercomputer-based brain simulation will enable us to make in silico manipulations and recordings, which are currently completely impossible in the lab. The fifth and sixth challenges are translational. On the one hand we need to develop new ways of classifying and simulating brain disease, leading to better diagnosis and more effective drug discovery. On the other, we have to exploit our knowledge to build new brain-inspired technologies, with potentially huge benefits for industry and for society. This leads to the seventh challenge. Neuroscience can indeed deliver huge benefits but we have to be aware of widespread social concern about our work. We need to recognize the fears that exist, lay them to rest, and actively build public support for neuroscience research. We have to set goals for ourselves that the public can recognize and share. And then we have to deliver on our promises. Only in this way, will we receive the support and funding we need."
},
{
"pmid": "20371412",
"title": "A virtual laboratory for medical image analysis.",
"abstract": "This paper presents the design, implementation, and usage of a virtual laboratory for medical image analysis. It is fully based on the Dutch grid, which is part of the Enabling Grids for E-sciencE (EGEE) production infrastructure and driven by the gLite middleware. The adopted service-oriented architecture enables decoupling the user-friendly clients running on the user's workstation from the complexity of the grid applications and infrastructure. Data are stored on grid resources and can be browsed/viewed interactively by the user with the Virtual Resource Browser (VBrowser). Data analysis pipelines are described as Scufl workflows and enacted on the grid infrastructure transparently using the MOTEUR workflow management system. VBrowser plug-ins allow for easy experiment monitoring and error detection. Because of the strict compliance to the grid authentication model, all operations are performed on behalf of the user, ensuring basic security and facilitating collaboration across organizations. The system has been operational and in daily use for eight months (December 2008), with six users, leading to the submission of 9000 jobs/month in average and the production of several terabytes of data."
},
{
"pmid": "22430496",
"title": "Within-subject template estimation for unbiased longitudinal image analysis.",
"abstract": "Longitudinal image analysis has become increasingly important in clinical studies of normal aging and neurodegenerative disorders. Furthermore, there is a growing appreciation of the potential utility of longitudinally acquired structural images and reliable image processing to evaluate disease modifying therapies. Challenges have been related to the variability that is inherent in the available cross-sectional processing tools, to the introduction of bias in longitudinal processing and to potential over-regularization. In this paper we introduce a novel longitudinal image processing framework, based on unbiased, robust, within-subject template creation, for automatic surface reconstruction and segmentation of brain MRI of arbitrarily many time points. We demonstrate that it is essential to treat all input images exactly the same as removing only interpolation asymmetries is not sufficient to remove processing bias. We successfully reduce variability and avoid over-regularization by initializing the processing in each time point with common information from the subject template. The presented results show a significant increase in precision and discrimination power while preserving the ability to detect large anatomical deviations; as such they hold great potential in clinical applications, e.g. allowing for smaller sample sizes or shorter trials to establish disease specific biomarkers or to quantify drug effects."
},
{
"pmid": "12880830",
"title": "The LONI Pipeline Processing Environment.",
"abstract": "The analysis of raw data in neuroimaging has become a computationally entrenched process with many intricate steps run on increasingly larger datasets. Many software packages exist that provide either complete analyses or specific steps in an analysis. These packages often possess diverse input and output requirements, utilize different file formats, run in particular environments, and have limited abilities with certain types of data. The combination of these packages to achieve more sensitive and accurate results has become a common tactic in brain mapping studies but requires much work to ensure valid interoperation between programs. The handling, organization, and storage of intermediate data can prove difficult as well. The LONI Pipeline Processing Environment is a simple, efficient, and distributed computing solution to these problems enabling software inclusion from different laboratories in different environments. It is used here to derive a T1-weighted MRI atlas of the human brain from 452 normal young adult subjects with fully automated processing. The LONI Pipeline Processing Environment's parallel processing efficiency using an integrated client/server dataflow model was 80.9% when running the atlas generation pipeline from a PC client (Acer TravelMate 340T) on 48 dedicated server processors (Silicon Graphics Inc. Origin 3000). The environment was 97.5% efficient when the same analysis was run on eight dedicated processors."
},
{
"pmid": "24113873",
"title": "Human neuroimaging as a \"Big Data\" science.",
"abstract": "The maturation of in vivo neuroimaging has led to incredible quantities of digital information about the human brain. While much is made of the data deluge in science, neuroimaging represents the leading edge of this onslaught of \"big data\". A range of neuroimaging databasing approaches has streamlined the transmission, storage, and dissemination of data from such brain imaging studies. Yet few, if any, common solutions exist to support the science of neuroimaging. In this article, we discuss how modern neuroimaging research represents a multifactorial and broad ranging data challenge, involving the growing size of the data being acquired; sociological and logistical sharing issues; infrastructural challenges for multi-site, multi-datatype archiving; and the means by which to explore and mine these data. As neuroimaging advances further, e.g. aging, genetics, and age-related disease, new vision is needed to manage and process this information while marshalling of these resources into novel results. Thus, \"big data\" can become \"big\" brain science."
}
] |