text (stringlengths 49–577) | label (stringclasses 7 values) | metadata (sequence) |
---|---|---|
Nevertheless , this approach required [[ user initialization ]] of the << tracking process >> . | USED-FOR | [5, 6, 9, 10] |
This paper solves the << automatic initial-ization problem >> by performing [[ boosted shape detection ]] as a generic measurement process and integrating it in our tracking framework . | USED-FOR | [9, 11, 4, 6] |
This paper solves the automatic initial-ization problem by performing << boosted shape detection >> as a [[ generic measurement process ]] and integrating it in our tracking framework . | USED-FOR | [14, 16, 9, 11] |
This paper solves the automatic initial-ization problem by performing boosted shape detection as a generic measurement process and integrating [[ it ]] in our << tracking framework >> . | PART-OF | [19, 19, 22, 23] |
As a result , we treat all sources of information in a unified way and derive the << posterior shape model >> as the shape with the [[ maximum likelihood ]] . | USED-FOR | [25, 26, 17, 19] |
Our [[ framework ]] is applied for the << automatic tracking of endocardium >> in ultrasound sequences of the human heart . | USED-FOR | [1, 1, 6, 9] |
Our framework is applied for the automatic tracking of [[ endocardium ]] in << ultrasound sequences of the human heart >> . | PART-OF | [9, 9, 11, 16] |
Reliable [[ detection ]] and robust << tracking >> results are achieved when compared to existing approaches and inter-expert variations . | CONJUNCTION | [1, 1, 4, 4] |
Reliable detection and robust tracking results are achieved when compared to existing [[ approaches ]] and << inter-expert variations >> . | CONJUNCTION | [12, 12, 14, 15] |
We present a [[ syntax-based constraint ]] for << word alignment >> , known as the cohesion constraint . | USED-FOR | [3, 4, 6, 7] |
We present a << syntax-based constraint >> for word alignment , known as the [[ cohesion constraint ]] . | HYPONYM-OF | [12, 13, 3, 4] |
<< It >> requires disjoint [[ English phrases ]] to be mapped to non-overlapping intervals in the French sentence . | USED-FOR | [3, 4, 0, 0] |
We evaluate the utility of this << constraint >> in two different [[ algorithms ]] . | EVALUATE-FOR | [10, 10, 6, 6] |
The results show that << it >> can provide a significant improvement in [[ alignment quality ]] . | EVALUATE-FOR | [11, 12, 4, 4] |
We present a novel << entity-based representation of discourse >> which is inspired by [[ Centering Theory ]] and can be computed automatically from raw text . | USED-FOR | [12, 13, 4, 7] |
We present a novel << entity-based representation of discourse >> which is inspired by Centering Theory and can be computed automatically from [[ raw text ]] . | USED-FOR | [20, 21, 4, 7] |
We view << coherence assessment >> as a [[ ranking learning problem ]] and show that the proposed discourse representation supports the effective learning of a ranking function . | USED-FOR | [6, 8, 2, 3] |
We view coherence assessment as a ranking learning problem and show that the proposed [[ discourse representation ]] supports the effective learning of a << ranking function >> . | USED-FOR | [14, 15, 22, 23] |
Our experiments demonstrate that the [[ induced model ]] achieves significantly higher accuracy than a state-of-the-art << coherence model >> . | COMPARE | [5, 6, 14, 15] |
Our experiments demonstrate that the << induced model >> achieves significantly higher [[ accuracy ]] than a state-of-the-art coherence model . | EVALUATE-FOR | [10, 10, 5, 6] |
Our experiments demonstrate that the induced model achieves significantly higher [[ accuracy ]] than a state-of-the-art << coherence model >> . | EVALUATE-FOR | [10, 10, 14, 15] |
This paper introduces a [[ robust interactive method ]] for << speech understanding >> . | USED-FOR | [4, 6, 8, 9] |
The << generalized LR parsing >> is enhanced in this [[ approach ]] . | USED-FOR | [8, 8, 1, 3] |
When a very noisy portion is detected , the << parser >> skips that portion using a fake [[ non-terminal symbol ]] . | USED-FOR | [16, 17, 9, 9] |
This [[ method ]] is also capable of handling << unknown words >> , which is important in practical systems . | USED-FOR | [1, 1, 7, 8] |
This paper shows that it is very often possible to identify the source language of [[ medium-length speeches ]] in the << EUROPARL corpus >> on the basis of frequency counts of word n-grams -LRB- 87.2 % -96.7 % accuracy depending on classification method -RRB- . | PART-OF | [15, 16, 19, 20] |
This paper shows that it is very often possible to identify the source language of medium-length speeches in the EUROPARL corpus on the basis of frequency counts of word n-grams -LRB- 87.2 % -96.7 % [[ accuracy ]] depending on << classification method >> -RRB- . | EVALUATE-FOR | [35, 35, 38, 39] |
We investigated whether [[ automatic phonetic transcriptions -LRB- APTs -RRB- ]] can replace << manually verified phonetic transcriptions >> -LRB- MPTs -RRB- in a large corpus-based study on pronunciation variation . | COMPARE | [3, 8, 11, 14] |
We investigated whether [[ automatic phonetic transcriptions -LRB- APTs -RRB- ]] can replace manually verified phonetic transcriptions -LRB- MPTs -RRB- in a large corpus-based study on << pronunciation variation >> . | USED-FOR | [3, 8, 24, 25] |
We investigated whether automatic phonetic transcriptions -LRB- APTs -RRB- can replace [[ manually verified phonetic transcriptions ]] -LRB- MPTs -RRB- in a large corpus-based study on << pronunciation variation >> . | USED-FOR | [11, 14, 24, 25] |
We trained << classifiers >> on the [[ speech processes ]] extracted from the alignments of an APT and an MPT with a canonical transcription . | USED-FOR | [5, 6, 2, 2] |
We trained classifiers on the << speech processes >> extracted from the [[ alignments ]] of an APT and an MPT with a canonical transcription . | USED-FOR | [10, 10, 5, 6] |
We trained classifiers on the speech processes extracted from the [[ alignments ]] of an << APT >> and an MPT with a canonical transcription . | USED-FOR | [10, 10, 13, 13] |
We trained classifiers on the speech processes extracted from the [[ alignments ]] of an APT and an << MPT >> with a canonical transcription . | USED-FOR | [10, 10, 16, 16] |
We trained classifiers on the speech processes extracted from the alignments of an [[ APT ]] and an << MPT >> with a canonical transcription . | CONJUNCTION | [13, 13, 16, 16] |
We trained classifiers on the speech processes extracted from the << alignments >> of an APT and an MPT with a [[ canonical transcription ]] . | USED-FOR | [19, 20, 10, 10] |
We tested whether the [[ classifiers ]] were equally good at verifying whether << unknown transcriptions >> represent read speech or telephone dialogues , and whether the same speech processes were identified to distinguish between transcriptions of the two situational settings . | USED-FOR | [4, 4, 11, 12] |
We tested whether the classifiers were equally good at verifying whether [[ unknown transcriptions ]] represent << read speech >> or telephone dialogues , and whether the same speech processes were identified to distinguish between transcriptions of the two situational settings . | USED-FOR | [11, 12, 14, 15] |
We tested whether the classifiers were equally good at verifying whether [[ unknown transcriptions ]] represent read speech or << telephone dialogues >> , and whether the same speech processes were identified to distinguish between transcriptions of the two situational settings . | USED-FOR | [11, 12, 17, 18] |
We tested whether the classifiers were equally good at verifying whether unknown transcriptions represent [[ read speech ]] or << telephone dialogues >> , and whether the same speech processes were identified to distinguish between transcriptions of the two situational settings . | CONJUNCTION | [14, 15, 17, 18] |
Our results not only show that similar distinguishing speech processes were identified ; our [[ APT-based classifier ]] yielded better classification accuracy than the << MPT-based classifier >> whilst using fewer classification features . | COMPARE | [14, 15, 22, 23] |
Our results not only show that similar distinguishing speech processes were identified ; our << APT-based classifier >> yielded better [[ classification accuracy ]] than the MPT-based classifier whilst using fewer classification features . | EVALUATE-FOR | [18, 19, 14, 15] |
Our results not only show that similar distinguishing speech processes were identified ; our APT-based classifier yielded better [[ classification accuracy ]] than the << MPT-based classifier >> whilst using fewer classification features . | EVALUATE-FOR | [18, 19, 22, 23] |
Our results not only show that similar distinguishing speech processes were identified ; our << APT-based classifier >> yielded better classification accuracy than the MPT-based classifier whilst using fewer [[ classification features ]] . | USED-FOR | [27, 28, 14, 15] |
Our results not only show that similar distinguishing speech processes were identified ; our APT-based classifier yielded better classification accuracy than the << MPT-based classifier >> whilst using fewer [[ classification features ]] . | USED-FOR | [27, 28, 22, 23] |
Machine reading is a relatively new field that features [[ computer programs ]] designed to read << flowing text >> and extract fact assertions expressed by the narrative content . | USED-FOR | [9, 10, 14, 15] |
Machine reading is a relatively new field that features [[ computer programs ]] designed to read flowing text and extract << fact assertions >> expressed by the narrative content . | USED-FOR | [9, 10, 18, 19] |
Machine reading is a relatively new field that features computer programs designed to read flowing text and extract [[ fact assertions ]] expressed by the << narrative content >> . | FEATURE-OF | [18, 19, 23, 24] |
This << task >> involves two core technologies : [[ natural language processing -LRB- NLP -RRB- ]] and information extraction -LRB- IE -RRB- . | PART-OF | [7, 12, 1, 1] |
This << task >> involves two core technologies : natural language processing -LRB- NLP -RRB- and [[ information extraction -LRB- IE -RRB- ]] . | PART-OF | [14, 18, 1, 1] |
In this paper we describe a << machine reading system >> that we have developed within a [[ cognitive architecture ]] . | FEATURE-OF | [15, 16, 6, 8] |
We show how we have integrated into the framework several levels of knowledge for a particular domain , ideas from [[ cognitive semantics ]] and << construction grammar >> , plus tools from prior NLP and IE research . | CONJUNCTION | [20, 21, 23, 24] |
We show how we have integrated into the framework several levels of knowledge for a particular domain , ideas from cognitive semantics and construction grammar , plus tools from [[ prior NLP ]] and << IE research >> . | CONJUNCTION | [29, 30, 32, 33] |
The result is a [[ system ]] that is capable of reading and interpreting complex and fairly << idiosyncratic texts >> in the family history domain . | USED-FOR | [4, 4, 15, 16] |
The result is a system that is capable of reading and interpreting complex and fairly << idiosyncratic texts >> in the [[ family history domain ]] . | FEATURE-OF | [19, 21, 15, 16] |
We present two [[ methods ]] for capturing << nonstationary chaos >> , then present a few examples including biological signals , ocean waves and traffic flow . | USED-FOR | [3, 3, 6, 7] |
We present two methods for capturing nonstationary chaos , then present a few << examples >> including [[ biological signals ]] , ocean waves and traffic flow . | HYPONYM-OF | [15, 16, 13, 13] |
We present two methods for capturing nonstationary chaos , then present a few examples including [[ biological signals ]] , << ocean waves >> and traffic flow . | CONJUNCTION | [15, 16, 18, 19] |
We present two methods for capturing nonstationary chaos , then present a few << examples >> including biological signals , [[ ocean waves ]] and traffic flow . | HYPONYM-OF | [18, 19, 13, 13] |
We present two methods for capturing nonstationary chaos , then present a few examples including biological signals , [[ ocean waves ]] and << traffic flow >> . | CONJUNCTION | [18, 19, 21, 22] |
We present two methods for capturing nonstationary chaos , then present a few << examples >> including biological signals , ocean waves and [[ traffic flow ]] . | HYPONYM-OF | [21, 22, 13, 13] |
This paper presents a [[ formal analysis ]] for a large class of words called << alternative markers >> , which includes other -LRB- than -RRB- , such -LRB- as -RRB- , and besides . | USED-FOR | [4, 5, 13, 14] |
These [[ words ]] appear frequently enough in << dialog >> to warrant serious attention , yet present natural language search engines perform poorly on queries containing them . | PART-OF | [1, 1, 6, 6] |
I show that the performance of a << search engine >> can be improved dramatically by incorporating an [[ approximation of the formal analysis ]] that is compatible with the search engine 's operational semantics . | PART-OF | [16, 20, 7, 8] |
I show that the performance of a search engine can be improved dramatically by incorporating an approximation of the formal analysis that is compatible with the << search engine >> 's [[ operational semantics ]] . | PART-OF | [29, 30, 26, 27] |
The value of this approach is that as the [[ operational semantics ]] of << natural language applications >> improve , even larger improvements are possible . | PART-OF | [9, 10, 12, 14] |
We find that simple << interpolation methods >> , like [[ log-linear and linear interpolation ]] , improve the performance but fall short of the performance of an oracle . | HYPONYM-OF | [8, 11, 4, 5] |
Actually , the oracle acts like a << dynamic combiner >> with [[ hard decisions ]] using the reference . | FEATURE-OF | [10, 11, 7, 8] |
We suggest a << method >> that mimics the behavior of the oracle using a [[ neural network ]] or a decision tree . | USED-FOR | [13, 14, 3, 3] |
We suggest a << method >> that mimics the behavior of the oracle using a neural network or a [[ decision tree ]] . | USED-FOR | [17, 18, 3, 3] |
We suggest a method that mimics the behavior of the oracle using a << neural network >> or a [[ decision tree ]] . | CONJUNCTION | [17, 18, 13, 14] |
The [[ method ]] amounts to tagging << LMs >> with confidence measures and picking the best hypothesis corresponding to the LM with the best confidence . | USED-FOR | [1, 1, 5, 5] |
The << method >> amounts to tagging LMs with [[ confidence measures ]] and picking the best hypothesis corresponding to the LM with the best confidence . | USED-FOR | [7, 8, 1, 1] |
We describe a new [[ method ]] for the representation of << NLP structures >> within reranking approaches . | USED-FOR | [4, 4, 9, 10] |
We describe a new method for the representation of << NLP structures >> within [[ reranking approaches ]] . | FEATURE-OF | [12, 13, 9, 10] |
We make use of a << conditional log-linear model >> , with [[ hidden variables ]] representing the assignment of lexical items to word clusters or word senses . | USED-FOR | [10, 11, 5, 7] |
We make use of a conditional log-linear model , with hidden variables representing the assignment of lexical items to [[ word clusters ]] or << word senses >> . | CONJUNCTION | [19, 20, 22, 23] |
The << model >> learns to automatically make these assignments based on a [[ discriminative training criterion ]] . | USED-FOR | [11, 13, 1, 1] |
Training and decoding with the model requires summing over an exponential number of hidden-variable assignments : the required << summations >> can be computed efficiently and exactly using [[ dynamic programming ]] . | USED-FOR | [26, 27, 18, 18] |
As a case study , we apply the [[ model ]] to << parse reranking >> . | USED-FOR | [8, 8, 10, 11] |
The [[ model ]] gives an F-measure improvement of ~ 1.25 % beyond the << base parser >> , and an ~ 0.25 % improvement beyond Collins -LRB- 2000 -RRB- reranker . | COMPARE | [1, 1, 12, 13] |
The << model >> gives an [[ F-measure ]] improvement of ~ 1.25 % beyond the base parser , and an ~ 0.25 % improvement beyond Collins -LRB- 2000 -RRB- reranker . | EVALUATE-FOR | [4, 4, 1, 1] |
The model gives an F-measure improvement of ~ 1.25 % beyond the [[ base parser ]] , and an ~ 0.25 % improvement beyond << Collins -LRB- 2000 -RRB- reranker >> . | COMPARE | [12, 13, 22, 26] |
Although our experiments are focused on << parsing >> , the [[ techniques ]] described generalize naturally to NLP structures other than parse trees . | USED-FOR | [9, 9, 6, 6] |
Although our experiments are focused on parsing , the [[ techniques ]] described generalize naturally to << NLP structures >> other than parse trees . | USED-FOR | [9, 9, 14, 15] |
Although our experiments are focused on parsing , the [[ techniques ]] described generalize naturally to NLP structures other than << parse trees >> . | USED-FOR | [9, 9, 18, 19] |
Although our experiments are focused on parsing , the techniques described generalize naturally to << NLP structures >> other than [[ parse trees ]] . | CONJUNCTION | [18, 19, 14, 15] |
This paper presents an [[ algorithm ]] for << learning the time-varying shape of a non-rigid 3D object >> from uncalibrated 2D tracking data . | USED-FOR | [4, 4, 6, 14] |
We constrain the problem by assuming that the << object shape >> at each time instant is drawn from a [[ Gaussian distribution ]] . | USED-FOR | [18, 19, 8, 9] |
Based on this assumption , the [[ algorithm ]] simultaneously estimates << 3D shape and motion >> for each time frame , learns the parameters of the Gaussian , and robustly fills-in missing data points . | USED-FOR | [6, 6, 9, 12] |
We then extend the [[ algorithm ]] to model << temporal smoothness in object shape >> , thus allowing it to handle severe cases of missing data . | USED-FOR | [4, 4, 7, 11] |
We then extend the algorithm to model temporal smoothness in object shape , thus allowing [[ it ]] to handle severe cases of << missing data >> . | USED-FOR | [15, 15, 21, 22] |
[[ Automatic summarization ]] and << information extraction >> are two important Internet services . | CONJUNCTION | [0, 1, 3, 4] |
[[ MUC ]] and << SUMMAC >> play their appropriate roles in the next generation Internet . | CONJUNCTION | [0, 0, 2, 2] |
This paper focuses on the automatic summarization and proposes two different [[ models ]] to extract sentences for << summary generation >> under two tasks initiated by SUMMAC-1 . | USED-FOR | [11, 11, 16, 17] |
This paper focuses on the automatic summarization and proposes two different [[ models ]] to extract sentences for summary generation under two << tasks >> initiated by SUMMAC-1 . | USED-FOR | [11, 11, 20, 20] |
This paper focuses on the automatic summarization and proposes two different models to extract sentences for summary generation under two [[ tasks ]] initiated by << SUMMAC-1 >> . | PART-OF | [20, 20, 23, 23] |
For << categorization task >> , [[ positive feature vectors ]] and negative feature vectors are used cooperatively to construct generic , indicative summaries . | USED-FOR | [4, 6, 1, 2] |
For categorization task , [[ positive feature vectors ]] and << negative feature vectors >> are used cooperatively to construct generic , indicative summaries . | CONJUNCTION | [4, 6, 8, 10] |
For categorization task , [[ positive feature vectors ]] and negative feature vectors are used cooperatively to construct << generic , indicative summaries >> . | USED-FOR | [4, 6, 16, 19] |
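
Read together, the columns describe a sentence-level relation-extraction sample: `text` marks the two argument spans inline with `[[ ... ]]` and `<< ... >>`, `label` holds one of the seven relation types that appear above (USED-FOR, PART-OF, FEATURE-OF, HYPONYM-OF, EVALUATE-FOR, COMPARE, CONJUNCTION), and `metadata` appears to give four token offsets: start and end of the `[[ ... ]]` span followed by start and end of the `<< ... >>` span, zero-based and inclusive over the whitespace-split sentence with the markers removed. The sketch below shows a minimal reading under that assumption; the `parse_row` helper is illustrative and not part of the dataset.

```python
def parse_row(text: str, metadata: list[int]) -> tuple[str, str]:
    """Recover the two argument spans of one row.

    Assumes metadata = [arg1_start, arg1_end, arg2_start, arg2_end]:
    zero-based, inclusive token offsets of the [[ ... ]] span and the
    << ... >> span over the whitespace-tokenized sentence with the
    markers stripped out (inferred from the rows above).
    """
    # The sentences are already space-tokenized, so dropping the marker
    # tokens is enough to recover the offset space.
    tokens = [t for t in text.split() if t not in ("[[", "]]", "<<", ">>")]
    s1, e1, s2, e2 = metadata
    arg1 = " ".join(tokens[s1 : e1 + 1])
    arg2 = " ".join(tokens[s2 : e2 + 1])
    return arg1, arg2


# First row of the table as a worked example:
text = ("Nevertheless , this approach required [[ user initialization ]] "
        "of the << tracking process >> .")
print(parse_row(text, [5, 6, 9, 10]))
# ('user initialization', 'tracking process')
```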