{"text": "This paper presents an [[ algorithm ]] for << computing optical flow , shape , motion , lighting , and albedo >> from an image sequence of a rigidly-moving Lambertian object under distant illumination .", "label": "USED-FOR", "metadata": [4, 4, 6, 17]} | |
{"text": "This paper presents an << algorithm >> for computing optical flow , shape , motion , lighting , and albedo from an [[ image sequence ]] of a rigidly-moving Lambertian object under distant illumination .", "label": "USED-FOR", "metadata": [20, 21, 4, 4]} | |
{"text": "This paper presents an algorithm for computing optical flow , shape , motion , lighting , and albedo from an << image sequence >> of a [[ rigidly-moving Lambertian object ]] under distant illumination .", "label": "FEATURE-OF", "metadata": [24, 26, 20, 21]} | |
{"text": "This paper presents an algorithm for computing optical flow , shape , motion , lighting , and albedo from an image sequence of a << rigidly-moving Lambertian object >> under [[ distant illumination ]] .", "label": "FEATURE-OF", "metadata": [28, 29, 24, 26]} | |
{"text": "The problem is formulated in a manner that subsumes structure from [[ motion ]] , << multi-view stereo >> , and photo-metric stereo as special cases .", "label": "CONJUNCTION", "metadata": [11, 11, 13, 14]} | |
{"text": "The problem is formulated in a manner that subsumes structure from motion , [[ multi-view stereo ]] , and << photo-metric stereo >> as special cases .", "label": "CONJUNCTION", "metadata": [13, 14, 17, 18]} | |
{"text": "The << algorithm >> utilizes both [[ spatial and temporal intensity variation ]] as cues : the former constrains flow and the latter constrains surface orientation ; combining both cues enables dense reconstruction of both textured and texture-less surfaces .", "label": "USED-FOR", "metadata": [4, 8, 1, 1]} | |
{"text": "The algorithm utilizes both spatial and temporal intensity variation as << cues >> : the [[ former ]] constrains flow and the latter constrains surface orientation ; combining both cues enables dense reconstruction of both textured and texture-less surfaces .", "label": "HYPONYM-OF", "metadata": [13, 13, 10, 10]} | |
{"text": "The algorithm utilizes both spatial and temporal intensity variation as cues : the [[ former ]] constrains << flow >> and the latter constrains surface orientation ; combining both cues enables dense reconstruction of both textured and texture-less surfaces .", "label": "USED-FOR", "metadata": [13, 13, 15, 15]} | |
{"text": "The algorithm utilizes both spatial and temporal intensity variation as cues : the [[ former ]] constrains flow and the << latter >> constrains surface orientation ; combining both cues enables dense reconstruction of both textured and texture-less surfaces .", "label": "CONJUNCTION", "metadata": [13, 13, 18, 18]} | |
{"text": "The algorithm utilizes both spatial and temporal intensity variation as << cues >> : the former constrains flow and the [[ latter ]] constrains surface orientation ; combining both cues enables dense reconstruction of both textured and texture-less surfaces .", "label": "HYPONYM-OF", "metadata": [18, 18, 10, 10]} | |
{"text": "The algorithm utilizes both spatial and temporal intensity variation as cues : the former constrains flow and the [[ latter ]] constrains << surface orientation >> ; combining both cues enables dense reconstruction of both textured and texture-less surfaces .", "label": "USED-FOR", "metadata": [18, 18, 20, 21]} | |
{"text": "The algorithm utilizes both spatial and temporal intensity variation as cues : the former constrains flow and the latter constrains surface orientation ; combining both [[ cues ]] enables << dense reconstruction of both textured and texture-less surfaces >> .", "label": "USED-FOR", "metadata": [25, 25, 27, 34]} | |
{"text": "The << algorithm >> works by iteratively [[ estimating affine camera parameters , illumination , shape , and albedo ]] in an alternating fashion .", "label": "USED-FOR", "metadata": [5, 15, 1, 1]} | |
{"text": "An [[ entity-oriented approach ]] to << restricted-domain parsing >> is proposed .", "label": "USED-FOR", "metadata": [1, 2, 4, 5]} | |
{"text": "Like semantic grammar , [[ this ]] allows easy exploitation of << limited domain semantics >> .", "label": "USED-FOR", "metadata": [4, 4, 9, 11]} | |
{"text": "In addition , [[ it ]] facilitates << fragmentary recognition >> and the use of multiple parsing strategies , and so is particularly useful for robust recognition of extra-grammatical input .", "label": "USED-FOR", "metadata": [3, 3, 5, 6]} | |
{"text": "In addition , [[ it ]] facilitates fragmentary recognition and the use of << multiple parsing strategies >> , and so is particularly useful for robust recognition of extra-grammatical input .", "label": "USED-FOR", "metadata": [3, 3, 11, 13]} | |
{"text": "In addition , it facilitates fragmentary recognition and the use of [[ multiple parsing strategies ]] , and so is particularly useful for robust << recognition of extra-grammatical input >> .", "label": "USED-FOR", "metadata": [11, 13, 22, 25]} | |
{"text": "Representative samples from an entity-oriented language definition are presented , along with a [[ control structure ]] for an << entity-oriented parser >> , some parsing strategies that use the control structure , and worked examples of parses .", "label": "USED-FOR", "metadata": [13, 14, 17, 18]} | |
{"text": "Representative samples from an entity-oriented language definition are presented , along with a control structure for an entity-oriented parser , some << parsing strategies >> that use the [[ control structure ]] , and worked examples of parses .", "label": "USED-FOR", "metadata": [26, 27, 21, 22]} | |
{"text": "A << parser >> incorporating the [[ control structure ]] and the parsing strategies is currently under implementation .", "label": "PART-OF", "metadata": [4, 5, 1, 1]} | |
{"text": "This paper summarizes the formalism of Category Cooccurrence Restrictions -LRB- CCRs -RRB- and describes two [[ parsing algorithms ]] that interpret << it >> .", "label": "USED-FOR", "metadata": [15, 16, 19, 19]} | |
{"text": "The use of CCRs leads to << syntactic descriptions >> formulated entirely with [[ restrictive statements ]] .", "label": "FEATURE-OF", "metadata": [11, 12, 6, 7]} | |
{"text": "The paper shows how conventional [[ algorithms ]] for the analysis of context free languages can be adapted to the << CCR formalism >> .", "label": "USED-FOR", "metadata": [5, 5, 18, 19]} | |
{"text": "The paper shows how conventional << algorithms >> for the analysis of [[ context free languages ]] can be adapted to the CCR formalism .", "label": "USED-FOR", "metadata": [10, 12, 5, 5]} | |
{"text": "Special attention is given to the part of the parser that checks the fulfillment of [[ logical well-formedness conditions ]] on << trees >> .", "label": "FEATURE-OF", "metadata": [15, 17, 19, 19]} | |
{"text": "We present a [[ text mining method ]] for finding << synonymous expressions >> based on the distributional hypothesis in a set of coherent corpora .", "label": "USED-FOR", "metadata": [3, 5, 8, 9]} | |
{"text": "We present a << text mining method >> for finding synonymous expressions based on the [[ distributional hypothesis ]] in a set of coherent corpora .", "label": "USED-FOR", "metadata": [13, 14, 3, 5]} | |
{"text": "This paper proposes a new methodology to improve the [[ accuracy ]] of a << term aggregation system >> using each author 's text as a coherent corpus .", "label": "EVALUATE-FOR", "metadata": [9, 9, 12, 14]} | |
{"text": "This paper proposes a new << methodology >> to improve the accuracy of a [[ term aggregation system ]] using each author 's text as a coherent corpus .", "label": "EVALUATE-FOR", "metadata": [12, 14, 5, 5]} | |
{"text": "Our proposed method improves the [[ accuracy ]] of our << term aggregation system >> , showing that our approach is successful .", "label": "EVALUATE-FOR", "metadata": [5, 5, 8, 10]} | |
{"text": "Our proposed << method >> improves the accuracy of our [[ term aggregation system ]] , showing that our approach is successful .", "label": "EVALUATE-FOR", "metadata": [8, 10, 2, 2]} | |
{"text": "In this work , we present a [[ technique ]] for << robust estimation >> , which by explicitly incorporating the inherent uncertainty of the estimation procedure , results in a more efficient robust estimation algorithm .", "label": "USED-FOR", "metadata": [7, 7, 9, 10]} | |
{"text": "In this work , we present a [[ technique ]] for robust estimation , which by explicitly incorporating the inherent uncertainty of the estimation procedure , results in a more << efficient robust estimation algorithm >> .", "label": "USED-FOR", "metadata": [7, 7, 28, 31]} | |
{"text": "In this work , we present a << technique >> for robust estimation , which by explicitly incorporating the [[ inherent uncertainty of the estimation procedure ]] , results in a more efficient robust estimation algorithm .", "label": "USED-FOR", "metadata": [17, 22, 7, 7]} | |
{"text": "The combination of these two [[ strategies ]] results in a << robust estimation procedure >> that provides a significant speed-up over existing RANSAC techniques , while requiring no prior information to guide the sampling process .", "label": "USED-FOR", "metadata": [5, 5, 9, 11]} | |
{"text": "The combination of these two strategies results in a << robust estimation procedure >> that provides a significant speed-up over existing [[ RANSAC techniques ]] , while requiring no prior information to guide the sampling process .", "label": "COMPARE", "metadata": [19, 20, 9, 11]} | |
{"text": "In particular , our [[ algorithm ]] requires , on average , 3-10 times fewer samples than standard << RANSAC >> , which is in close agreement with theoretical predictions .", "label": "COMPARE", "metadata": [4, 4, 16, 16]} | |
{"text": "The efficiency of the << algorithm >> is demonstrated on a selection of [[ geometric estimation problems ]] .", "label": "EVALUATE-FOR", "metadata": [11, 13, 4, 4]} | |
{"text": "An attempt has been made to use an [[ Augmented Transition Network ]] as a procedural << dialog model >> .", "label": "HYPONYM-OF", "metadata": [8, 10, 14, 15]} | |
{"text": "The development of such a model appears to be important in several respects : as a << device >> to represent and to use different [[ dialog schemata ]] proposed in empirical conversation analysis ; as a device to represent and to use models of verbal interaction ; as a device combining knowledge about dialog schemata and about verbal interaction with knowledge about task-oriented and goal-directed dialogs .", "label": "USED-FOR", "metadata": [23, 24, 16, 16]} | |
{"text": "The development of such a model appears to be important in several respects : as a device to represent and to use different [[ dialog schemata ]] proposed in empirical << conversation analysis >> ; as a device to represent and to use models of verbal interaction ; as a device combining knowledge about dialog schemata and about verbal interaction with knowledge about task-oriented and goal-directed dialogs .", "label": "USED-FOR", "metadata": [23, 24, 28, 29]} | |
{"text": "The development of such a model appears to be important in several respects : as a device to represent and to use different dialog schemata proposed in empirical conversation analysis ; as a << device >> to represent and to use [[ models ]] of verbal interaction ; as a device combining knowledge about dialog schemata and about verbal interaction with knowledge about task-oriented and goal-directed dialogs .", "label": "USED-FOR", "metadata": [39, 39, 33, 33]} | |
{"text": "The development of such a model appears to be important in several respects : as a device to represent and to use different dialog schemata proposed in empirical conversation analysis ; as a device to represent and to use [[ models ]] of << verbal interaction >> ; as a device combining knowledge about dialog schemata and about verbal interaction with knowledge about task-oriented and goal-directed dialogs .", "label": "USED-FOR", "metadata": [39, 39, 41, 42]} | |
{"text": "The development of such a model appears to be important in several respects : as a device to represent and to use different dialog schemata proposed in empirical conversation analysis ; as a device to represent and to use models of verbal interaction ; as a device combining knowledge about [[ dialog schemata ]] and about << verbal interaction >> with knowledge about task-oriented and goal-directed dialogs .", "label": "CONJUNCTION", "metadata": [50, 51, 54, 55]} | |
{"text": "A standard [[ ATN ]] should be further developed in order to account for the << verbal interactions >> of task-oriented dialogs .", "label": "USED-FOR", "metadata": [2, 2, 13, 14]} | |
{"text": "A standard ATN should be further developed in order to account for the [[ verbal interactions ]] of << task-oriented dialogs >> .", "label": "FEATURE-OF", "metadata": [13, 14, 16, 17]} | |
{"text": "We present a practically [[ unsupervised learning method ]] to produce << single-snippet answers >> to definition questions in question answering systems that supplement Web search engines .", "label": "USED-FOR", "metadata": [4, 6, 9, 10]} | |
{"text": "We present a practically unsupervised learning method to produce single-snippet answers to definition questions in [[ question answering systems ]] that supplement << Web search engines >> .", "label": "USED-FOR", "metadata": [15, 17, 20, 22]} | |
{"text": "The [[ method ]] exploits << on-line encyclopedias and dictionaries >> to generate automatically an arbitrarily large number of positive and negative definition examples , which are then used to train an svm to separate the two classes .", "label": "USED-FOR", "metadata": [1, 1, 3, 6]} | |
{"text": "The method exploits [[ on-line encyclopedias and dictionaries ]] to generate automatically an arbitrarily large number of << positive and negative definition examples >> , which are then used to train an svm to separate the two classes .", "label": "USED-FOR", "metadata": [3, 6, 15, 19]} | |
{"text": "The method exploits on-line encyclopedias and dictionaries to generate automatically an arbitrarily large number of [[ positive and negative definition examples ]] , which are then used to train an << svm >> to separate the two classes .", "label": "USED-FOR", "metadata": [15, 19, 28, 28]} | |
{"text": "We show experimentally that the proposed method is viable , that [[ it ]] outperforms the << alternative >> of training the system on questions and news articles from trec , and that it helps the search engine handle definition questions significantly better .", "label": "COMPARE", "metadata": [11, 11, 14, 14]} | |
{"text": "We show experimentally that the proposed method is viable , that it outperforms the alternative of training the << system >> on questions and [[ news articles ]] from trec , and that it helps the search engine handle definition questions significantly better .", "label": "USED-FOR", "metadata": [22, 23, 18, 18]} | |
{"text": "We show experimentally that the proposed method is viable , that it outperforms the alternative of training the system on questions and [[ news articles ]] from << trec >> , and that it helps the search engine handle definition questions significantly better .", "label": "PART-OF", "metadata": [22, 23, 25, 25]} | |
{"text": "We show experimentally that the proposed method is viable , that it outperforms the alternative of training the system on questions and news articles from trec , and that [[ it ]] helps the << search engine >> handle definition questions significantly better .", "label": "USED-FOR", "metadata": [29, 29, 32, 33]} | |
{"text": "We revisit the << classical decision-theoretic problem of weighted expert voting >> from a [[ statistical learning perspective ]] .", "label": "USED-FOR", "metadata": [12, 14, 3, 9]} | |
{"text": "In the case of known expert competence levels , we give [[ sharp error estimates ]] for the << optimal rule >> .", "label": "USED-FOR", "metadata": [11, 13, 16, 17]} | |
{"text": "We analyze a [[ reweighted version of the Kikuchi approximation ]] for estimating the << log partition function of a product distribution >> defined over a region graph .", "label": "USED-FOR", "metadata": [3, 8, 12, 18]} | |
{"text": "We analyze a reweighted version of the Kikuchi approximation for estimating the [[ log partition function of a product distribution ]] defined over a << region graph >> .", "label": "FEATURE-OF", "metadata": [12, 18, 22, 23]} | |
{"text": "We establish sufficient conditions for the [[ concavity ]] of our << reweighted objective function >> in terms of weight assignments in the Kikuchi expansion , and show that a reweighted version of the sum product algorithm applied to the Kikuchi region graph will produce global optima of the Kikuchi approximation whenever the algorithm converges .", "label": "FEATURE-OF", "metadata": [6, 6, 9, 11]} | |
{"text": "We establish sufficient conditions for the concavity of our reweighted objective function in terms of weight assignments in the Kikuchi expansion , and show that a [[ reweighted version of the sum product algorithm ]] applied to the << Kikuchi region graph >> will produce global optima of the Kikuchi approximation whenever the algorithm converges .", "label": "USED-FOR", "metadata": [26, 32, 36, 38]} | |
{"text": "We establish sufficient conditions for the concavity of our reweighted objective function in terms of weight assignments in the Kikuchi expansion , and show that a reweighted version of the sum product algorithm applied to the Kikuchi region graph will produce [[ global optima ]] of the << Kikuchi approximation >> whenever the algorithm converges .", "label": "FEATURE-OF", "metadata": [41, 42, 45, 46]} | |
{"text": "Finally , we provide an explicit characterization of the polytope of concavity in terms of the [[ cycle structure ]] of the << region graph >> .", "label": "FEATURE-OF", "metadata": [16, 17, 20, 21]} | |
{"text": "We apply a [[ decision tree based approach ]] to << pronoun resolution >> in spoken dialogue .", "label": "USED-FOR", "metadata": [3, 6, 8, 9]} | |
{"text": "We apply a decision tree based approach to [[ pronoun resolution ]] in << spoken dialogue >> .", "label": "USED-FOR", "metadata": [8, 9, 11, 12]} | |
{"text": "Our [[ system ]] deals with << pronouns >> with NP - and non-NP-antecedents .", "label": "USED-FOR", "metadata": [1, 1, 4, 4]} | |
{"text": "Our system deals with << pronouns >> with [[ NP - and non-NP-antecedents ]] .", "label": "USED-FOR", "metadata": [6, 9, 4, 4]} | |
{"text": "We present a set of [[ features ]] designed for << pronoun resolution >> in spoken dialogue and determine the most promising features .", "label": "USED-FOR", "metadata": [5, 5, 8, 9]} | |
{"text": "We present a set of features designed for [[ pronoun resolution ]] in << spoken dialogue >> and determine the most promising features .", "label": "USED-FOR", "metadata": [8, 9, 11, 12]} | |
{"text": "We evaluate the << system >> on twenty [[ Switchboard dialogues ]] and show that it compares well to Byron 's -LRB- 2002 -RRB- manually tuned system .", "label": "EVALUATE-FOR", "metadata": [6, 7, 3, 3]} | |
{"text": "We evaluate the system on twenty Switchboard dialogues and show that [[ it ]] compares well to << Byron 's -LRB- 2002 -RRB- manually tuned system >> .", "label": "COMPARE", "metadata": [11, 11, 15, 22]} | |
{"text": "We present a new [[ approach ]] for building an efficient and robust << classifier >> for the two class problem , that localizes objects that may appear in the image under different orien-tations .", "label": "USED-FOR", "metadata": [4, 4, 11, 11]} | |
{"text": "We present a new approach for building an efficient and robust [[ classifier ]] for the two << class problem >> , that localizes objects that may appear in the image under different orien-tations .", "label": "USED-FOR", "metadata": [11, 11, 15, 16]} | |
{"text": "In contrast to other works that address this problem using multiple classifiers , each one specialized for a specific orientation , we propose a simple two-step << approach >> with an [[ estimation stage ]] and a classification stage .", "label": "PART-OF", "metadata": [29, 30, 26, 26]} | |
{"text": "In contrast to other works that address this problem using multiple classifiers , each one specialized for a specific orientation , we propose a simple two-step approach with an [[ estimation stage ]] and a << classification stage >> .", "label": "CONJUNCTION", "metadata": [29, 30, 33, 34]} | |
{"text": "In contrast to other works that address this problem using multiple classifiers , each one specialized for a specific orientation , we propose a simple two-step << approach >> with an estimation stage and a [[ classification stage ]] .", "label": "PART-OF", "metadata": [33, 34, 26, 26]} | |
{"text": "The estimator yields an initial set of potential << object poses >> that are then validated by the [[ classifier ]] .", "label": "USED-FOR", "metadata": [16, 16, 8, 9]} | |
{"text": "This methodology allows reducing the [[ time complexity ]] of the << algorithm >> while classification results remain high .", "label": "EVALUATE-FOR", "metadata": [5, 6, 9, 9]} | |
{"text": "The << classifier >> we use in both stages is based on a [[ boosted combination of Random Ferns ]] over local histograms of oriented gradients -LRB- HOGs -RRB- , which we compute during a pre-processing step .", "label": "USED-FOR", "metadata": [11, 15, 1, 1]} | |
{"text": "The classifier we use in both stages is based on a << boosted combination of Random Ferns >> over [[ local histograms of oriented gradients -LRB- HOGs -RRB- ]] , which we compute during a pre-processing step .", "label": "FEATURE-OF", "metadata": [17, 24, 11, 15]} | |
{"text": "The classifier we use in both stages is based on a boosted combination of Random Ferns over << local histograms of oriented gradients -LRB- HOGs -RRB- >> , which we compute during a [[ pre-processing step ]] .", "label": "USED-FOR", "metadata": [31, 32, 17, 24]} | |
{"text": "Both the use of [[ supervised learning ]] and working on the gradient space makes our << approach >> robust while being efficient at run-time .", "label": "USED-FOR", "metadata": [4, 5, 14, 14]} | |
{"text": "Both the use of supervised learning and working on the [[ gradient space ]] makes our << approach >> robust while being efficient at run-time .", "label": "USED-FOR", "metadata": [10, 11, 14, 14]} | |
{"text": "We show these properties by thorough testing on standard databases and on a new << database >> made of [[ motorbikes under planar rotations ]] , and with challenging conditions such as cluttered backgrounds , changing illumination conditions and partial occlusions .", "label": "FEATURE-OF", "metadata": [17, 20, 14, 14]} | |
{"text": "We show these properties by thorough testing on standard databases and on a new << database >> made of motorbikes under planar rotations , and with challenging [[ conditions ]] such as cluttered backgrounds , changing illumination conditions and partial occlusions .", "label": "FEATURE-OF", "metadata": [25, 25, 14, 14]} | |
{"text": "We show these properties by thorough testing on standard databases and on a new database made of motorbikes under planar rotations , and with challenging << conditions >> such as [[ cluttered backgrounds ]] , changing illumination conditions and partial occlusions .", "label": "HYPONYM-OF", "metadata": [28, 29, 25, 25]} | |
{"text": "We show these properties by thorough testing on standard databases and on a new database made of motorbikes under planar rotations , and with challenging conditions such as [[ cluttered backgrounds ]] , << changing illumination conditions >> and partial occlusions .", "label": "CONJUNCTION", "metadata": [28, 29, 31, 33]} | |
{"text": "We show these properties by thorough testing on standard databases and on a new database made of motorbikes under planar rotations , and with challenging << conditions >> such as cluttered backgrounds , [[ changing illumination conditions ]] and partial occlusions .", "label": "HYPONYM-OF", "metadata": [31, 33, 25, 25]} | |
{"text": "We show these properties by thorough testing on standard databases and on a new database made of motorbikes under planar rotations , and with challenging conditions such as cluttered backgrounds , [[ changing illumination conditions ]] and << partial occlusions >> .", "label": "CONJUNCTION", "metadata": [31, 33, 35, 36]} | |
{"text": "We show these properties by thorough testing on standard databases and on a new database made of motorbikes under planar rotations , and with challenging << conditions >> such as cluttered backgrounds , changing illumination conditions and [[ partial occlusions ]] .", "label": "HYPONYM-OF", "metadata": [35, 36, 25, 25]} | |
{"text": "A very simple improved [[ duration model ]] has reduced the error rate by about 10 % in both << triphone and semiphone systems >> .", "label": "USED-FOR", "metadata": [4, 5, 17, 20]} | |
{"text": "A very simple improved duration model has reduced the [[ error rate ]] by about 10 % in both << triphone and semiphone systems >> .", "label": "EVALUATE-FOR", "metadata": [9, 10, 17, 20]} | |
{"text": "A new << training strategy >> has been tested which , by itself , did not provide useful improvements but suggests that improvements can be obtained by a related [[ rapid adaptation technique ]] .", "label": "USED-FOR", "metadata": [27, 29, 2, 3]} | |
{"text": "Finally , the << recognizer >> has been modified to use [[ bigram back-off language models ]] .", "label": "USED-FOR", "metadata": [9, 12, 3, 3]} | |
{"text": "The [[ system ]] was then transferred from the << RM task >> to the ATIS CSR task and a limited number of development tests performed .", "label": "USED-FOR", "metadata": [1, 1, 7, 8]} | |
{"text": "The [[ system ]] was then transferred from the RM task to the << ATIS CSR task >> and a limited number of development tests performed .", "label": "USED-FOR", "metadata": [1, 1, 11, 13]} | |
{"text": "The system was then transferred from the [[ RM task ]] to the << ATIS CSR task >> and a limited number of development tests performed .", "label": "CONJUNCTION", "metadata": [7, 8, 11, 13]} | |
{"text": "A new [[ approach ]] for << Interactive Machine Translation >> where the author interacts during the creation or the modification of the document is proposed .", "label": "USED-FOR", "metadata": [2, 2, 4, 6]} | |
{"text": "This paper presents a new << interactive disambiguation scheme >> based on the [[ paraphrasing ]] of a parser 's multiple output .", "label": "USED-FOR", "metadata": [11, 11, 5, 7]} | |
{"text": "We describe a novel [[ approach ]] to << statistical machine translation >> that combines syntactic information in the source language with recent advances in phrasal translation .", "label": "USED-FOR", "metadata": [4, 4, 6, 8]} | |
{"text": "We describe a novel << approach >> to statistical machine translation that combines [[ syntactic information ]] in the source language with recent advances in phrasal translation .", "label": "PART-OF", "metadata": [11, 12, 4, 4]} | |
{"text": "We describe a novel approach to statistical machine translation that combines [[ syntactic information ]] in the source language with recent advances in << phrasal translation >> .", "label": "CONJUNCTION", "metadata": [11, 12, 21, 22]} | |
{"text": "We describe a novel << approach >> to statistical machine translation that combines syntactic information in the source language with recent advances in [[ phrasal translation ]] .", "label": "PART-OF", "metadata": [21, 22, 4, 4]} | |
{"text": "This << method >> requires a [[ source-language dependency parser ]] , target language word segmentation and an unsupervised word alignment component .", "label": "USED-FOR", "metadata": [4, 6, 1, 1]} | |
{"text": "This method requires a [[ source-language dependency parser ]] , << target language word segmentation >> and an unsupervised word alignment component .", "label": "CONJUNCTION", "metadata": [4, 6, 8, 11]} | |
{"text": "This << method >> requires a source-language dependency parser , [[ target language word segmentation ]] and an unsupervised word alignment component .", "label": "USED-FOR", "metadata": [8, 11, 1, 1]} | |
{"text": "This method requires a source-language dependency parser , [[ target language word segmentation ]] and an << unsupervised word alignment component >> .", "label": "CONJUNCTION", "metadata": [8, 11, 14, 17]} | |
{"text": "This << method >> requires a source-language dependency parser , target language word segmentation and an [[ unsupervised word alignment component ]] .", "label": "USED-FOR", "metadata": [14, 17, 1, 1]} | |
{"text": "We describe an efficient decoder and show that using these [[ tree-based models ]] in combination with conventional << SMT models >> provides a promising approach that incorporates the power of phrasal SMT with the linguistic generality available in a parser .", "label": "CONJUNCTION", "metadata": [10, 11, 16, 17]} | |
{"text": "We describe an efficient decoder and show that using these [[ tree-based models ]] in combination with conventional SMT models provides a promising << approach >> that incorporates the power of phrasal SMT with the linguistic generality available in a parser .", "label": "USED-FOR", "metadata": [10, 11, 21, 21]} | |
{"text": "We describe an efficient decoder and show that using these tree-based models in combination with conventional [[ SMT models ]] provides a promising << approach >> that incorporates the power of phrasal SMT with the linguistic generality available in a parser .", "label": "USED-FOR", "metadata": [16, 17, 21, 21]} | |
{"text": "We describe an efficient decoder and show that using these tree-based models in combination with conventional SMT models provides a promising approach that incorporates the power of [[ phrasal SMT ]] with the << linguistic generality >> available in a parser .", "label": "CONJUNCTION", "metadata": [27, 28, 31, 32]} | |
{"text": "We describe an efficient decoder and show that using these tree-based models in combination with conventional SMT models provides a promising approach that incorporates the power of [[ phrasal SMT ]] with the linguistic generality available in a << parser >> .", "label": "USED-FOR", "metadata": [27, 28, 36, 36]} | |
{"text": "We describe an efficient decoder and show that using these tree-based models in combination with conventional SMT models provides a promising approach that incorporates the power of phrasal SMT with the [[ linguistic generality ]] available in a << parser >> .", "label": "FEATURE-OF", "metadata": [31, 32, 36, 36]} | |
{"text": "<< Video >> provides not only rich [[ visual cues ]] such as motion and appearance , but also much less explored long-range temporal interactions among objects .", "label": "FEATURE-OF", "metadata": [5, 6, 0, 0]} | |
{"text": "Video provides not only rich << visual cues >> such as [[ motion ]] and appearance , but also much less explored long-range temporal interactions among objects .", "label": "HYPONYM-OF", "metadata": [9, 9, 5, 6]} | |
{"text": "Video provides not only rich visual cues such as [[ motion ]] and << appearance >> , but also much less explored long-range temporal interactions among objects .", "label": "CONJUNCTION", "metadata": [9, 9, 11, 11]} | |
{"text": "Video provides not only rich << visual cues >> such as motion and [[ appearance ]] , but also much less explored long-range temporal interactions among objects .", "label": "HYPONYM-OF", "metadata": [11, 11, 5, 6]} | |
{"text": "We aim to capture such interactions and to construct a powerful [[ intermediate-level video representation ]] for subsequent << recognition >> .", "label": "USED-FOR", "metadata": [11, 13, 16, 16]} | |
{"text": "First , we develop an efficient << spatio-temporal video segmentation algorithm >> , which naturally incorporates [[ long-range motion cues ]] from the past and future frames in the form of clusters of point tracks with coherent motion .", "label": "USED-FOR", "metadata": [14, 16, 6, 9]} | |
{"text": "First , we develop an efficient spatio-temporal video segmentation algorithm , which naturally incorporates << long-range motion cues >> from the past and future frames in the form of [[ clusters of point tracks ]] with coherent motion .", "label": "USED-FOR", "metadata": [27, 30, 14, 16]} | |
{"text": "Second , we devise a new << track clustering cost function >> that includes [[ occlusion reasoning ]] , in the form of depth ordering constraints , as well as motion similarity along the tracks .", "label": "PART-OF", "metadata": [12, 13, 6, 9]} | |
{"text": "Second , we devise a new track clustering cost function that includes << occlusion reasoning >> , in the form of [[ depth ordering constraints ]] , as well as motion similarity along the tracks .", "label": "FEATURE-OF", "metadata": [19, 21, 12, 13]} | |
{"text": "Second , we devise a new << track clustering cost function >> that includes occlusion reasoning , in the form of depth ordering constraints , as well as [[ motion similarity ]] along the tracks .", "label": "PART-OF", "metadata": [26, 27, 6, 9]} | |
{"text": "We evaluate the proposed << approach >> on a challenging set of [[ video sequences of office scenes ]] from feature length movies .", "label": "EVALUATE-FOR", "metadata": [10, 14, 4, 4]} | |
{"text": "In this paper , we introduce [[ KAZE features ]] , a novel << multiscale 2D feature detection and description algorithm >> in nonlinear scale spaces .", "label": "HYPONYM-OF", "metadata": [6, 7, 11, 17]} | |
{"text": "In this paper , we introduce KAZE features , a novel << multiscale 2D feature detection and description algorithm >> in [[ nonlinear scale spaces ]] .", "label": "FEATURE-OF", "metadata": [19, 21, 11, 17]} | |
{"text": "In contrast , we detect and describe << 2D features >> in a [[ nonlinear scale space ]] by means of nonlinear diffusion filtering .", "label": "FEATURE-OF", "metadata": [11, 13, 7, 8]} | |
{"text": "In contrast , we detect and describe << 2D features >> in a nonlinear scale space by means of [[ nonlinear diffusion filtering ]] .", "label": "USED-FOR", "metadata": [17, 19, 7, 8]} | |
{"text": "The << nonlinear scale space >> is built using efficient [[ Additive Operator Splitting -LRB- AOS -RRB- techniques ]] and variable con-ductance diffusion .", "label": "USED-FOR", "metadata": [8, 14, 1, 3]} | |
{"text": "The nonlinear scale space is built using efficient [[ Additive Operator Splitting -LRB- AOS -RRB- techniques ]] and << variable con-ductance diffusion >> .", "label": "CONJUNCTION", "metadata": [8, 14, 16, 18]} | |
{"text": "The << nonlinear scale space >> is built using efficient Additive Operator Splitting -LRB- AOS -RRB- techniques and [[ variable con-ductance diffusion ]] .", "label": "USED-FOR", "metadata": [16, 18, 1, 3]} | |
{"text": "Even though our [[ features ]] are somewhat more expensive to compute than << SURF >> due to the construction of the nonlinear scale space , but comparable to SIFT , our results reveal a step forward in performance both in detection and description against previous state-of-the-art methods .", "label": "COMPARE", "metadata": [3, 3, 11, 11]} | |
{"text": "Even though our [[ features ]] are somewhat more expensive to compute than SURF due to the construction of the nonlinear scale space , but comparable to << SIFT >> , our results reveal a step forward in performance both in detection and description against previous state-of-the-art methods .", "label": "COMPARE", "metadata": [3, 3, 25, 25]} | |
{"text": "Even though our features are somewhat more expensive to compute than SURF due to the construction of the nonlinear scale space , but comparable to SIFT , our [[ results ]] reveal a step forward in performance both in detection and description against previous << state-of-the-art methods >> .", "label": "COMPARE", "metadata": [28, 28, 42, 43]} | |
{"text": "Even though our features are somewhat more expensive to compute than SURF due to the construction of the nonlinear scale space , but comparable to SIFT , our << results >> reveal a step forward in performance both in [[ detection ]] and description against previous state-of-the-art methods .", "label": "EVALUATE-FOR", "metadata": [37, 37, 28, 28]} | |
{"text": "Even though our features are somewhat more expensive to compute than SURF due to the construction of the nonlinear scale space , but comparable to SIFT , our results reveal a step forward in performance both in [[ detection ]] and << description >> against previous state-of-the-art methods .", "label": "CONJUNCTION", "metadata": [37, 37, 39, 39]} | |
{"text": "Even though our features are somewhat more expensive to compute than SURF due to the construction of the nonlinear scale space , but comparable to SIFT , our results reveal a step forward in performance both in [[ detection ]] and description against previous << state-of-the-art methods >> .", "label": "EVALUATE-FOR", "metadata": [37, 37, 42, 43]} | |
{"text": "Even though our features are somewhat more expensive to compute than SURF due to the construction of the nonlinear scale space , but comparable to SIFT , our << results >> reveal a step forward in performance both in detection and [[ description ]] against previous state-of-the-art methods .", "label": "EVALUATE-FOR", "metadata": [39, 39, 28, 28]} | |
{"text": "Even though our features are somewhat more expensive to compute than SURF due to the construction of the nonlinear scale space , but comparable to SIFT , our results reveal a step forward in performance both in detection and [[ description ]] against previous << state-of-the-art methods >> .", "label": "EVALUATE-FOR", "metadata": [39, 39, 42, 43]} | |
{"text": "[[ Creating summaries ]] on lengthy Semantic Web documents for quick << identification of the corresponding entity >> has been of great contemporary interest .", "label": "USED-FOR", "metadata": [0, 1, 9, 13]} | |
{"text": "<< Creating summaries >> on [[ lengthy Semantic Web documents ]] for quick identification of the corresponding entity has been of great contemporary interest .", "label": "USED-FOR", "metadata": [3, 6, 0, 1]} | |
{"text": "Specifically , we highlight the importance of << diversified -LRB- faceted -RRB- summaries >> by combining three dimensions : [[ diversity ]] , uniqueness , and popularity .", "label": "FEATURE-OF", "metadata": [17, 17, 7, 11]} | |
{"text": "Specifically , we highlight the importance of diversified -LRB- faceted -RRB- summaries by combining three dimensions : [[ diversity ]] , << uniqueness >> , and popularity .", "label": "CONJUNCTION", "metadata": [17, 17, 19, 19]} | |
{"text": "Specifically , we highlight the importance of << diversified -LRB- faceted -RRB- summaries >> by combining three dimensions : diversity , [[ uniqueness ]] , and popularity .", "label": "FEATURE-OF", "metadata": [19, 19, 7, 11]} | |
{"text": "Specifically , we highlight the importance of diversified -LRB- faceted -RRB- summaries by combining three dimensions : diversity , [[ uniqueness ]] , and << popularity >> .", "label": "CONJUNCTION", "metadata": [19, 19, 22, 22]} | |
{"text": "Specifically , we highlight the importance of << diversified -LRB- faceted -RRB- summaries >> by combining three dimensions : diversity , uniqueness , and [[ popularity ]] .", "label": "FEATURE-OF", "metadata": [22, 22, 7, 11]} | |
{"text": "Our novel << diversity-aware entity summarization approach >> mimics [[ human conceptual clustering techniques ]] to group facts , and picks representative facts from each group to form concise -LRB- i.e. , short -RRB- and comprehensive -LRB- i.e. , improved coverage through diversity -RRB- summaries .", "label": "USED-FOR", "metadata": [7, 10, 2, 5]} | |
{"text": "We evaluate our [[ approach ]] against the state-of-the-art techniques and show that our work improves both the quality and the efficiency of << entity summarization >> .", "label": "USED-FOR", "metadata": [3, 3, 21, 22]} | |
{"text": "We evaluate our << approach >> against the [[ state-of-the-art techniques ]] and show that our work improves both the quality and the efficiency of entity summarization .", "label": "COMPARE", "metadata": [6, 7, 3, 3]} | |
{"text": "We evaluate our approach against the [[ state-of-the-art techniques ]] and show that our work improves both the quality and the efficiency of << entity summarization >> .", "label": "USED-FOR", "metadata": [6, 7, 21, 22]} | |
{"text": "We evaluate our approach against the state-of-the-art techniques and show that our work improves both the [[ quality ]] and the efficiency of << entity summarization >> .", "label": "EVALUATE-FOR", "metadata": [16, 16, 21, 22]} | |
{"text": "We evaluate our approach against the state-of-the-art techniques and show that our work improves both the quality and the [[ efficiency ]] of << entity summarization >> .", "label": "EVALUATE-FOR", "metadata": [19, 19, 21, 22]} | |
{"text": "We present a [[ framework ]] for the << fast computation of lexical affinity models >> .", "label": "USED-FOR", "metadata": [3, 3, 6, 11]} | |
{"text": "The << framework >> is composed of a novel [[ algorithm ]] to efficiently compute the co-occurrence distribution between pairs of terms , an independence model , and a parametric affinity model .", "label": "PART-OF", "metadata": [7, 7, 1, 1]} | |
{"text": "The framework is composed of a novel [[ algorithm ]] to efficiently compute the << co-occurrence distribution >> between pairs of terms , an independence model , and a parametric affinity model .", "label": "USED-FOR", "metadata": [7, 7, 12, 13]} | |
{"text": "The framework is composed of a novel [[ algorithm ]] to efficiently compute the co-occurrence distribution between pairs of terms , an << independence model >> , and a parametric affinity model .", "label": "CONJUNCTION", "metadata": [7, 7, 20, 21]} | |
{"text": "The << framework >> is composed of a novel algorithm to efficiently compute the co-occurrence distribution between pairs of terms , an [[ independence model ]] , and a parametric affinity model .", "label": "PART-OF", "metadata": [20, 21, 1, 1]} | |
{"text": "The framework is composed of a novel algorithm to efficiently compute the co-occurrence distribution between pairs of terms , an [[ independence model ]] , and a << parametric affinity model >> .", "label": "CONJUNCTION", "metadata": [20, 21, 25, 27]} | |
{"text": "The << framework >> is composed of a novel algorithm to efficiently compute the co-occurrence distribution between pairs of terms , an independence model , and a [[ parametric affinity model ]] .", "label": "PART-OF", "metadata": [25, 27, 1, 1]} | |
{"text": "In comparison with previous models , which either use arbitrary windows to compute similarity between words or use [[ lexical affinity ]] to create << sequential models >> , in this paper we focus on models intended to capture the co-occurrence patterns of any pair of words or phrases at any distance in the corpus .", "label": "USED-FOR", "metadata": [18, 19, 22, 23]} | |
{"text": "In comparison with previous << models >> , which either use arbitrary windows to compute similarity between words or use lexical affinity to create sequential models , in this paper we focus on [[ models ]] intended to capture the co-occurrence patterns of any pair of words or phrases at any distance in the corpus .", "label": "COMPARE", "metadata": [31, 31, 4, 4]} | |
{"text": "In comparison with previous models , which either use arbitrary windows to compute similarity between words or use lexical affinity to create sequential models , in this paper we focus on [[ models ]] intended to capture the << co-occurrence patterns >> of any pair of words or phrases at any distance in the corpus .", "label": "USED-FOR", "metadata": [31, 31, 36, 37]} | |
{"text": "We apply [[ it ]] in combination with a terabyte corpus to answer << natural language tests >> , achieving encouraging results .", "label": "USED-FOR", "metadata": [2, 2, 11, 13]} | |
{"text": "We apply << it >> in combination with a [[ terabyte corpus ]] to answer natural language tests , achieving encouraging results .", "label": "EVALUATE-FOR", "metadata": [7, 8, 2, 2]} | |
{"text": "This paper introduces a [[ system ]] for << categorizing unknown words >> .", "label": "USED-FOR", "metadata": [4, 4, 6, 8]} | |
{"text": "The << system >> is based on a [[ multi-component architecture ]] where each component is responsible for identifying one class of unknown words .", "label": "USED-FOR", "metadata": [6, 7, 1, 1]} | |
{"text": "The system is based on a << multi-component architecture >> where each [[ component ]] is responsible for identifying one class of unknown words .", "label": "PART-OF", "metadata": [10, 10, 6, 7]} | |
{"text": "The system is based on a multi-component architecture where each [[ component ]] is responsible for identifying one class of << unknown words >> .", "label": "USED-FOR", "metadata": [10, 10, 18, 19]} | |
{"text": "The focus of this paper is the [[ components ]] that identify << names >> and spelling errors .", "label": "USED-FOR", "metadata": [7, 7, 10, 10]} | |
{"text": "The focus of this paper is the [[ components ]] that identify names and << spelling errors >> .", "label": "USED-FOR", "metadata": [7, 7, 12, 13]} | |
{"text": "The focus of this paper is the components that identify [[ names ]] and << spelling errors >> .", "label": "CONJUNCTION", "metadata": [10, 10, 12, 13]} | |
{"text": "Each << component >> uses a [[ decision tree architecture ]] to combine multiple types of evidence about the unknown word .", "label": "USED-FOR", "metadata": [4, 6, 1, 1]} | |
{"text": "The << system >> is evaluated using data from [[ live closed captions ]] - a genre replete with a wide variety of unknown words .", "label": "EVALUATE-FOR", "metadata": [7, 9, 1, 1]} | |
{"text": "At MIT Lincoln Laboratory , we have been developing a << Korean-to-English machine translation system >> [[ CCLINC -LRB- Common Coalition Language System at Lincoln Laboratory -RRB- ]] .", "label": "HYPONYM-OF", "metadata": [14, 23, 10, 13]} | |
{"text": "The << CCLINC Korean-to-English translation system >> consists of two [[ core modules ]] , language understanding and generation modules mediated by a language neutral meaning representation called a semantic frame .", "label": "PART-OF", "metadata": [8, 9, 1, 4]} | |
{"text": "The CCLINC Korean-to-English translation system consists of two core modules , << language understanding and generation modules >> mediated by a [[ language neutral meaning representation ]] called a semantic frame .", "label": "USED-FOR", "metadata": [19, 22, 11, 15]} | |
{"text": "The CCLINC Korean-to-English translation system consists of two core modules , language understanding and generation modules mediated by a << language neutral meaning representation >> called a [[ semantic frame ]] .", "label": "HYPONYM-OF", "metadata": [25, 26, 19, 22]} | |
{"text": "The key features of the system include : -LRB- i -RRB- Robust efficient parsing of [[ Korean ]] -LRB- a << verb final language >> with overt case markers , relatively free word order , and frequent omissions of arguments -RRB- .", "label": "HYPONYM-OF", "metadata": [15, 15, 18, 20]} | |
{"text": "The key features of the system include : -LRB- i -RRB- Robust efficient parsing of Korean -LRB- a << verb final language >> with [[ overt case markers ]] , relatively free word order , and frequent omissions of arguments -RRB- .", "label": "FEATURE-OF", "metadata": [22, 24, 18, 20]} | |
{"text": "-LRB- ii -RRB- High quality << translation >> via [[ word sense disambiguation ]] and accurate word order generation of the target language .", "label": "USED-FOR", "metadata": [7, 9, 5, 5]} | |
{"text": "-LRB- ii -RRB- High quality translation via [[ word sense disambiguation ]] and accurate << word order generation >> of the target language .", "label": "CONJUNCTION", "metadata": [7, 9, 12, 14]} | |
{"text": "-LRB- ii -RRB- High quality << translation >> via word sense disambiguation and accurate [[ word order generation ]] of the target language .", "label": "USED-FOR", "metadata": [12, 14, 5, 5]} | |
{"text": "Having been trained on [[ Korean newspaper articles ]] on missiles and chemical biological warfare , the << system >> produces the translation output sufficient for content understanding of the original document .", "label": "USED-FOR", "metadata": [4, 6, 15, 15]} | |
{"text": "Having been trained on << Korean newspaper articles >> on [[ missiles and chemical biological warfare ]] , the system produces the translation output sufficient for content understanding of the original document .", "label": "FEATURE-OF", "metadata": [8, 12, 4, 6]} | |
{"text": "The [[ JAVELIN system ]] integrates a flexible , planning-based architecture with a variety of language processing modules to provide an << open-domain question answering capability >> on free text .", "label": "USED-FOR", "metadata": [1, 2, 19, 22]} | |
{"text": "The << JAVELIN system >> integrates a flexible , [[ planning-based architecture ]] with a variety of language processing modules to provide an open-domain question answering capability on free text .", "label": "PART-OF", "metadata": [7, 8, 1, 2]} | |
{"text": "The << JAVELIN system >> integrates a flexible , planning-based architecture with a variety of [[ language processing modules ]] to provide an open-domain question answering capability on free text .", "label": "PART-OF", "metadata": [13, 15, 1, 2]} | |
{"text": "The JAVELIN system integrates a flexible , << planning-based architecture >> with a variety of [[ language processing modules ]] to provide an open-domain question answering capability on free text .", "label": "CONJUNCTION", "metadata": [13, 15, 7, 8]} | |
{"text": "We present the first application of the [[ head-driven statistical parsing model ]] of Collins -LRB- 1999 -RRB- as a << simultaneous language model >> and parser for large-vocabulary speech recognition .", "label": "USED-FOR", "metadata": [7, 10, 18, 20]} | |
{"text": "We present the first application of the [[ head-driven statistical parsing model ]] of Collins -LRB- 1999 -RRB- as a simultaneous language model and << parser >> for large-vocabulary speech recognition .", "label": "USED-FOR", "metadata": [7, 10, 22, 22]} | |
{"text": "We present the first application of the head-driven statistical parsing model of Collins -LRB- 1999 -RRB- as a [[ simultaneous language model ]] and << parser >> for large-vocabulary speech recognition .", "label": "CONJUNCTION", "metadata": [18, 20, 22, 22]} | |
{"text": "We present the first application of the head-driven statistical parsing model of Collins -LRB- 1999 -RRB- as a [[ simultaneous language model ]] and parser for << large-vocabulary speech recognition >> .", "label": "USED-FOR", "metadata": [18, 20, 24, 26]} | |
{"text": "We present the first application of the head-driven statistical parsing model of Collins -LRB- 1999 -RRB- as a simultaneous language model and [[ parser ]] for << large-vocabulary speech recognition >> .", "label": "USED-FOR", "metadata": [22, 22, 24, 26]} | |
{"text": "The [[ model ]] is adapted to an << online left to right chart-parser >> for word lattices , integrating acoustic , n-gram , and parser probabilities .", "label": "USED-FOR", "metadata": [1, 1, 6, 10]} | |
{"text": "The model is adapted to an [[ online left to right chart-parser ]] for << word lattices >> , integrating acoustic , n-gram , and parser probabilities .", "label": "USED-FOR", "metadata": [6, 10, 12, 13]} | |
{"text": "The model is adapted to an << online left to right chart-parser >> for word lattices , integrating [[ acoustic , n-gram , and parser probabilities ]] .", "label": "PART-OF", "metadata": [16, 22, 6, 10]} | |
{"text": "The << parser >> uses [[ structural and lexical dependencies ]] not considered by n-gram models , conditioning recognition on more linguistically-grounded relationships .", "label": "USED-FOR", "metadata": [3, 6, 1, 1]} | |
{"text": "Experiments on the [[ Wall Street Journal treebank ]] and << lattice corpora >> show word error rates competitive with the standard n-gram language model while extracting additional structural information useful for speech understanding .", "label": "CONJUNCTION", "metadata": [3, 6, 8, 9]} | |
{"text": "Experiments on the [[ Wall Street Journal treebank ]] and lattice corpora show word error rates competitive with the standard << n-gram language model >> while extracting additional structural information useful for speech understanding .", "label": "EVALUATE-FOR", "metadata": [3, 6, 18, 20]} | |
{"text": "Experiments on the Wall Street Journal treebank and [[ lattice corpora ]] show word error rates competitive with the standard << n-gram language model >> while extracting additional structural information useful for speech understanding .", "label": "EVALUATE-FOR", "metadata": [8, 9, 18, 20]} | |
{"text": "Experiments on the Wall Street Journal treebank and lattice corpora show [[ word error rates ]] competitive with the standard << n-gram language model >> while extracting additional structural information useful for speech understanding .", "label": "EVALUATE-FOR", "metadata": [11, 13, 18, 20]} | |
{"text": "Experiments on the Wall Street Journal treebank and lattice corpora show word error rates competitive with the standard n-gram language model while extracting additional [[ structural information ]] useful for << speech understanding >> .", "label": "USED-FOR", "metadata": [24, 25, 28, 29]} | |
{"text": "[[ Image composition -LRB- or mosaicing -RRB- ]] has attracted a growing attention in recent years as one of the main elements in << video analysis and representation >> .", "label": "PART-OF", "metadata": [0, 5, 21, 24]} | |
{"text": "In this paper we deal with the problem of [[ global alignment ]] and << super-resolution >> .", "label": "CONJUNCTION", "metadata": [9, 10, 12, 12]} | |
{"text": "We also propose to evaluate the quality of the resulting << mosaic >> by measuring the [[ amount of blurring ]] .", "label": "EVALUATE-FOR", "metadata": [14, 16, 10, 10]} | |
{"text": "<< Global registration >> is achieved by combining a [[ graph-based technique ]] -- that exploits the topological structure of the sequence induced by the spatial overlap -- with a bundle adjustment which uses only the homographies computed in the previous steps .", "label": "USED-FOR", "metadata": [7, 8, 0, 1]} | |
{"text": "Global registration is achieved by combining a [[ graph-based technique ]] -- that exploits the << topological structure >> of the sequence induced by the spatial overlap -- with a bundle adjustment which uses only the homographies computed in the previous steps .", "label": "USED-FOR", "metadata": [7, 8, 13, 14]} | |
{"text": "Global registration is achieved by combining a [[ graph-based technique ]] -- that exploits the topological structure of the sequence induced by the spatial overlap -- with a << bundle adjustment >> which uses only the homographies computed in the previous steps .", "label": "CONJUNCTION", "metadata": [7, 8, 26, 27]} | |
{"text": "<< Global registration >> is achieved by combining a graph-based technique -- that exploits the topological structure of the sequence induced by the spatial overlap -- with a [[ bundle adjustment ]] which uses only the homographies computed in the previous steps .", "label": "USED-FOR", "metadata": [26, 27, 0, 1]} | |
{"text": "Global registration is achieved by combining a graph-based technique -- that exploits the topological structure of the sequence induced by the spatial overlap -- with a << bundle adjustment >> which uses only the [[ homographies ]] computed in the previous steps .", "label": "USED-FOR", "metadata": [32, 32, 26, 27]} | |
{"text": "Experimental comparison with other << techniques >> shows the effectiveness of our [[ approach ]] .", "label": "COMPARE", "metadata": [10, 10, 4, 4]} | |
{"text": "The main of this project is << computer-assisted acquisition and morpho-syntactic description of verb-noun collocations >> in [[ Polish ]] .", "label": "USED-FOR", "metadata": [15, 15, 6, 13]} | |
{"text": "We present methodology and resources obtained in three main project << phases >> which are : [[ dictionary-based acquisition of collocation lexicon ]] , feasibility study for corpus-based lexicon enlargement phase , corpus-based lexicon enlargement and collocation description .", "label": "HYPONYM-OF", "metadata": [14, 18, 10, 10]} | |
{"text": "We present methodology and resources obtained in three main project phases which are : [[ dictionary-based acquisition of collocation lexicon ]] , << feasibility study >> for corpus-based lexicon enlargement phase , corpus-based lexicon enlargement and collocation description .", "label": "CONJUNCTION", "metadata": [14, 18, 20, 21]} | |
{"text": "We present methodology and resources obtained in three main project << phases >> which are : dictionary-based acquisition of collocation lexicon , [[ feasibility study ]] for corpus-based lexicon enlargement phase , corpus-based lexicon enlargement and collocation description .", "label": "HYPONYM-OF", "metadata": [20, 21, 10, 10]} | |
{"text": "We present methodology and resources obtained in three main project phases which are : dictionary-based acquisition of collocation lexicon , [[ feasibility study ]] for << corpus-based lexicon enlargement phase >> , corpus-based lexicon enlargement and collocation description .", "label": "USED-FOR", "metadata": [20, 21, 23, 26]} | |
{"text": "We present methodology and resources obtained in three main project << phases >> which are : dictionary-based acquisition of collocation lexicon , feasibility study for corpus-based lexicon enlargement phase , [[ corpus-based lexicon enlargement and collocation description ]] .", "label": "HYPONYM-OF", "metadata": [28, 33, 10, 10]} | |
{"text": "We present methodology and resources obtained in three main project phases which are : dictionary-based acquisition of collocation lexicon , << feasibility study >> for corpus-based lexicon enlargement phase , [[ corpus-based lexicon enlargement and collocation description ]] .", "label": "CONJUNCTION", "metadata": [28, 33, 20, 21]} | |
{"text": "The presented here [[ corpus-based approach ]] permitted us to triple the size the << verb-noun collocation dictionary >> for Polish .", "label": "USED-FOR", "metadata": [3, 4, 12, 14]} | |
{"text": "The presented here corpus-based approach permitted us to triple the size the << verb-noun collocation dictionary >> for [[ Polish ]] .", "label": "FEATURE-OF", "metadata": [16, 16, 12, 14]} | |
{"text": "Along with the increasing requirements , the [[ hash-tag recommendation task ]] for << microblogs >> has been receiving considerable attention in recent years .", "label": "USED-FOR", "metadata": [7, 9, 11, 11]} | |
{"text": "Motivated by the successful use of [[ convolutional neural networks -LRB- CNNs -RRB- ]] for many << natural language processing tasks >> , in this paper , we adopt CNNs to perform the hashtag recommendation problem .", "label": "USED-FOR", "metadata": [6, 11, 14, 17]} | |
{"text": "To incorporate the << trigger words >> whose effectiveness have been experimentally evaluated in several previous works , we propose a novel [[ architecture ]] with an attention mechanism .", "label": "USED-FOR", "metadata": [20, 20, 3, 4]} | |
{"text": "To incorporate the trigger words whose effectiveness have been experimentally evaluated in several previous works , we propose a novel << architecture >> with an [[ attention mechanism ]] .", "label": "FEATURE-OF", "metadata": [23, 24, 20, 20]} | |
{"text": "The results of experiments on the [[ data ]] collected from a real world microblogging service demonstrated that the proposed << model >> outperforms state-of-the-art methods .", "label": "EVALUATE-FOR", "metadata": [6, 6, 18, 18]} | |
{"text": "The results of experiments on the data collected from a real world microblogging service demonstrated that the proposed [[ model ]] outperforms << state-of-the-art methods >> .", "label": "COMPARE", "metadata": [18, 18, 20, 21]} | |
{"text": "By incorporating trigger words into the consideration , the relative improvement of the proposed [[ method ]] over the << state-of-the-art method >> is around 9.4 % in the F1-score .", "label": "COMPARE", "metadata": [14, 14, 17, 18]} | |
{"text": "By incorporating trigger words into the consideration , the relative improvement of the proposed method over the << state-of-the-art method >> is around 9.4 % in the [[ F1-score ]] .", "label": "EVALUATE-FOR", "metadata": [25, 25, 17, 18]} | |
{"text": "In this paper , we improve an << unsupervised learning method >> using the [[ Expectation-Maximization -LRB- EM -RRB- algorithm ]] proposed by Nigam et al. for text classification problems in order to apply it to word sense disambiguation -LRB- WSD -RRB- problems .", "label": "USED-FOR", "metadata": [12, 16, 7, 9]} | |
{"text": "In this paper , we improve an unsupervised learning method using the [[ Expectation-Maximization -LRB- EM -RRB- algorithm ]] proposed by Nigam et al. for << text classification problems >> in order to apply it to word sense disambiguation -LRB- WSD -RRB- problems .", "label": "USED-FOR", "metadata": [12, 16, 23, 25]} | |
{"text": "In this paper , we improve an unsupervised learning method using the Expectation-Maximization -LRB- EM -RRB- algorithm proposed by Nigam et al. for text classification problems in order to apply [[ it ]] to << word sense disambiguation -LRB- WSD -RRB- problems >> .", "label": "USED-FOR", "metadata": [30, 30, 32, 38]} | |
{"text": "In experiments , we solved 50 noun WSD problems in the [[ Japanese Dictionary Task ]] in << SENSEVAL2 >> .", "label": "FEATURE-OF", "metadata": [11, 13, 15, 15]} | |
{"text": "Furthermore , our [[ methods ]] were confirmed to be effective also for << verb WSD problems >> .", "label": "USED-FOR", "metadata": [3, 3, 11, 13]} | |
{"text": "[[ Dividing sentences in chunks of words ]] is a useful preprocessing step for << parsing >> , information extraction and information retrieval .", "label": "USED-FOR", "metadata": [0, 5, 12, 12]} | |
{"text": "[[ Dividing sentences in chunks of words ]] is a useful preprocessing step for parsing , << information extraction >> and information retrieval .", "label": "USED-FOR", "metadata": [0, 5, 14, 15]} | |
{"text": "[[ Dividing sentences in chunks of words ]] is a useful preprocessing step for parsing , information extraction and << information retrieval >> .", "label": "USED-FOR", "metadata": [0, 5, 17, 18]} | |
{"text": "Dividing sentences in chunks of words is a useful preprocessing step for [[ parsing ]] , << information extraction >> and information retrieval .", "label": "CONJUNCTION", "metadata": [12, 12, 14, 15]} | |
{"text": "Dividing sentences in chunks of words is a useful preprocessing step for parsing , [[ information extraction ]] and << information retrieval >> .", "label": "CONJUNCTION", "metadata": [14, 15, 17, 18]} | |
{"text": "-LRB- Ramshaw and Marcus , 1995 -RRB- have introduced a `` convenient '' [[ data representation ]] for << chunking >> by converting it to a tagging task .", "label": "USED-FOR", "metadata": [13, 14, 16, 16]} | |
{"text": "In this paper we will examine seven different [[ data representations ]] for the problem of << recognizing noun phrase chunks >> .", "label": "USED-FOR", "metadata": [8, 9, 14, 17]} | |
{"text": "However , equipped with the most suitable [[ data representation ]] , our << memory-based learning chunker >> was able to improve the best published chunking results for a standard data set .", "label": "USED-FOR", "metadata": [7, 8, 11, 13]} | |
{"text": "However , equipped with the most suitable data representation , our << memory-based learning chunker >> was able to improve the best published chunking results for a standard [[ data set ]] .", "label": "EVALUATE-FOR", "metadata": [26, 27, 11, 13]} | |
{"text": "We focus on << FAQ-like questions and answers >> , and build our [[ system ]] around a noisy-channel architecture which exploits both a language model for answers and a transformation model for answer/question terms , trained on a corpus of 1 million question/answer pairs collected from the Web .", "label": "USED-FOR", "metadata": [11, 11, 3, 6]} | |
{"text": "We focus on FAQ-like questions and answers , and build our << system >> around a [[ noisy-channel architecture ]] which exploits both a language model for answers and a transformation model for answer/question terms , trained on a corpus of 1 million question/answer pairs collected from the Web .", "label": "USED-FOR", "metadata": [14, 15, 11, 11]} | |
{"text": "We focus on FAQ-like questions and answers , and build our system around a [[ noisy-channel architecture ]] which exploits both a << language model >> for answers and a transformation model for answer/question terms , trained on a corpus of 1 million question/answer pairs collected from the Web .", "label": "USED-FOR", "metadata": [14, 15, 20, 21]} | |
{"text": "We focus on FAQ-like questions and answers , and build our system around a [[ noisy-channel architecture ]] which exploits both a language model for answers and a << transformation model >> for answer/question terms , trained on a corpus of 1 million question/answer pairs collected from the Web .", "label": "USED-FOR", "metadata": [14, 15, 26, 27]} | |
{"text": "In this paper we evaluate four objective [[ measures of speech ]] with regards to << intelligibility prediction >> of synthesized speech in diverse noisy situations .", "label": "EVALUATE-FOR", "metadata": [7, 9, 13, 14]} | |
{"text": "In this paper we evaluate four objective measures of speech with regards to << intelligibility prediction >> of [[ synthesized speech ]] in diverse noisy situations .", "label": "USED-FOR", "metadata": [16, 17, 13, 14]} | |
{"text": "In this paper we evaluate four objective measures of speech with regards to intelligibility prediction of << synthesized speech >> in [[ diverse noisy situations ]] .", "label": "FEATURE-OF", "metadata": [19, 21, 16, 17]} | |
{"text": "We evaluated three [[ intel-ligibility measures ]] , the Dau measure , the glimpse proportion and the Speech Intelligibility Index -LRB- SII -RRB- and a << quality measure >> , the Perceptual Evaluation of Speech Quality -LRB- PESQ -RRB- .", "label": "CONJUNCTION", "metadata": [3, 4, 23, 24]} | |
{"text": "We evaluated three << intel-ligibility measures >> , the [[ Dau measure ]] , the glimpse proportion and the Speech Intelligibility Index -LRB- SII -RRB- and a quality measure , the Perceptual Evaluation of Speech Quality -LRB- PESQ -RRB- .", "label": "HYPONYM-OF", "metadata": [7, 8, 3, 4]} | |
{"text": "We evaluated three intel-ligibility measures , the [[ Dau measure ]] , the << glimpse proportion >> and the Speech Intelligibility Index -LRB- SII -RRB- and a quality measure , the Perceptual Evaluation of Speech Quality -LRB- PESQ -RRB- .", "label": "CONJUNCTION", "metadata": [7, 8, 11, 12]} | |
{"text": "We evaluated three << intel-ligibility measures >> , the Dau measure , the [[ glimpse proportion ]] and the Speech Intelligibility Index -LRB- SII -RRB- and a quality measure , the Perceptual Evaluation of Speech Quality -LRB- PESQ -RRB- .", "label": "HYPONYM-OF", "metadata": [11, 12, 3, 4]} | |
{"text": "We evaluated three intel-ligibility measures , the Dau measure , the [[ glimpse proportion ]] and the << Speech Intelligibility Index -LRB- SII -RRB- >> and a quality measure , the Perceptual Evaluation of Speech Quality -LRB- PESQ -RRB- .", "label": "CONJUNCTION", "metadata": [11, 12, 15, 20]} | |
{"text": "We evaluated three << intel-ligibility measures >> , the Dau measure , the glimpse proportion and the [[ Speech Intelligibility Index -LRB- SII -RRB- ]] and a quality measure , the Perceptual Evaluation of Speech Quality -LRB- PESQ -RRB- .", "label": "HYPONYM-OF", "metadata": [15, 20, 3, 4]} | |
{"text": "We evaluated three intel-ligibility measures , the Dau measure , the glimpse proportion and the Speech Intelligibility Index -LRB- SII -RRB- and a << quality measure >> , the [[ Perceptual Evaluation of Speech Quality -LRB- PESQ -RRB- ]] .", "label": "HYPONYM-OF", "metadata": [27, 34, 23, 24]} | |
{"text": "For the << generation of synthesized speech >> we used a state of the art [[ HMM-based speech synthesis system ]] .", "label": "USED-FOR", "metadata": [13, 16, 2, 5]} | |
{"text": "The << noisy conditions >> comprised four [[ additive noises ]] .", "label": "PART-OF", "metadata": [5, 6, 1, 2]} | |
{"text": "The [[ measures ]] were compared with << subjective intelligibility scores >> obtained in listening tests .", "label": "COMPARE", "metadata": [1, 1, 5, 7]} | |
{"text": "The results show the [[ Dau ]] and the << glimpse measures >> to be the best predictors of intelligibility , with correlations of around 0.83 to subjective scores .", "label": "CONJUNCTION", "metadata": [4, 4, 7, 8]} | |
{"text": "The results show the [[ Dau ]] and the glimpse measures to be the best << predictors of intelligibility >> , with correlations of around 0.83 to subjective scores .", "label": "HYPONYM-OF", "metadata": [4, 4, 13, 15]} | |
{"text": "The results show the [[ Dau ]] and the glimpse measures to be the best predictors of intelligibility , with correlations of around 0.83 to << subjective scores >> .", "label": "COMPARE", "metadata": [4, 4, 23, 24]} | |
{"text": "The results show the Dau and the [[ glimpse measures ]] to be the best << predictors of intelligibility >> , with correlations of around 0.83 to subjective scores .", "label": "HYPONYM-OF", "metadata": [7, 8, 13, 15]} | |
{"text": "The results show the Dau and the [[ glimpse measures ]] to be the best predictors of intelligibility , with correlations of around 0.83 to << subjective scores >> .", "label": "COMPARE", "metadata": [7, 8, 23, 24]} | |
{"text": "The results show the << Dau >> and the glimpse measures to be the best predictors of intelligibility , with [[ correlations ]] of around 0.83 to subjective scores .", "label": "EVALUATE-FOR", "metadata": [18, 18, 4, 4]} | |
{"text": "The results show the Dau and the << glimpse measures >> to be the best predictors of intelligibility , with [[ correlations ]] of around 0.83 to subjective scores .", "label": "EVALUATE-FOR", "metadata": [18, 18, 7, 8]} | |
{"text": "All [[ measures ]] gave less accurate << predictions of intelligibility >> for synthetic speech than have previously been found for natural speech ; in particular the SII measure .", "label": "EVALUATE-FOR", "metadata": [1, 1, 5, 7]} | |
{"text": "All measures gave less accurate << predictions of intelligibility >> for [[ synthetic speech ]] than have previously been found for natural speech ; in particular the SII measure .", "label": "USED-FOR", "metadata": [9, 10, 5, 7]} | |
{"text": "All measures gave less accurate predictions of intelligibility for [[ synthetic speech ]] than have previously been found for << natural speech >> ; in particular the SII measure .", "label": "COMPARE", "metadata": [9, 10, 17, 18]} | |
{"text": "All << measures >> gave less accurate predictions of intelligibility for synthetic speech than have previously been found for natural speech ; in particular the [[ SII measure ]] .", "label": "HYPONYM-OF", "metadata": [23, 24, 1, 1]} | |
{"text": "In additional experiments , we processed the << synthesized speech >> by an [[ ideal binary mask ]] before adding noise .", "label": "USED-FOR", "metadata": [11, 13, 7, 8]} | |
{"text": "The [[ Glimpse measure ]] gave the most accurate << intelligibility predictions >> in this situation .", "label": "USED-FOR", "metadata": [1, 2, 7, 8]} | |
{"text": "A [[ '' graphics for vision '' approach ]] is proposed to address the problem of << reconstruction >> from a large and imperfect data set : reconstruction on demand by tensor voting , or ROD-TV .", "label": "USED-FOR", "metadata": [1, 6, 14, 14]} | |
{"text": "A '' graphics for vision '' approach is proposed to address the problem of << reconstruction >> from a [[ large and imperfect data set ]] : reconstruction on demand by tensor voting , or ROD-TV .", "label": "USED-FOR", "metadata": [17, 21, 14, 14]} | |
{"text": "A '' graphics for vision '' approach is proposed to address the problem of reconstruction from a large and imperfect data set : << reconstruction >> on demand by [[ tensor voting ]] , or ROD-TV .", "label": "USED-FOR", "metadata": [27, 28, 23, 23]} | |
{"text": "A '' graphics for vision '' approach is proposed to address the problem of reconstruction from a large and imperfect data set : reconstruction on demand by [[ tensor voting ]] , or << ROD-TV >> .", "label": "CONJUNCTION", "metadata": [27, 28, 31, 31]} | |
{"text": "A '' graphics for vision '' approach is proposed to address the problem of reconstruction from a large and imperfect data set : << reconstruction >> on demand by tensor voting , or [[ ROD-TV ]] .", "label": "USED-FOR", "metadata": [31, 31, 23, 23]} | |
{"text": "<< ROD-TV >> simultaneously delivers good [[ efficiency ]] and robust-ness , by adapting to a continuum of primitive connectivity , view dependence , and levels of detail -LRB- LOD -RRB- .", "label": "EVALUATE-FOR", "metadata": [4, 4, 0, 0]} | |
{"text": "<< ROD-TV >> simultaneously delivers good efficiency and [[ robust-ness ]] , by adapting to a continuum of primitive connectivity , view dependence , and levels of detail -LRB- LOD -RRB- .", "label": "EVALUATE-FOR", "metadata": [6, 6, 0, 0]} | |
{"text": "ROD-TV simultaneously delivers good << efficiency >> and [[ robust-ness ]] , by adapting to a continuum of primitive connectivity , view dependence , and levels of detail -LRB- LOD -RRB- .", "label": "CONJUNCTION", "metadata": [6, 6, 4, 4]} | |
{"text": "ROD-TV simultaneously delivers good efficiency and robust-ness , by adapting to a continuum of << primitive connectivity >> , [[ view dependence ]] , and levels of detail -LRB- LOD -RRB- .", "label": "CONJUNCTION", "metadata": [17, 18, 14, 15]} | |
{"text": "ROD-TV simultaneously delivers good efficiency and robust-ness , by adapting to a continuum of primitive connectivity , << view dependence >> , and [[ levels of detail -LRB- LOD -RRB- ]] .", "label": "CONJUNCTION", "metadata": [21, 26, 17, 18]} | |
{"text": "[[ Locally inferred surface elements ]] are robust to noise and better capture << local shapes >> .", "label": "USED-FOR", "metadata": [0, 3, 11, 12]} | |
{"text": "By inferring [[ per-vertex normals ]] at sub-voxel precision on the fly , we can achieve << interpolative shading >> .", "label": "USED-FOR", "metadata": [2, 3, 14, 15]} | |
{"text": "By inferring << per-vertex normals >> at [[ sub-voxel precision ]] on the fly , we can achieve interpolative shading .", "label": "FEATURE-OF", "metadata": [5, 6, 2, 3]} | |
{"text": "By relaxing the [[ mesh connectivity requirement ]] , we extend ROD-TV and propose a simple but effective << multiscale feature extraction algorithm >> .", "label": "USED-FOR", "metadata": [3, 5, 16, 19]} | |
{"text": "By relaxing the mesh connectivity requirement , we extend [[ ROD-TV ]] and propose a simple but effective << multiscale feature extraction algorithm >> .", "label": "USED-FOR", "metadata": [9, 9, 16, 19]} | |
{"text": "<< ROD-TV >> consists of a [[ hierarchical data structure ]] that encodes different levels of detail .", "label": "PART-OF", "metadata": [4, 6, 0, 0]} | |
{"text": "The << local reconstruction algorithm >> is [[ tensor voting ]] .", "label": "HYPONYM-OF", "metadata": [5, 6, 1, 3]} | |
{"text": "<< It >> is applied on demand to the visible subset of data at a desired level of detail , by [[ traversing the data hierarchy ]] and collecting tensorial support in a neighborhood .", "label": "USED-FOR", "metadata": [19, 22, 0, 0]} | |
{"text": "It is applied on demand to the visible subset of data at a desired level of detail , by [[ traversing the data hierarchy ]] and << collecting tensorial support >> in a neighborhood .", "label": "CONJUNCTION", "metadata": [19, 22, 24, 26]} | |
{"text": "<< It >> is applied on demand to the visible subset of data at a desired level of detail , by traversing the data hierarchy and [[ collecting tensorial support ]] in a neighborhood .", "label": "USED-FOR", "metadata": [24, 26, 0, 0]} | |
{"text": "Both [[ rhetorical structure ]] and << punctuation >> have been helpful in discourse processing .", "label": "CONJUNCTION", "metadata": [1, 2, 4, 4]} | |
{"text": "Both [[ rhetorical structure ]] and punctuation have been helpful in << discourse processing >> .", "label": "USED-FOR", "metadata": [1, 2, 9, 10]} | |
{"text": "Both rhetorical structure and [[ punctuation ]] have been helpful in << discourse processing >> .", "label": "USED-FOR", "metadata": [4, 4, 9, 10]} | |
{"text": "Based on a corpus annotation project , this paper reports the discursive usage of 6 [[ Chinese punctuation marks ]] in << news commentary texts >> : Colon , Dash , Ellipsis , Exclamation Mark , Question Mark , and Semicolon .", "label": "PART-OF", "metadata": [15, 17, 19, 21]} | |
{"text": "Based on a corpus annotation project , this paper reports the discursive usage of 6 << Chinese punctuation marks >> in news commentary texts : [[ Colon ]] , Dash , Ellipsis , Exclamation Mark , Question Mark , and Semicolon .", "label": "HYPONYM-OF", "metadata": [23, 23, 15, 17]} | |
{"text": "Based on a corpus annotation project , this paper reports the discursive usage of 6 Chinese punctuation marks in news commentary texts : [[ Colon ]] , << Dash >> , Ellipsis , Exclamation Mark , Question Mark , and Semicolon .", "label": "CONJUNCTION", "metadata": [23, 23, 25, 25]} | |
{"text": "Based on a corpus annotation project , this paper reports the discursive usage of 6 << Chinese punctuation marks >> in news commentary texts : Colon , [[ Dash ]] , Ellipsis , Exclamation Mark , Question Mark , and Semicolon .", "label": "HYPONYM-OF", "metadata": [25, 25, 15, 17]} | |
{"text": "Based on a corpus annotation project , this paper reports the discursive usage of 6 Chinese punctuation marks in news commentary texts : Colon , [[ Dash ]] , << Ellipsis >> , Exclamation Mark , Question Mark , and Semicolon .", "label": "CONJUNCTION", "metadata": [25, 25, 27, 27]} | |
{"text": "Based on a corpus annotation project , this paper reports the discursive usage of 6 << Chinese punctuation marks >> in news commentary texts : Colon , Dash , [[ Ellipsis ]] , Exclamation Mark , Question Mark , and Semicolon .", "label": "HYPONYM-OF", "metadata": [27, 27, 15, 17]} | |
{"text": "Based on a corpus annotation project , this paper reports the discursive usage of 6 Chinese punctuation marks in news commentary texts : Colon , Dash , [[ Ellipsis ]] , << Exclamation Mark >> , Question Mark , and Semicolon .", "label": "CONJUNCTION", "metadata": [27, 27, 29, 30]} | |
{"text": "Based on a corpus annotation project , this paper reports the discursive usage of 6 << Chinese punctuation marks >> in news commentary texts : Colon , Dash , Ellipsis , [[ Exclamation Mark ]] , Question Mark , and Semicolon .", "label": "HYPONYM-OF", "metadata": [29, 30, 15, 17]} | |
{"text": "Based on a corpus annotation project , this paper reports the discursive usage of 6 Chinese punctuation marks in news commentary texts : Colon , Dash , Ellipsis , [[ Exclamation Mark ]] , << Question Mark >> , and Semicolon .", "label": "CONJUNCTION", "metadata": [29, 30, 32, 33]} | |
{"text": "Based on a corpus annotation project , this paper reports the discursive usage of 6 << Chinese punctuation marks >> in news commentary texts : Colon , Dash , Ellipsis , Exclamation Mark , [[ Question Mark ]] , and Semicolon .", "label": "HYPONYM-OF", "metadata": [32, 33, 15, 17]} | |
{"text": "Based on a corpus annotation project , this paper reports the discursive usage of 6 Chinese punctuation marks in news commentary texts : Colon , Dash , Ellipsis , Exclamation Mark , [[ Question Mark ]] , and << Semicolon >> .", "label": "CONJUNCTION", "metadata": [32, 33, 36, 36]} | |
{"text": "Based on a corpus annotation project , this paper reports the discursive usage of 6 << Chinese punctuation marks >> in news commentary texts : Colon , Dash , Ellipsis , Exclamation Mark , Question Mark , and [[ Semicolon ]] .", "label": "HYPONYM-OF", "metadata": [36, 36, 15, 17]} | |
{"text": "The [[ rhetorical patterns ]] of these << marks >> are compared against patterns around cue phrases in general .", "label": "FEATURE-OF", "metadata": [1, 2, 5, 5]} | |
{"text": "The [[ rhetorical patterns ]] of these marks are compared against << patterns around cue phrases >> in general .", "label": "COMPARE", "metadata": [1, 2, 9, 12]} | |
{"text": "Results show that these [[ Chinese punctuation marks ]] , though fewer in number than << cue phrases >> , are easy to identify , have strong correlation with certain relations , and can be used as distinctive indicators of nuclearity in Chinese texts .", "label": "COMPARE", "metadata": [4, 6, 13, 14]} | |
{"text": "Results show that these [[ Chinese punctuation marks ]] , though fewer in number than cue phrases , are easy to identify , have strong correlation with certain relations , and can be used as distinctive << indicators of nuclearity >> in Chinese texts .", "label": "USED-FOR", "metadata": [4, 6, 34, 36]} | |
{"text": "Results show that these Chinese punctuation marks , though fewer in number than cue phrases , are easy to identify , have strong correlation with certain relations , and can be used as distinctive << indicators of nuclearity >> in [[ Chinese texts ]] .", "label": "FEATURE-OF", "metadata": [38, 39, 34, 36]} | |
{"text": "The << features >> based on [[ Markov random field -LRB- MRF -RRB- models ]] are usually sensitive to the rotation of image textures .", "label": "USED-FOR", "metadata": [4, 10, 1, 1]} | |
{"text": "This paper develops an [[ anisotropic circular Gaussian MRF -LRB- ACGMRF -RRB- model ]] for << modelling rotated image textures >> and retrieving rotation-invariant texture features .", "label": "USED-FOR", "metadata": [4, 11, 13, 16]} | |
{"text": "This paper develops an [[ anisotropic circular Gaussian MRF -LRB- ACGMRF -RRB- model ]] for modelling rotated image textures and << retrieving rotation-invariant texture features >> .", "label": "USED-FOR", "metadata": [4, 11, 18, 21]} | |
{"text": "This paper develops an anisotropic circular Gaussian MRF -LRB- ACGMRF -RRB- model for [[ modelling rotated image textures ]] and << retrieving rotation-invariant texture features >> .", "label": "CONJUNCTION", "metadata": [13, 16, 18, 21]} | |
{"text": "To overcome the [[ singularity problem ]] of the << least squares estimate -LRB- LSE -RRB- method >> , an approximate least squares estimate -LRB- ALSE -RRB- method is proposed to estimate the parameters of the ACGMRF model .", "label": "FEATURE-OF", "metadata": [3, 4, 7, 13]} | |
{"text": "To overcome the singularity problem of the least squares estimate -LRB- LSE -RRB- method , an [[ approximate least squares estimate -LRB- ALSE -RRB- method ]] is proposed to estimate the << parameters of the ACGMRF model >> .", "label": "USED-FOR", "metadata": [16, 23, 29, 33]} | |
{"text": "The << rotation-invariant features >> can be obtained from the [[ parameters of the ACGMRF model ]] by the one-dimensional -LRB- 1-D -RRB- discrete Fourier transform -LRB- DFT -RRB- .", "label": "USED-FOR", "metadata": [8, 12, 1, 2]} | |
{"text": "The << rotation-invariant features >> can be obtained from the parameters of the ACGMRF model by the [[ one-dimensional -LRB- 1-D -RRB- discrete Fourier transform -LRB- DFT -RRB- ]] .", "label": "USED-FOR", "metadata": [15, 24, 1, 2]} | |
{"text": "Significantly improved accuracy can be achieved by applying the [[ rotation-invariant features ]] to classify << SAR -LRB- synthetic aperture radar >> -RRB- sea ice and Brodatz imagery .", "label": "USED-FOR", "metadata": [9, 10, 13, 17]} | |
{"text": "Despite much recent progress on accurate << semantic role labeling >> , previous work has largely used [[ independent classifiers ]] , possibly combined with separate label sequence models via Viterbi decoding .", "label": "USED-FOR", "metadata": [15, 16, 6, 8]} | |
{"text": "Despite much recent progress on accurate semantic role labeling , previous work has largely used [[ independent classifiers ]] , possibly combined with separate << label sequence models >> via Viterbi decoding .", "label": "CONJUNCTION", "metadata": [15, 16, 22, 24]} | |
{"text": "Despite much recent progress on accurate semantic role labeling , previous work has largely used independent classifiers , possibly combined with separate << label sequence models >> via [[ Viterbi decoding ]] .", "label": "USED-FOR", "metadata": [26, 27, 22, 24]} | |
{"text": "We show how to build a joint model of argument frames , incorporating novel [[ features ]] that model these interactions into << discriminative log-linear models >> .", "label": "PART-OF", "metadata": [14, 14, 20, 22]} | |
{"text": "This << system >> achieves an [[ error reduction ]] of 22 % on all arguments and 32 % on core arguments over a state-of-the art independent classifier for gold-standard parse trees on PropBank .", "label": "EVALUATE-FOR", "metadata": [4, 5, 1, 1]} | |
{"text": "This system achieves an [[ error reduction ]] of 22 % on all arguments and 32 % on core arguments over a state-of-the art << independent classifier >> for gold-standard parse trees on PropBank .", "label": "EVALUATE-FOR", "metadata": [4, 5, 22, 23]} | |
{"text": "This << system >> achieves an error reduction of 22 % on all arguments and 32 % on core arguments over a state-of-the art [[ independent classifier ]] for gold-standard parse trees on PropBank .", "label": "COMPARE", "metadata": [22, 23, 1, 1]} | |
{"text": "This << system >> achieves an error reduction of 22 % on all arguments and 32 % on core arguments over a state-of-the art independent classifier for [[ gold-standard parse trees ]] on PropBank .", "label": "EVALUATE-FOR", "metadata": [25, 27, 1, 1]} | |
{"text": "This system achieves an error reduction of 22 % on all arguments and 32 % on core arguments over a state-of-the art << independent classifier >> for [[ gold-standard parse trees ]] on PropBank .", "label": "EVALUATE-FOR", "metadata": [25, 27, 22, 23]} | |
{"text": "This system achieves an error reduction of 22 % on all arguments and 32 % on core arguments over a state-of-the art independent classifier for [[ gold-standard parse trees ]] on << PropBank >> .", "label": "PART-OF", "metadata": [25, 27, 29, 29]} | |
{"text": "In order to deal with << ambiguity >> , the [[ MORphological PArser MORPA ]] is provided with a probabilistic context-free grammar -LRB- PCFG -RRB- , i.e. it combines a `` conventional '' context-free morphological grammar to filter out ungrammatical segmentations with a probability-based scoring function which determines the likelihood of each successful parse .", "label": "USED-FOR", "metadata": [8, 10, 5, 5]} | |
{"text": "In order to deal with ambiguity , the << MORphological PArser MORPA >> is provided with a [[ probabilistic context-free grammar -LRB- PCFG -RRB- ]] , i.e. it combines a `` conventional '' context-free morphological grammar to filter out ungrammatical segmentations with a probability-based scoring function which determines the likelihood of each successful parse .", "label": "USED-FOR", "metadata": [15, 20, 8, 10]} | |
{"text": "In order to deal with ambiguity , the MORphological PArser MORPA is provided with a probabilistic context-free grammar -LRB- PCFG -RRB- , i.e. << it >> combines a [[ `` conventional '' context-free morphological grammar ]] to filter out ungrammatical segmentations with a probability-based scoring function which determines the likelihood of each successful parse .", "label": "USED-FOR", "metadata": [26, 31, 23, 23]} | |
{"text": "In order to deal with ambiguity , the MORphological PArser MORPA is provided with a probabilistic context-free grammar -LRB- PCFG -RRB- , i.e. it combines a [[ `` conventional '' context-free morphological grammar ]] to filter out << ungrammatical segmentations >> with a probability-based scoring function which determines the likelihood of each successful parse .", "label": "USED-FOR", "metadata": [26, 31, 35, 36]} | |
{"text": "In order to deal with ambiguity , the MORphological PArser MORPA is provided with a probabilistic context-free grammar -LRB- PCFG -RRB- , i.e. << it >> combines a `` conventional '' context-free morphological grammar to filter out ungrammatical segmentations with a [[ probability-based scoring function ]] which determines the likelihood of each successful parse .", "label": "USED-FOR", "metadata": [39, 41, 23, 23]} | |
{"text": "In order to deal with ambiguity , the MORphological PArser MORPA is provided with a probabilistic context-free grammar -LRB- PCFG -RRB- , i.e. it combines a << `` conventional '' context-free morphological grammar >> to filter out ungrammatical segmentations with a [[ probability-based scoring function ]] which determines the likelihood of each successful parse .", "label": "CONJUNCTION", "metadata": [39, 41, 26, 31]} | |
{"text": "In order to deal with ambiguity , the MORphological PArser MORPA is provided with a probabilistic context-free grammar -LRB- PCFG -RRB- , i.e. it combines a `` conventional '' context-free morphological grammar to filter out ungrammatical segmentations with a [[ probability-based scoring function ]] which determines the likelihood of each successful << parse >> .", "label": "USED-FOR", "metadata": [39, 41, 49, 49]} | |
{"text": "Test performance data will show that a [[ PCFG ]] yields good results in << morphological parsing >> .", "label": "USED-FOR", "metadata": [7, 7, 12, 13]} | |
{"text": "[[ MORPA ]] is a fully implemented << parser >> developed for use in a text-to-speech conversion system .", "label": "HYPONYM-OF", "metadata": [0, 0, 5, 5]} | |
{"text": "[[ MORPA ]] is a fully implemented parser developed for use in a << text-to-speech conversion system >> .", "label": "USED-FOR", "metadata": [0, 0, 11, 13]} | |
{"text": "MORPA is a fully implemented [[ parser ]] developed for use in a << text-to-speech conversion system >> .", "label": "USED-FOR", "metadata": [5, 5, 11, 13]} | |
{"text": "This paper describes the framework of a << Korean phonological knowledge base system >> using the [[ unification-based grammar formalism ]] : Korean Phonology Structure Grammar -LRB- KPSG -RRB- .", "label": "USED-FOR", "metadata": [14, 16, 7, 11]} | |
{"text": "This paper describes the framework of a Korean phonological knowledge base system using the << unification-based grammar formalism >> : [[ Korean Phonology Structure Grammar -LRB- KPSG -RRB- ]] .", "label": "HYPONYM-OF", "metadata": [18, 24, 14, 16]} | |
{"text": "The [[ approach ]] of << KPSG >> provides an explicit development model for constructing a computational phonological system : speech recognition and synthesis system .", "label": "USED-FOR", "metadata": [1, 1, 3, 3]} | |
{"text": "The approach of [[ KPSG ]] provides an explicit development model for constructing a computational << phonological system >> : speech recognition and synthesis system .", "label": "USED-FOR", "metadata": [3, 3, 13, 14]} | |
{"text": "We show that the proposed [[ approach ]] is more describable than other << approaches >> such as those employing a traditional generative phonological approach .", "label": "COMPARE", "metadata": [5, 5, 11, 11]} | |
{"text": "We show that the proposed approach is more describable than other approaches such as << those >> employing a traditional [[ generative phonological approach ]] .", "label": "USED-FOR", "metadata": [18, 20, 14, 14]} | |
{"text": "In this paper , we study the [[ design of core-selecting payment rules ]] for such << domains >> .", "label": "USED-FOR", "metadata": [7, 11, 14, 14]} | |
{"text": "We design two [[ core-selecting rules ]] that always satisfy << IR >> in expectation .", "label": "USED-FOR", "metadata": [3, 4, 8, 8]} | |
{"text": "To study the performance of our << rules >> we perform a [[ computational Bayes-Nash equilibrium analysis ]] .", "label": "USED-FOR", "metadata": [10, 13, 6, 6]} | |
{"text": "We show that , in equilibrium , our new [[ rules ]] have better incentives , higher efficiency , and a lower rate of ex-post IR violations than standard << core-selecting rules >> .", "label": "COMPARE", "metadata": [9, 9, 27, 28]} | |
{"text": "We show that , in equilibrium , our new << rules >> have better incentives , higher efficiency , and a lower [[ rate of ex-post IR violations ]] than standard core-selecting rules .", "label": "EVALUATE-FOR", "metadata": [20, 24, 9, 9]} | |
{"text": "We show that , in equilibrium , our new rules have better incentives , higher efficiency , and a lower [[ rate of ex-post IR violations ]] than standard << core-selecting rules >> .", "label": "EVALUATE-FOR", "metadata": [20, 24, 27, 28]} | |
{"text": "In this paper , we will describe a [[ search tool ]] for a huge set of << ngrams >> .", "label": "USED-FOR", "metadata": [8, 9, 15, 15]} | |
{"text": "This system can be a very useful [[ tool ]] for << linguistic knowledge discovery >> and other NLP tasks .", "label": "USED-FOR", "metadata": [7, 7, 9, 11]} | |
{"text": "This system can be a very useful [[ tool ]] for linguistic knowledge discovery and other << NLP tasks >> .", "label": "USED-FOR", "metadata": [7, 7, 14, 15]} | |
{"text": "This system can be a very useful tool for [[ linguistic knowledge discovery ]] and other << NLP tasks >> .", "label": "CONJUNCTION", "metadata": [9, 11, 14, 15]} | |
{"text": "This paper explores the role of [[ user modeling ]] in such << systems >> .", "label": "PART-OF", "metadata": [6, 7, 10, 10]} | |
{"text": "Since acquiring the knowledge for a [[ user model ]] is a fundamental problem in << user modeling >> , a section is devoted to this topic .", "label": "USED-FOR", "metadata": [6, 7, 13, 14]} | |
{"text": "Next , the benefits and costs of implementing a [[ user modeling component ]] for a << system >> are weighed in light of several aspects of the interaction requirements that may be imposed by the system .", "label": "PART-OF", "metadata": [9, 11, 14, 14]} | |
{"text": "[[ Information extraction techniques ]] automatically create << structured databases >> from unstructured data sources , such as the Web or newswire documents .", "label": "USED-FOR", "metadata": [0, 2, 5, 6]} | |
{"text": "<< Information extraction techniques >> automatically create structured databases from [[ unstructured data sources ]] , such as the Web or newswire documents .", "label": "USED-FOR", "metadata": [8, 10, 0, 2]} | |
{"text": "Information extraction techniques automatically create structured databases from << unstructured data sources >> , such as the [[ Web ]] or newswire documents .", "label": "HYPONYM-OF", "metadata": [15, 15, 8, 10]} | |
{"text": "Information extraction techniques automatically create structured databases from unstructured data sources , such as the [[ Web ]] or << newswire documents >> .", "label": "CONJUNCTION", "metadata": [15, 15, 17, 18]} | |
{"text": "Information extraction techniques automatically create structured databases from << unstructured data sources >> , such as the Web or [[ newswire documents ]] .", "label": "HYPONYM-OF", "metadata": [17, 18, 8, 10]} | |
{"text": "Despite the successes of these << systems >> , [[ accuracy ]] will always be imperfect .", "label": "EVALUATE-FOR", "metadata": [7, 7, 5, 5]} | |
{"text": "The << information extraction system >> we evaluate is based on a [[ linear-chain conditional random field -LRB- CRF -RRB- ]] , a probabilistic model which has performed well on information extraction tasks because of its ability to capture arbitrary , overlapping features of the input in a Markov model .", "label": "USED-FOR", "metadata": [10, 16, 1, 3]} | |
{"text": "The information extraction system we evaluate is based on a [[ linear-chain conditional random field -LRB- CRF -RRB- ]] , a << probabilistic model >> which has performed well on information extraction tasks because of its ability to capture arbitrary , overlapping features of the input in a Markov model .", "label": "HYPONYM-OF", "metadata": [10, 16, 19, 20]} | |
{"text": "The information extraction system we evaluate is based on a linear-chain conditional random field -LRB- CRF -RRB- , a [[ probabilistic model ]] which has performed well on << information extraction tasks >> because of its ability to capture arbitrary , overlapping features of the input in a Markov model .", "label": "USED-FOR", "metadata": [19, 20, 26, 28]} | |
{"text": "The information extraction system we evaluate is based on a linear-chain conditional random field -LRB- CRF -RRB- , a [[ probabilistic model ]] which has performed well on information extraction tasks because of its ability to capture << arbitrary , overlapping features >> of the input in a Markov model .", "label": "USED-FOR", "metadata": [19, 20, 35, 38]} | |
{"text": "The information extraction system we evaluate is based on a linear-chain conditional random field -LRB- CRF -RRB- , a probabilistic model which has performed well on information extraction tasks because of its ability to capture [[ arbitrary , overlapping features ]] of the << input >> in a Markov model .", "label": "FEATURE-OF", "metadata": [35, 38, 41, 41]} | |
{"text": "The information extraction system we evaluate is based on a linear-chain conditional random field -LRB- CRF -RRB- , a probabilistic model which has performed well on information extraction tasks because of its ability to capture [[ arbitrary , overlapping features ]] of the input in a << Markov model >> .", "label": "PART-OF", "metadata": [35, 38, 44, 45]} | |
{"text": "We implement several techniques to estimate the confidence of both [[ extracted fields ]] and entire << multi-field records >> , obtaining an average precision of 98 % for retrieving correct fields and 87 % for multi-field records .", "label": "CONJUNCTION", "metadata": [10, 11, 14, 15]} | |
{"text": "We implement several << techniques >> to estimate the confidence of both extracted fields and entire multi-field records , obtaining an [[ average precision ]] of 98 % for retrieving correct fields and 87 % for multi-field records .", "label": "EVALUATE-FOR", "metadata": [19, 20, 3, 3]} | |
{"text": "In this paper , we use the [[ information redundancy in multilingual input ]] to correct errors in << machine translation >> and thus improve the quality of multilingual summaries .", "label": "USED-FOR", "metadata": [7, 11, 16, 17]} | |
{"text": "In this paper , we use the [[ information redundancy in multilingual input ]] to correct errors in machine translation and thus improve the quality of << multilingual summaries >> .", "label": "USED-FOR", "metadata": [7, 11, 24, 25]} | |
{"text": "We demonstrate how errors in the << machine translations >> of the input [[ Arabic documents ]] can be corrected by identifying and generating from such redundancy , focusing on noun phrases .", "label": "USED-FOR", "metadata": [11, 12, 6, 7]} | |
{"text": "In this paper , we propose a new [[ approach ]] to generate << oriented object proposals -LRB- OOPs -RRB- >> to reduce the detection error caused by various orientations of the object .", "label": "USED-FOR", "metadata": [8, 8, 11, 16]} | |
{"text": "In this paper , we propose a new approach to generate << oriented object proposals -LRB- OOPs -RRB- >> to reduce the [[ detection error ]] caused by various orientations of the object .", "label": "EVALUATE-FOR", "metadata": [20, 21, 11, 16]} | |
{"text": "To this end , we propose to efficiently locate << object regions >> according to [[ pixelwise object probability ]] , rather than measuring the objectness from a set of sampled windows .", "label": "USED-FOR", "metadata": [13, 15, 9, 10]} | |
{"text": "To this end , we propose to efficiently locate object regions according to [[ pixelwise object probability ]] , rather than measuring the << objectness >> from a set of sampled windows .", "label": "COMPARE", "metadata": [13, 15, 21, 21]} | |
{"text": "We formulate the << proposal generation problem >> as a [[ generative proba-bilistic model ]] such that object proposals of different shapes -LRB- i.e. , sizes and orientations -RRB- can be produced by locating the local maximum likelihoods .", "label": "USED-FOR", "metadata": [8, 10, 3, 5]} | |
{"text": "We formulate the proposal generation problem as a generative proba-bilistic model such that << object proposals >> of different [[ shapes ]] -LRB- i.e. , sizes and orientations -RRB- can be produced by locating the local maximum likelihoods .", "label": "FEATURE-OF", "metadata": [17, 17, 13, 14]} | |
{"text": "We formulate the proposal generation problem as a generative proba-bilistic model such that object proposals of different << shapes >> -LRB- i.e. , [[ sizes ]] and orientations -RRB- can be produced by locating the local maximum likelihoods .", "label": "HYPONYM-OF", "metadata": [21, 21, 17, 17]} | |
{"text": "We formulate the proposal generation problem as a generative proba-bilistic model such that object proposals of different shapes -LRB- i.e. , [[ sizes ]] and << orientations >> -RRB- can be produced by locating the local maximum likelihoods .", "label": "CONJUNCTION", "metadata": [21, 21, 23, 23]} | |
{"text": "We formulate the proposal generation problem as a generative proba-bilistic model such that object proposals of different << shapes >> -LRB- i.e. , sizes and [[ orientations ]] -RRB- can be produced by locating the local maximum likelihoods .", "label": "HYPONYM-OF", "metadata": [23, 23, 17, 17]} | |
{"text": "We formulate the proposal generation problem as a generative proba-bilistic model such that << object proposals >> of different shapes -LRB- i.e. , sizes and orientations -RRB- can be produced by locating the [[ local maximum likelihoods ]] .", "label": "USED-FOR", "metadata": [31, 33, 13, 14]} | |
{"text": "First , it helps the [[ object detector ]] handle objects of different << orientations >> .", "label": "USED-FOR", "metadata": [5, 6, 11, 11]} | |
{"text": "Third , [[ it ]] avoids massive window sampling , and thereby reducing the << number of proposals >> while maintaining a high recall .", "label": "USED-FOR", "metadata": [2, 2, 12, 14]} | |
{"text": "Third , << it >> avoids massive window sampling , and thereby reducing the number of proposals while maintaining a high [[ recall ]] .", "label": "EVALUATE-FOR", "metadata": [19, 19, 2, 2]} | |
{"text": "Experiments on the [[ PASCAL VOC 2007 dataset ]] show that the proposed << OOP >> outperforms the state-of-the-art fast methods .", "label": "EVALUATE-FOR", "metadata": [3, 6, 11, 11]} | |
{"text": "Experiments on the PASCAL VOC 2007 dataset show that the proposed [[ OOP ]] outperforms the << state-of-the-art fast methods >> .", "label": "COMPARE", "metadata": [11, 11, 14, 16]} | |
{"text": "Further experiments show that the [[ rotation invariant property ]] helps a << class-specific object detector >> achieve better performance than the state-of-the-art proposal generation methods in either object rotation scenarios or general scenarios .", "label": "USED-FOR", "metadata": [5, 7, 10, 12]} | |
{"text": "Further experiments show that the rotation invariant property helps a [[ class-specific object detector ]] achieve better performance than the state-of-the-art << proposal generation methods >> in either object rotation scenarios or general scenarios .", "label": "COMPARE", "metadata": [10, 12, 19, 21]} | |
{"text": "Further experiments show that the rotation invariant property helps a << class-specific object detector >> achieve better performance than the state-of-the-art proposal generation methods in either [[ object rotation scenarios ]] or general scenarios .", "label": "EVALUATE-FOR", "metadata": [24, 26, 10, 12]} | |
{"text": "Further experiments show that the rotation invariant property helps a class-specific object detector achieve better performance than the state-of-the-art << proposal generation methods >> in either [[ object rotation scenarios ]] or general scenarios .", "label": "EVALUATE-FOR", "metadata": [24, 26, 19, 21]} | |
{"text": "Further experiments show that the rotation invariant property helps a class-specific object detector achieve better performance than the state-of-the-art proposal generation methods in either [[ object rotation scenarios ]] or << general scenarios >> .", "label": "CONJUNCTION", "metadata": [24, 26, 28, 29]} | |
{"text": "Further experiments show that the rotation invariant property helps a << class-specific object detector >> achieve better performance than the state-of-the-art proposal generation methods in either object rotation scenarios or [[ general scenarios ]] .", "label": "EVALUATE-FOR", "metadata": [28, 29, 10, 12]} | |
{"text": "Further experiments show that the rotation invariant property helps a class-specific object detector achieve better performance than the state-of-the-art << proposal generation methods >> in either object rotation scenarios or [[ general scenarios ]] .", "label": "EVALUATE-FOR", "metadata": [28, 29, 19, 21]} | |
{"text": "This paper describes three relatively [[ domain-independent capabilities ]] recently added to the << Paramax spoken language understanding system >> : non-monotonic reasoning , implicit reference resolution , and database query paraphrase .", "label": "PART-OF", "metadata": [5, 6, 11, 15]} | |
{"text": "This paper describes three relatively << domain-independent capabilities >> recently added to the Paramax spoken language understanding system : [[ non-monotonic reasoning ]] , implicit reference resolution , and database query paraphrase .", "label": "HYPONYM-OF", "metadata": [17, 18, 5, 6]} | |
{"text": "This paper describes three relatively << domain-independent capabilities >> recently added to the Paramax spoken language understanding system : non-monotonic reasoning , [[ implicit reference resolution ]] , and database query paraphrase .", "label": "HYPONYM-OF", "metadata": [20, 22, 5, 6]} | |
{"text": "This paper describes three relatively << domain-independent capabilities >> recently added to the Paramax spoken language understanding system : non-monotonic reasoning , implicit reference resolution , and [[ database query paraphrase ]] .", "label": "HYPONYM-OF", "metadata": [25, 27, 5, 6]} | |
{"text": "Finally , we briefly describe an experiment which we have done in extending the << n-best speech/language integration architecture >> to improving [[ OCR accuracy ]] .", "label": "EVALUATE-FOR", "metadata": [20, 21, 14, 17]} | |
{"text": "We investigate the problem of fine-grained sketch-based image retrieval -LRB- SBIR -RRB- , where [[ free-hand human sketches ]] are used as queries to perform << instance-level retrieval of images >> .", "label": "USED-FOR", "metadata": [14, 16, 23, 26]} | |
{"text": "This is an extremely challenging task because -LRB- i -RRB- visual comparisons not only need to be fine-grained but also executed cross-domain , -LRB- ii -RRB- free-hand -LRB- finger -RRB- sketches are highly abstract , making fine-grained matching harder , and most importantly -LRB- iii -RRB- [[ annotated cross-domain sketch-photo datasets ]] required for training are scarce , challenging many state-of-the-art << machine learning techniques >> .", "label": "USED-FOR", "metadata": [46, 49, 59, 61]} | |
{"text": "We then develop a [[ deep triplet-ranking model ]] for << instance-level SBIR >> with a novel data augmentation and staged pre-training strategy to alleviate the issue of insufficient fine-grained training data .", "label": "USED-FOR", "metadata": [4, 6, 8, 9]} | |
{"text": "We then develop a [[ deep triplet-ranking model ]] for instance-level SBIR with a novel data augmentation and staged pre-training strategy to alleviate the issue of << insufficient fine-grained training data >> .", "label": "USED-FOR", "metadata": [4, 6, 24, 27]} | |
{"text": "We then develop a << deep triplet-ranking model >> for instance-level SBIR with a novel [[ data augmentation ]] and staged pre-training strategy to alleviate the issue of insufficient fine-grained training data .", "label": "USED-FOR", "metadata": [13, 14, 4, 6]} | |
{"text": "We then develop a deep triplet-ranking model for instance-level SBIR with a novel [[ data augmentation ]] and << staged pre-training strategy >> to alleviate the issue of insufficient fine-grained training data .", "label": "CONJUNCTION", "metadata": [13, 14, 16, 18]} | |
{"text": "We then develop a << deep triplet-ranking model >> for instance-level SBIR with a novel data augmentation and [[ staged pre-training strategy ]] to alleviate the issue of insufficient fine-grained training data .", "label": "USED-FOR", "metadata": [16, 18, 4, 6]} | |
{"text": "Extensive experiments are carried out to contribute a variety of insights into the challenges of [[ data sufficiency ]] and << over-fitting avoidance >> when training deep networks for fine-grained cross-domain ranking tasks .", "label": "CONJUNCTION", "metadata": [15, 16, 18, 19]} | |
{"text": "Extensive experiments are carried out to contribute a variety of insights into the challenges of data sufficiency and over-fitting avoidance when training [[ deep networks ]] for << fine-grained cross-domain ranking tasks >> .", "label": "USED-FOR", "metadata": [22, 23, 25, 28]} | |
{"text": "In this paper we target at generating << generic action proposals >> in [[ unconstrained videos ]] .", "label": "USED-FOR", "metadata": [11, 12, 7, 9]} | |
{"text": "Each action proposal corresponds to a << temporal series of spatial bounding boxes >> , i.e. , a [[ spatio-temporal video tube ]] , which has a good potential to locate one human action .", "label": "HYPONYM-OF", "metadata": [16, 18, 6, 11]} | |
{"text": "Each action proposal corresponds to a temporal series of spatial bounding boxes , i.e. , a [[ spatio-temporal video tube ]] , which has a good potential to locate one << human action >> .", "label": "USED-FOR", "metadata": [16, 18, 28, 29]} | |
{"text": "Assuming each action is performed by a human with meaningful motion , both [[ appearance and motion cues ]] are utilized to measure the << ac-tionness >> of the video tubes .", "label": "USED-FOR", "metadata": [13, 16, 22, 22]} | |
{"text": "Assuming each action is performed by a human with meaningful motion , both appearance and motion cues are utilized to measure the [[ ac-tionness ]] of the << video tubes >> .", "label": "EVALUATE-FOR", "metadata": [22, 22, 25, 26]} | |
{"text": "After picking those spatiotem-poral paths of high actionness scores , our << action proposal generation >> is formulated as a [[ maximum set coverage problem ]] , where greedy search is performed to select a set of action proposals that can maximize the overall actionness score .", "label": "USED-FOR", "metadata": [18, 21, 11, 13]} | |
{"text": "After picking those spatiotem-poral paths of high actionness scores , our action proposal generation is formulated as a maximum set coverage problem , where [[ greedy search ]] is performed to select a set of << action proposals >> that can maximize the overall actionness score .", "label": "USED-FOR", "metadata": [24, 25, 33, 34]} | |
{"text": "After picking those spatiotem-poral paths of high actionness scores , our action proposal generation is formulated as a maximum set coverage problem , where greedy search is performed to select a set of << action proposals >> that can maximize the overall [[ actionness score ]] .", "label": "EVALUATE-FOR", "metadata": [40, 41, 33, 34]} | |
{"text": "Compared with existing [[ action proposal approaches ]] , our << action proposals >> do not rely on video segmentation and can be generated in nearly real-time .", "label": "COMPARE", "metadata": [3, 5, 8, 9]} | |
{"text": "Experimental results on two challenging [[ datasets ]] , MSRII and UCF 101 , validate the superior performance of our << action proposals >> as well as competitive results on action detection and search .", "label": "EVALUATE-FOR", "metadata": [5, 5, 18, 19]} | |
{"text": "Experimental results on two challenging << datasets >> , [[ MSRII ]] and UCF 101 , validate the superior performance of our action proposals as well as competitive results on action detection and search .", "label": "HYPONYM-OF", "metadata": [7, 7, 5, 5]} | |
{"text": "Experimental results on two challenging datasets , [[ MSRII ]] and << UCF 101 >> , validate the superior performance of our action proposals as well as competitive results on action detection and search .", "label": "CONJUNCTION", "metadata": [7, 7, 9, 10]} | |
{"text": "Experimental results on two challenging << datasets >> , MSRII and [[ UCF 101 ]] , validate the superior performance of our action proposals as well as competitive results on action detection and search .", "label": "HYPONYM-OF", "metadata": [9, 10, 5, 5]} | |
{"text": "Experimental results on two challenging datasets , MSRII and UCF 101 , validate the superior performance of our << action proposals >> as well as competitive results on [[ action detection and search ]] .", "label": "EVALUATE-FOR", "metadata": [26, 29, 18, 19]} | |
{"text": "This paper reports recent research into [[ methods ]] for << creating natural language text >> .", "label": "USED-FOR", "metadata": [6, 6, 8, 11]} | |
{"text": "<< KDS -LRB- Knowledge Delivery System -RRB- >> , which embodies this [[ paradigm ]] , has distinct parts devoted to creation of the propositional units , to organization of the text , to prevention of excess redundancy , to creation of combinations of units , to evaluation of these combinations as potential sentences , to selection of the best among competing combinations , and to creation of the final text .", "label": "PART-OF", "metadata": [10, 10, 0, 5]} | |
{"text": "The Fragment-and-Compose paradigm and the [[ computational methods ]] of << KDS >> are described .", "label": "USED-FOR", "metadata": [5, 6, 8, 8]} | |
{"text": "This paper explores the issue of using different [[ co-occurrence similarities ]] between terms for separating << query terms >> that are useful for retrieval from those that are harmful .", "label": "USED-FOR", "metadata": [8, 9, 14, 15]} | |
{"text": "This paper explores the issue of using different co-occurrence similarities between terms for separating [[ query terms ]] that are useful for << retrieval >> from those that are harmful .", "label": "USED-FOR", "metadata": [14, 15, 20, 20]} | |
{"text": "This paper explores the issue of using different co-occurrence similarities between terms for separating << query terms >> that are useful for retrieval from [[ those ]] that are harmful .", "label": "COMPARE", "metadata": [22, 22, 14, 15]} | |
{"text": "The hypothesis under examination is that [[ useful terms ]] tend to be more similar to each other than to other << query terms >> .", "label": "COMPARE", "metadata": [6, 7, 19, 20]} | |
{"text": "Preliminary experiments with << similarities >> computed using [[ first-order and second-order co-occurrence ]] seem to confirm the hypothesis .", "label": "USED-FOR", "metadata": [6, 9, 3, 3]} | |
{"text": "We propose a new [[ phrase-based translation model ]] and << decoding algorithm >> that enables us to evaluate and compare several , previously proposed phrase-based translation models .", "label": "CONJUNCTION", "metadata": [4, 6, 8, 9]} | |
{"text": "Within our framework , we carry out a large number of experiments to understand better and explain why [[ phrase-based models ]] outperform << word-based models >> .", "label": "COMPARE", "metadata": [18, 19, 21, 22]} | |
{"text": "Our empirical results , which hold for all examined language pairs , suggest that the highest levels of performance can be obtained through relatively simple << means >> : [[ heuristic learning of phrase translations ]] from word-based alignments and lexical weighting of phrase translations .", "label": "HYPONYM-OF", "metadata": [27, 31, 25, 25]} | |
{"text": "Our empirical results , which hold for all examined language pairs , suggest that the highest levels of performance can be obtained through relatively simple means : << heuristic learning of phrase translations >> from [[ word-based alignments ]] and lexical weighting of phrase translations .", "label": "USED-FOR", "metadata": [33, 34, 27, 31]} | |
{"text": "Our empirical results , which hold for all examined language pairs , suggest that the highest levels of performance can be obtained through relatively simple << means >> : heuristic learning of phrase translations from word-based alignments and [[ lexical weighting of phrase translations ]] .", "label": "HYPONYM-OF", "metadata": [36, 40, 25, 25]} | |
{"text": "Traditional [[ methods ]] for << color constancy >> can improve surface re-flectance estimates from such uncalibrated images , but their output depends significantly on the background scene .", "label": "USED-FOR", "metadata": [1, 1, 3, 4]} | |
{"text": "Traditional [[ methods ]] for color constancy can improve << surface re-flectance estimates >> from such uncalibrated images , but their output depends significantly on the background scene .", "label": "USED-FOR", "metadata": [1, 1, 7, 9]} | |
{"text": "Traditional methods for color constancy can improve << surface re-flectance estimates >> from such [[ uncalibrated images ]] , but their output depends significantly on the background scene .", "label": "USED-FOR", "metadata": [12, 13, 7, 9]} | |
{"text": "We introduce the multi-view color constancy problem , and present a [[ method ]] to recover << estimates of underlying surface re-flectance >> based on joint estimation of these surface properties and the illuminants present in multiple images .", "label": "USED-FOR", "metadata": [11, 11, 14, 18]} | |
{"text": "The [[ method ]] can exploit << image correspondences >> obtained by various alignment techniques , and we show examples based on matching local region features .", "label": "USED-FOR", "metadata": [1, 1, 4, 5]} | |
{"text": "The method can exploit << image correspondences >> obtained by various [[ alignment techniques ]] , and we show examples based on matching local region features .", "label": "USED-FOR", "metadata": [9, 10, 4, 5]} | |
{"text": "Our results show that [[ multi-view constraints ]] can significantly improve << estimates of both scene illuminants and object color -LRB- surface reflectance -RRB- >> when compared to a baseline single-view method .", "label": "USED-FOR", "metadata": [4, 5, 9, 20]} | |
{"text": "Our results show that << multi-view constraints >> can significantly improve estimates of both scene illuminants and object color -LRB- surface reflectance -RRB- when compared to a [[ baseline single-view method ]] .", "label": "COMPARE", "metadata": [25, 27, 4, 5]} | |
{"text": "Our contributions include a [[ concise , modular architecture ]] with reversible processes of << understanding >> and generation , an information-state model of reference , and flexible links between semantics and collaborative problem solving .", "label": "USED-FOR", "metadata": [4, 7, 12, 12]} | |
{"text": "Our contributions include a [[ concise , modular architecture ]] with reversible processes of understanding and << generation >> , an information-state model of reference , and flexible links between semantics and collaborative problem solving .", "label": "USED-FOR", "metadata": [4, 7, 14, 14]} | |
{"text": "Our contributions include a concise , modular architecture with reversible processes of [[ understanding ]] and << generation >> , an information-state model of reference , and flexible links between semantics and collaborative problem solving .", "label": "CONJUNCTION", "metadata": [12, 12, 14, 14]} | |