id (string, 7–12 chars) | sentence1 (string, 6–1.27k chars) | sentence2 (string, 6–926 chars) | label (4 classes) |
---|---|---|---|
train_100 | Task evaluations (for example, (Young, 1999)) are typically done by showing human subjects different texts, and measuring differences in an outcome variable, such as success in performing a task. | despite the above work, we are not aware of any previous evaluation which has compared the effectiveness of NLG texts at meeting a communicative goal against the effectiveness of non-NLG control texts. | contrasting |
train_101 | (Soccer results are much more frequent.) | we now regret this methodological flaw and would like to clean up a little more (as done in the patches), or add back soccer results. | contrasting |
train_102 | The appositive phrases usually provide descriptions of attributes of a person. | the preprocessing component described in Section 2.1 does produce errors in appositive extraction, which are filtered out by syntactic and semantic tests. | contrasting |
train_103 | Given a unique tree structure for a sentence, governor markup may be read off the tree. | in view of the fact that robust broad coverage parsers frequently deliver thousands, millions, or thousands of millions of analyses for sentences of free text, basing annotation on a unique tree (such as the most probable tree analysis generated by a probabilistic grammar) appears arbitrary. | contrasting |
train_104 | All systems supported travel planning and utilized some form of mixedinitiative interaction. | the systems varied in several critical dimensions: (1) They targeted different back-end databases for travel information; (2) System modules such as ASR, NLU, TTS and dialogue management were typically different across systems. | contrasting |
train_105 | We removed system turn duration from the model, to determine whether FlightInfoC would become a significant predictor. | the model fit decreased and FlightInfoC was not a significant predictor. | contrasting |
train_106 | In word N-grams, accurate word prediction can be expected, since a word dependent, unique connectivity from word to word can be represented. | the number of estimated parameters, i.e., the number of combinations of word transitions, is V^N in vocabulary V. | contrasting |
train_107 | In class N-grams with C classes, the number of estimated parameters is decreased from V^N to C^N. | accuracy of the word prediction capability will be lower than that of word N-grams with a sufficient size of training data, since the representation capability of the word dependent, unique connectivity attribute will be lost for the approximation base word class. | contrasting |
train_108 | One may partially remedy this problem by decreasing the learning rate parameter Y during the updates. | this is rather ad hoc since it is unclear what is the best way to do so. | contrasting |
train_109 | The resulting formulation is closely related to (2). | instead of looking at one example at a time as in an online formulation, we incorporate all examples at the same time. | contrasting |
train_110 | Some other aspects of the system design (such as dynamic programming, features used, etc) have more impact on the performance. | due to the limitation of space, we will not discuss their impact in detail. | contrasting |
train_111 | This additional level of detail depends on the request itself and on the available information. | here we consider the former factor, viewing it as an initial filter that will guide the content planning process of an enhanced QA system. | contrasting |
train_112 | e.g., "name", "type of", "called" Comparison clues e.g., "similar", "differ", "relate" Intersection clues superlative ADJ, ordinal ADJ, relative clause Topic Itself clues e.g., "show", "picture", "map" answer (which is similar to our Focus). | our Focus decision tree includes additional attributes in its second split (these attributes are added by dprog because they improve predictive performance on the training data). | contrasting |
train_113 | Further, the predictive performance obtained for the set Good4617 is only slightly better than that obtained for the set All5145. | since the set of good queries is 10% smaller, it is considered a better option. | contrasting |
train_114 | In some applications, such as language modeling, this is not the case; instead, the f i are computed on the fly. | with a bit of thought, those data structures also can be rearranged. | contrasting |
train_115 | In particular, let f # (x, y) = i f i (x, y) Then, solve numerically for δ i in the equation Notice that in the special case where f # (x, y) is a constant f # , Equation 4 reduces to Equation 2. | for training instances where f # (x j , y) < f # , the IIS update can take a proportionately larger step. | contrasting |
train_116 | So far, we have only considered TAG grammars in which each elementary tree is assigned a semantics that contains precisely one atom. | there are cases where an elementary tree either has an empty semantics, or a semantics that contains multiple atoms. | contrasting |
train_117 | Gardent and Thater (2001) also propose a con-straint based approach to generation working with a variant of TAG. | the performance of their system decreases rapidly as the input gets larger even when when working with a toy grammar. | contrasting |
train_118 | Possible linearizations of the XTAG representation are generated and then evaluated by a language model to pick the best possible linearization, as in Nitrogen. | the sentence realization system code-named Amalgam (A Machine Learned Generation Module) Gamon et al., 2002b) employs a series of linguistic operations which map a semantic representation to a surface syntactic tree via intermediate syntactic representations. | contrasting |
train_119 | These may include named entity taggers, WordNet, parsers, hand-tagged corpora, and ontology lists (Srihari and Li,00;Harabagiu et al.,01;Hovy et al.,01;Prager et al.,01). | at the recent TREC-10 QA evaluation (Voorhees,01), the winning system used just one resource: a fairly extensive list of surface patterns (Soubbotin and Soubbotin,01). | contrasting |
train_120 | We have to show that all other candidates would also be excluded (for lack of the relevant rules in the non-recursive part). | if it were the recursion in by construction step (20c,d) (only violation-free recursion is possible). | contrasting |
train_121 | That much was previously shown by Markus Hiller and Paul Smolensky (Frank and Satta, 1998), using similar examples. | at least some OT grammars ought to describe regular relations. | contrasting |
train_122 | For task 2 we adopt the substitutional approach advocated by Crouch (1995): The semantic representation of the target § is a copy of the source § where target parallel elements have been substituted for source parallel elements ( ) . | to Higher-Order Unification (HOU) (Dalrymple et al., 1991) substitution is deterministic: Ambiguities somehow cropping up in the interpretation process (i.e. | contrasting |
train_123 | Finally, Hugh ½¿ is repeated in the same sentence. | there are significant practical obstacles to comparing the performance of different pronominalization algorithms using corpus matching criteria instead of "quality" as evaluated by human judges. | contrasting |
train_124 | In the D&R algorithm, the problem stems from the fact that DD are generated recursively: if when generating a DD for some entity The solution adopted by (Dale and Haddock, 1991) is to stipulate that facts from the knowledge base can only be used once within a given call to the algorithm. | the solution follows, in the present algorithm (as in SPUD), from its integration with surface realisation. | contrasting |
train_125 | By the F 1 measure used in the experiments in section 4, an induced dependency PCFG scores 48.2, compared to a score of 82.1 for a supervised PCFG read from local trees of the treebank. | a supervised dependency PCFG scores only 53.5, not much better than the unsupervised version, and worse than a right-branching baseline (of 60.0). | contrasting |
train_126 | PP accuracy is lower, which is not surprising considering that PPs tend to appear rather optionally and in contexts in which other, easier categories also frequently appear. | embedded sentence recall is substantially higher, possibly because of more effective use of the top-level sentences which occur in the signature context -. | contrasting |
train_127 | It is certainly worth exploring methods which supplement or replace tagged input with lexical input. | we address here the more serious criticism: that our results stem from clues latent in the treebank tagging information which are conceptually posterior to knowledge of structure. | contrasting |
train_128 | It goes to some lengths to handle complex cases such as adjunction and where two or more empty nodes' paths cross (in these cases the pattern extracted consists of the union of the local trees that constitute the patterns for each of the empty nodes). | given the low frequency of these constructions, there is probably only one case where this extra complexity is justified: viz., the empty compound SBAR subtree shown in Figure 3. | contrasting |
train_129 | This model deals with phonetic errors significantly better than previous models since it allows a much larger context size. | this model makes residual errors, many of which have to do with word pronunciation. | contrasting |
train_130 | The bigram probabilities are estimated from the training data by maximum likelihood estimation (MLE). | the intrinsic problem of MLE is that of data sparseness: MLE leads to zero-value probabilities for unseen bigrams. | contrasting |
train_131 | We notice that LF probability of Equation 3 is very similar to that proposed by Seymore and Rosenfeld (1996), where the loss function is From Equations (2) and (3), we can see that lower LF probability is strongly correlated with lower perplexity. | we found that LF probability is suboptimal as a pruning criterion, evaluated on CER in our experiments. | contrasting |
train_132 | Classical cluster models are symmetric in that the same clusters are employed for both predicted and conditional words. | the symmetric cluster model is suboptimal in practice. | contrasting |
train_133 | If we put the word "Tuesday" into the cluster WEEKDAY, we decompose the probability When each word belongs to one class, simple math shows that this decomposition is a strict equality. | when smoothing is taken into consideration, using the clustered probability will be more accurate than using the non-clustered probability. | contrasting |
train_134 | It is an interesting question whether our techniques for hard clustering can be extended to soft clustering. | soft clustering models tend to be larger than hard clustering models because a given word can belong to multiple clusters, and thus a training instance P(w_i | w_{i-2} w_{i-1}) can lead to multiple counts instead of just 1. | contrasting |
train_135 | When the SLM is interpolated with the 3-gram, the h-2+opposite+parent scheme performed the best, reducing the PPL of the baseline SLM by 3.3%. | the parent and opposite+parent schemes are both worse than the baseline, especially before the EM training and with ¢ =0.0. | contrasting |
train_136 | The Jaccard measure is the cardinality ratio of the intersection and union of attribute sets (atts(w n ) is the attribute set for w n ): The generalised Jaccard measure allows each relation to have a significance weight (based on word, attribute and relation frequencies) associated with it: Grefenstette originally used the weighting function: where f (w i , a j ) is the frequency of the relation and n(a j ) is the number of different words a j appears in relations with. | context Description W(L 1 R 1 ) one word to left or right W(L 1 ) one word to the left W(L 1,2 ) one or two words to the left W(L 1−3 ) one to three words to the left Table 1: Window extractors we have found that using the t-test between the joint and independent distributions of a word and its attribute: gives superior performance (curran and Moens, 2002) and is therefore used for our experiments. | contrasting |
train_137 | the point where MINIPAR can process 75M words, we get a best direct match score of 23.5. | we can get the same resultant accuracy by using SEXTANT on a corpus of 116M words or W(L 1 R 1 ) on a corpus of 240M words. | contrasting |
train_138 | In the future, when we see an NC mapped to this rule, we will assign this semantic relationship to it. | the following NCs, having the CP A01 (Body Regions) and M01 (Persons), do not have the same relationship between the component words: abdomen patients, arm amputees, chest physicians, eye patients, skin donor. | contrasting |
train_139 | (2001) mention that dynamic programming can be used to compute the statistics required for conditional estimation of log-linear models based on context-free grammars where the properties can include arbitrary functions of the input string. | miyao and Tsujii (2002) (which 1 because we use conditional estimation, also known as discriminative training, we require at least some discriminating information about the correct parse of a string in order to estimate a stochastic unification grammar. | contrasting |
train_140 | The algorithms described here are exact, but because we are working with unification grammars and apparently arbitrary graphical models we cannot polynomially bound their computational complexity. | it seems reasonable to expect that if the linguistic dependencies in a sentence typically factorize into largely non-interacting cliques then the dynamic programming methods may offer dramatic computational savings compared to current methods that enumerate all possible parses. | contrasting |
train_141 | We use the symbol P r(•) to denote general probability distributions with (nearly) no specific assumptions. | for model-based probability distributions, we use the generic symbol p(•). | contrasting |
train_142 | It even detected frequent sentenceto-sentence translations, since we only imposed a relative length limit for phrasal translations (Section 3). | some of them, such as the one with (in cantonese), are wrong. | contrasting |
train_143 | Using only trigrams obtains the best result for the BLEU score. | the BLEU metric may not be affected by the syntactic aspect of translation quality, and as we saw in Figure 4, we can improve the syntactic quality by introducing the PCFG using some corpus selection techniques. | contrasting |
train_144 | But first observe that Candidate 1 shares "It is a guide to action" with Reference 1, "which" with Reference 2, "ensures that the military" with Reference 1, "always" with References 2 and 3, "commands" with Reference 1, and finally "of the party" with Reference 2 (all ignoring capitalization). | candidate 2 exhibits far fewer matches, and their extent is less. | contrasting |
train_145 | Collins (1999), Charniak (2000)). | the dependencies are typically derived from a context-free phrase structure tree using simple head percolation heuristics. | contrasting |
train_146 | According to an evaluation of unlabeled word-word dependencies, our best model achieves a performance of 89.9%, comparable to the figures given by Collins (1999) for a linguistically less expressive grammar. | to Gildea (2001), we find a significant improvement from modeling wordword dependencies. | contrasting |
train_147 | However, slightly worse performance is obtained for LexCatDep, a model which is identical to the original LexCat model, except that c S is also conditioned on c H , the lexical category of the head node, which introduces a dependency between the lexical categories. | since there is so much information in the lexical categories, one might expect that this would reduce the effect of conditioning the expansion of a constituent on its head word w. we did find a substantial effect. | contrasting |
train_148 | If applied to a text with n noun phrases, this algorithm produces a total of noun phrase pairs. | a number of filters can reasonably be applied at this point. | contrasting |
train_149 | For our experiments we implemented the standard Co-Training algorithm (as described in Section 3) in Java using the Weka machine learning library 3 . | to other Co-Training approaches, we did not use Naive Bayes as base classifiers, but J48 decision trees, which are a Weka re-implementation of C4.5. | contrasting |
train_150 | The resulting Co-Training curve degrades substantially. | with a training size of 1000 and 2000 the Co-Training curves are above their baselines. | contrasting |
train_151 | With L=500 the Co-Training curve is way below the baseline. | with L=1000 and L=2000 Co-Training does show some improvement. | contrasting |
train_152 | Supervised learning of reference resolution classifiers is expensive since it needs unknown amounts of annotated data for training. | reference resolution algorithms based on these classifiers achieve reasonable performance of about 60 to 63% F-measure (Soon et al., 2001). | contrasting |
train_153 | In this case, therefore, Co-Training seems to be able to save manual annotation work. | the definition of the feature views is non-trivial for the task of training a reference resolution classifier, where no obvious or natural feature split suggests itself. | contrasting |
train_154 | Even when a system is deployed, it often keeps evolving, either because customers want to do different things with it, or because new tasks arise out of developments in the underlying application. | real customers of a deployed system may not be willing to give detailed feedback. | contrasting |
train_155 | This is reflected in the User Satisfaction scores in the tree. | dialogues which are long (TurnsOnTask 79 ) can be satisfactory (User Satisfaction = 15.2) as long as the task that is completed is long, i.e., if ground arrangements are made in that dialogue (Task Completion=2). | contrasting |
train_156 | Transliteration between languages that use similar alphabets and sound systems is very simple. | transliterating names from Arabic into English is a non-trivial task, mainly due to the differences in their sound and writing systems. | contrasting |
train_157 | According to this model, the transliteration probability is given by the following equation: The transliterations proposed by this model are generally accurate. | one serious limitation of this method is that only English words with known pronunciations can be produced. | contrasting |
train_158 | To use these normalized counts to score and rank the first name/last name combinations in a way similar to a unigram language model, we would get the following name/score pairs: (John Keele, 0.003), (John Kyl, 0.001), (Jon Keele, 0.0002), and ). | the normalized phrase counts for the possible full names are: (Jon Kyl, 0.8976), (John Kyl, 0.0936), (John Keele, 0.0087), and (Jon Keele, 0.0001), which is more desirable as Jon Kyl is an often-mentioned US Senator. | contrasting |
train_159 | In some cases, this criterion is too rigid, as it will consider perfectly acceptable translations as incorrect. | since we use it mainly to compare our results with those obtained from the human translations and the commercial system, this criterion is sufficient. | contrasting |
train_160 | Although they do not consider the task of classifying reviews, it seems their algorithm could be plugged into the classification algorithm presented in Section 2, where it would replace PMI-IR and equation 3in the second step. | pMI-IR is conceptually simpler, easier to implement, and it can handle phrases and adverbs, in addition to isolated adjectives. | contrasting |
train_161 | This suggests the hypothesis that a good movie will often contain unpleasant scenes (e.g., violence, death, mayhem), and a recommended movie re-view may thus have its average semantic orientation reduced if it contains descriptions of these unpleasant scenes. | if we add a constant value to the average SO of the movie reviews, to compensate for this bias, the accuracy does not improve. | contrasting |
train_162 | This is likely also the explanation for the lower accuracy of the Cancun reviews: good beaches do not necessarily add up to a good vacation. | good automotive parts usually do add up to a good automobile and good banking services add up to a good bank. | contrasting |
train_163 | Inspection of Equation (3) shows that it takes four queries to calculate the semantic orientation of a phrase. | i cached all query results, and since there is no need to recalculate hits("poor") and hits("excellent") for every phrase, each phrase requires an average of slightly less than two queries. | contrasting |
train_164 | A useful feature of such patterns is that when we search for them on the Web they usually produce many hits, thus making statistical approaches applicable. | searching for strict validation statements generally results in a small number of documents (if any) and makes statistical methods irrelevant. | contrasting |
train_165 | We use a Web-mining algorithm that considers the number of pages retrieved by the search engine. | qualitative approaches to Web mining (e.g. | contrasting |
train_166 | In a second type of experiment, for every question and its corresponding answers the program chooses the answer with the highest validity score and calculates a relative threshold on that basis ). | the relative threshold should be larger than a certain minimum value. | contrasting |
train_167 | Another conclusion is that the relative threshold demonstrates superiority over the absolute threshold in both test sets (average 2.3%). | if the percent of the right answers in the answer set is lower, then the efficiency of this approach may decrease. | contrasting |
train_168 | This brief overview shows that the main reason for the use of POS tags in parsing is that they provide useful generalizations and (thereby) counteract the sparse data problem. | there are two objections to this reasoning. | contrasting |
train_169 | From the sequence of predicted function-chunk codes, the complete chunking and function assignment can be reconstructed. | predictions can be inconsistent, blocking a straightforward reconstruction of the complete shallow parse. | contrasting |
train_170 | (1999) achieve 71.2 F-score for grammatical relation assignment on automatically tagged and chunked text after training on about 40,000 Wall Street Journal sentences. | to these studies, we do not chunk before finding grammatical relations; rather, chunking is performed simultaneously with headword function tagging. | contrasting |
train_171 | Owing to the fact that deep and shallow technologies are complementary in nature, integration is a nontrivial task: while SNLP shows its strength in the areas of efficiency and robustness, these aspects are problematic for DNLP systems. | dNLP can deliver highly precise and fine-grained linguistic analyses. | contrasting |
train_172 | Two workshops on Automatic Summarization were held in 2000 and 2001. | the area is still being fleshed out: most past efforts have focused only on single-document summarization (Mani 2000), and no standard test sets and large scale evaluations have been reported or made available to the English-speaking research community except the TIPSTER SUMMAC Text Summarization evaluation (Mani et al. | contrasting |
train_173 | Most of the techniques adopted by NeATS are not new. | applying them in the proper places to summarize multiple documents and evaluating the results on large scale common tasks are new. | contrasting |
train_174 | At K ≥ 1, out of 1424 instances, i.e., sentences, 707 of them are marked positive and 717 are marked negative, so positive and negative instances are evenly spread across the data. | at K ≥ 5, there are only 72 positive instances. | contrasting |
train_175 | These systems are mainly rule-based. | rule-based approaches lack the ability of coping with the problems of robustness and portability. | contrasting |
train_176 | Collins and Singer (1999) investigated named entity classification using Adaboost, CoBoost, and the EM algorithm. | features were extracted using a parser, and performance was evaluated differently (the classes were person, organization, location, and noise). | contrasting |
train_177 | Due to the above reason, it is unreasonable to expect the performance of the upper case NER to match that of the mixed case NER. | we still manage to achieve a considerable reduction of errors between the two NERs when they are tested on the official MUC-6 and MUC-7 test data. | contrasting |
train_178 | The algorithm is shown in Figure 2 in a pseudo-code. | this method has the problem of being computationally costly in training, because the negative examples are created for all the classes other than the true class, and the total number of the training examples becomes large (which is equal to the number of original training examples multiplied by the number of classes). | contrasting |
train_179 | Brill studied transformation-based error-driven learning (TBL) (Brill, 1995), which conducts POS tagging by applying the transformation rules to the POS tags of a given sentence, and has a resemblance to revision learning in that the second model revises the output of the first model. | our method differs from TBL in two ways. | contrasting |
train_180 | On the other hand, the computational cost becomes large because each learner is trained using every training data and answers for every test data. | multi-stage methods can decrease the computational cost, and seem to be effective when a large amount of data is used or when a learner with high computational cost such as SVMs is used. | contrasting |
train_181 | A particular strength is their ability to model both general rules and specific exceptions in a single framework (van den Bosch and Daelemans, 1999). | they have generally only been used in supervised learning techniques where a class label or tag has been associated to each feature vector. | contrasting |
train_182 | When it is completely predictable, as in SLOVENE the performance approaches 100%; similarly a large majority of the less frequent words in English are completely regular, and accordingly the performance on EPT is very good. | in other cases, where the morphology is very irregular the performance will be poor. | contrasting |
train_183 | This approach makes the task comparable to classic word sense disambiguation (WSD), which is also concerned with distinguishing between possible word senses/interpretations. | whereas a classic (supervised) WSD algorithm is trained on a set of labelled instances of one particular word and assigns word senses to new test instances of the same word, (supervised) metonymy recognition can be trained on a set of labelled instances of different words of one semantic class and assign literal readings and metonymic patterns to new test instances of possibly different words of the same semantic class. | contrasting |
train_184 | Inferring from the metonymy in (4) that "Germany" in "Germany lost a fifth of its territory" is also metonymic, e.g., is wrong and lowers precision. | wrong assignments (based on headmodifier relations) do not constitute a major problem as accuracy is very high (90.2%). | contrasting |
train_185 | We showed that syntactic head-modifier relations are a high precision feature for metonymy recognition. | basing inferences only on the lexical heads seen in the training data leads to data sparseness due to the large number of different lexical heads encountered in natural language texts. | contrasting |
train_186 | We define , in article alignment J and E. When we compare the validity of two sentence alignments in the same article alignment, the rank order of sentence alignments obtained by applying SntScore is the same as that of SIM because they share a common AVSIM. | when we compare the validity of two sentence alignments in different article alignments, SntScore prefers the sentence alignment with the more similar (high AVSIM) article alignment even if their SIM has the same value, while SIM cannot discriminate between the validity of two sentence alignments if their SIM has the same value. | contrasting |
train_187 | For example, with our simple tree A B X Y Z if nodes A and B are considered as one elementary tree, with probability P elem (t a |A ⇒ BZ), their collective children will be reordered with probability giving the desired word ordering XZY. | computational complexity as well as data sparsity prevent us from considering arbitrarily large elementary trees, and the number of nodes considered at once still limits the possible alignments. | contrasting |
train_188 | In fact, Melamed defines a probability distribution P (links(u, v)|cooc(u, v), λ + , λ − ) which appears to make our work redundant. | this distribution refers to the probability that two word types u and v are linked links(u, v) times in the entire corpus. | contrasting |
train_189 | Treebank-based probabilistic parsing has been the subject of intensive research over the past few years, resulting in parsing models that achieve both broad coverage and high parsing accuracy (e.g., Collins 1997;Charniak 2000). | most of the existing models have been developed for English and trained on the Penn Treebank (Marcus et al., 1993), which raises the question whether these models generalize to other languages, and to annotation schemes that differ from the Penn Treebank markup. | contrasting |
train_190 | This is not surprising, as the perfect tags (but not the TnT tags) include grammatical function labels. | we also observe a dramatic reduction in coverage (to about 65%). | contrasting |
train_191 | We added grammatical functions to both the baseline model and the C&R model, as we predicted that this would allow the model to better capture the wordorder facts of German. | this prediction was not borne out: performance with grammatical functions (on TnT tags) was slightly worse than without, and coverage dropped substantially. | contrasting |
train_192 | On the other hand, a performance increase occurs if the tagger also provides grammatical function labels (simulated in the perfect tags condition). | this comes at the price of an unacceptable reduction in coverage. | contrasting |
train_193 | As a possible explanation we considered lack of training data: Negra is about half the size of the Penn Treebank. | the learning curves for the three models failed to produce any evidence that they suffer from sparse data. | contrasting |
train_194 | An obvious challenge for our approach is thus to identify suitable shallow knowledge sources that can deliver compatible constraints for HPSG parsing. | chunks delivered by state-of-the-art shallow parsers are not isomorphic to deep syntactic analyses that explicitly encode phrasal embedding structures. | contrasting |
train_195 | In addition, some studies investigate the extraction of synonymous words and/or patterns from bilingual corpora (Barzilay and Mckeown, 2001;Shimohata and Sumita, 2002). | these methods can only extract synonymous expressions which occur in the bilingual corpus. | contrasting |
train_196 | This is mainly because Method 3 can only extract synonymous collocations which occur in the bilingual corpus. | our method uses the bilingual corpus to train the translation probabilities, where the translations are not necessary to occur in the bilingual corpus. | contrasting |
train_197 | The lack of syntactic information makes the building of semantic space models relatively straightforward and language independent (all that is needed is a corpus of written or spoken text). | this entails that contextual information contributes indiscriminately to a word's meaning. | contrasting |
train_198 | The η 2 statistic revealed that only 5.2% of the variance was accounted for. | a reliable effect of distance was observed for all dependency-based models (p < .001). | contrasting |
train_199 | Taxonomy related relations (e.g., synonymy, antonymy, hyponymy) can be reliably distinguished from conceptual and phrasal association. | no reliable differences were found between closely associated relations such as antonymy and synonymy. | contrasting |