{"id":"761","title":"Towards a NMR implementation of a quantum lattice gas algorithm","abstract":"Recent theoretical results suggest that an array of quantum information processors communicating via classical channels can be used to solve fluid dynamics problems. Quantum lattice-gas algorithms (QLGA) running on such architectures have been shown to solve the diffusion equation and the nonlinear Burgers equations. In this report, we describe progress towards an ensemble nuclear magnetic resonance (NMR) implementation of a QLGA that solves the diffusion equation. The methods rely on NMR techniques to encode an initial mass density into an ensemble of two-qubit quantum information processors. Using standard pulse techniques, the mass density can then be manipulated and evolved through the steps of the algorithm. We provide the experimental results of our first attempt to realize the NMR implementation. The results qualitatively follow the ideal simulation, but the observed implementation errors highlight the need for improved control","tok_text":"toward a nmr implement of a quantum lattic ga algorithm \n recent theoret result suggest that an array of quantum inform processor commun via classic channel can be use to solv fluid dynam problem . quantum lattice-ga algorithm ( qlga ) run on such architectur have been shown to solv the diffus equat and the nonlinear burger equat . in thi report , we describ progress toward an ensembl nuclear magnet reson ( nmr ) implement of a qlga that solv the diffus equat . the method reli on nmr techniqu to encod an initi mass densiti into an ensembl of two-qubit quantum inform processor . use standard puls techniqu , the mass densiti can then manipul and evolv through the step of the algorithm . we provid the experiment result of our first attempt to realiz the nmr implement . 
the result qualit follow the ideal simul , but the observ implement error highlight the need for improv control","ordered_present_kp":[9,28,105,176,288,309,388],"keyphrases":["NMR implementation","quantum lattice gas algorithm","quantum information processors","fluid dynamics problems","diffusion equation","nonlinear Burgers equations","nuclear magnetic resonance","two-qubit quantum information processors"],"prmu":["P","P","P","P","P","P","P","M"]} {"id":"724","title":"Banking on SMA funds [separately managed accounts]","abstract":"From investment management to technology to back-office services, outsourcers are elbowing their way into the SMA business. Small banks are paying attention-and hoping to reap the rewards","tok_text":"bank on sma fund [ separ manag account ] \n from invest manag to technolog to back-offic servic , outsourc are elbow their way into the sma busi . small bank are pay attention-and hope to reap the reward","ordered_present_kp":[19,48,64,77,97,146],"keyphrases":["separately managed accounts","investment management","technology","back-office services","outsourcers","small banks"],"prmu":["P","P","P","P","P","P"]} {"id":"1371","title":"Design methodology for diagnostic strategies for industrial systems","abstract":"This paper presents a method for the construction of diagnostic systems for complex industrial applications. The approach has been explicitly developed to shorten the design cycle and meet some specific requirements, such as modularity, flexibility, and the possibility of merging many different sources of information. The method allows one to consider multiple simultaneous failures and is specifically designed to make easier the coordination and simplification of local diagnostic algorithms developed by different teams","tok_text":"design methodolog for diagnost strategi for industri system \n thi paper present a method for the construct of diagnost system for complex industri applic . 
the approach ha been explicit develop to shorten the design cycl and meet some specif requir , such as modular , flexibl , and the possibl of merg mani differ sourc of inform . the method allow one to consid multipl simultan failur and is specif design to make easier the coordin and simplif of local diagnost algorithm develop by differ team","ordered_present_kp":[0,259,451,22,44],"keyphrases":["design methodology","diagnostic strategies","industrial systems","modularity","local diagnostic algorithms"],"prmu":["P","P","P","P","P"]} {"id":"1334","title":"A shy invariant of graphs","abstract":"Moving from a well known result of P.L. Hammer et al. (1982), we introduce a new graph invariant, say lambda (G) referring to any graph G. It is a non-negative integer which is non-zero whenever G contains particular induced odd cycles or, equivalently, admits a particular minimum clique-partition. We show that lambda (G) can be efficiently evaluated and that its determination allows one to reduce the hard problem of computing a minimum clique-cover of a graph to an identical problem of smaller size and special structure. Furthermore, one has alpha (G)"} {"title":"A min-max theorem on feedback vertex set","abstract":"We establish a necessary and sufficient condition for the linear system {x: Hx >= e, x >= 0} associated with a bipartite tournament to be totally dual integral, where H is the cycle-vertex incidence matrix and e is the all-one vector. The consequence is a min-max relation on packing and covering cycles, together with strongly polynomial time algorithms for the feedback vertex set problem and the cycle packing problem on the corresponding bipartite tournaments. In addition, we show that the feedback vertex set problem on general bipartite tournaments is NP-complete and approximable within 3.5 based on the min-max theorem","tok_text":"a min-max theorem on feedback vertex set \n we establish a necessari and suffici condit for the linear system { x : hx > or= e , x > or= 0 } associ with a bipartit tournament to be total dual integr , where h is the cycle-vertex incid matrix and e is the all-on vector . 
the consequ is a min-max relat on pack and cover cycl , togeth with strongli polynomi time algorithm for the feedback vertex set problem and the cycl pack problem on the correspond bipartit tournament . in addit , we show that the feedback vertex set problem on gener bipartit tournament is np-complet and approxim within 3.5 base on the min-max theorem","ordered_present_kp":[21,2,95,154,215,254,313,338,415,379],"keyphrases":["min-max theorem","feedback vertex sets","linear system","bipartite tournament","cycle-vertex incidence matrix","all-one vector","covering cycles","strongly polynomial time algorithms","feedback vertex set problem","cycle packing problem","necessary sufficient condition","totally dual integral system","NP-complete problem","graphs","combinatorial optimization problems","linear programming duality theory"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","R","R","U","M","M"]} {"id":"1174","title":"Optimization of cutting conditions for single pass turning operations using a deterministic approach","abstract":"An optimization analysis, strategy and CAM software for the selection of economic cutting conditions in single pass turning operations are presented using a deterministic approach. The optimization is based on criteria typified by the maximum production rate and includes a host of practical constraints. It is shown that the deterministic optimization approach involving mathematical analyses of constrained economic trends and graphical representation on the feed-speed domain provides a clearly defined strategy that not only provides a unique global optimum solution, but also the software that is suitable for on-line CAM applications. 
A numerical study has verified the developed optimization strategies and software and has shown the economic benefits of using optimization","tok_text":"optim of cut condit for singl pass turn oper use a determinist approach \n an optim analysi , strategi and cam softwar for the select of econom cut condit in singl pass turn oper are present use a determinist approach . the optim is base on criteria typifi by the maximum product rate and includ a host of practic constraint . it is shown that the determinist optim approach involv mathemat analys of constrain econom trend and graphic represent on the feed-spe domain provid a clearli defin strategi that not onli provid a uniqu global optimum solut , but also the softwar that is suitabl for on-lin cam applic . a numer studi ha verifi the develop optim strategi and softwar and ha shown the econom benefit of use optim","ordered_present_kp":[24,51,106,136,263,381,400],"keyphrases":["single pass turning operations","deterministic approach","CAM software","economic cutting conditions","maximum production rate","mathematical analyses","constrained economic trends","cutting conditions optimization","process planning"],"prmu":["P","P","P","P","P","P","P","R","U"]} {"id":"564","title":"Development of a computer-aided manufacturing system for profiled edge lamination tooling","abstract":"Profiled edge lamination (PEL) tooling is a promising rapid tooling (RT) method involving the assembly of an array of laminations whose top edges are simultaneously profiled and beveled based on a CAD model of the intended tool surface. To facilitate adoption of this RT method by industry, a comprehensive PEL tooling development system is proposed. The two main parts of this system are: (1) iterative tool design based on thermal and structural models; and (2) fabrication of the tool using a computer-aided manufacturing (CAM) software and abrasive water jet cutting. 
CAM software has been developed to take lamination slice data (profiles) from any proprietary RP software in the form of polylines and create smooth, kinematically desirable cutting trajectories for each tool lamination. Two cutting trajectory algorithms, called identical equidistant profile segmentation and adaptively vector profiles projection (AVPP), were created for this purpose. By comparing the performance of both algorithms with a benchmark part shape, the AVPP algorithm provided better cutting trajectories for complicated tool geometries. A 15-layer aluminum PEL tool was successfully fabricated using a 5-axis CNC AWJ cutter and NC code generated by the CAM software","tok_text":"develop of a computer-aid manufactur system for profil edg lamin tool \n profil edg lamin ( pel ) tool is a promis rapid tool ( rt ) method involv the assembl of an array of lamin whose top edg are simultan profil and bevel base on a cad model of the intend tool surfac . to facilit adopt of thi rt method by industri , a comprehens pel tool develop system is propos . the two main part of thi system are : ( 1 ) iter tool design base on thermal and structur model ; and ( 2 ) fabric of the tool use a computer-aid manufactur ( cam ) softwar and abras water jet cut . cam softwar ha been develop to take lamin slice data ( profil ) from ani proprietari rp softwar in the form of polylin and creat smooth , kinemat desir cut trajectori for each tool lamin . two cut trajectori algorithm , call ident equidist profil segment and adapt vector profil project ( avpp ) , were creat for thi purpos . by compar the perform of both algorithm with a benchmark part shape , the avpp algorithm provid better cut trajectori for complic tool geometri . 
a 15-layer aluminum pel tool wa success fabric use a 5-axi cnc awj cutter and nc code gener by the cam softwar","ordered_present_kp":[48,114,545,567,760,792,826],"keyphrases":["profiled edge lamination tooling","rapid tooling","abrasive water jet cutting","CAM software","cutting trajectory algorithms","identical equidistant profile segmentation","adaptively vector profiles projection","computer aided manufacturing"],"prmu":["P","P","P","P","P","P","P","M"]} {"id":"599","title":"Keen but confused [workflow & content management]","abstract":"IT users find workflow, content and business process management software appealing but by no means straightforward to implement. Pat Sweet reports on our latest research","tok_text":"keen but confus [ workflow & content manag ] \n it user find workflow , content and busi process manag softwar appeal but by no mean straightforward to implement . pat sweet report on our latest research","ordered_present_kp":[18,29,83,194],"keyphrases":["workflow","content management","business process management software","research","survey","market overview"],"prmu":["P","P","P","P","U","U"]} {"id":"1189","title":"CRM: approaching zenith","abstract":"Looks at how manufacturers are starting to warm up to the concept of customer relationship management. CRM has matured into what is expected to be big business. As CRM software evolves to its second, some say third, generation, it's likely to be more valuable to holdouts in manufacturing and other sectors","tok_text":"crm : approach zenith \n look at how manufactur are start to warm up to the concept of custom relationship manag . crm ha matur into what is expect to be big busi . 
as crm softwar evolv to it second , some say third , gener , it 's like to be more valuabl to holdout in manufactur and other sector","ordered_present_kp":[36,86,0],"keyphrases":["CRM","manufacturers","customer relationship management","manufacturing"],"prmu":["P","P","P","P"]} {"id":"1230","title":"Server safeguards tax service","abstract":"Peterborough-based tax consultancy IE Taxguard wanted real-time failover protection for important Windows-based applications. Its solution was to implement a powerful failover server from UK supplier Neverfail in order to provide real-time backup for three core production servers","tok_text":"server safeguard tax servic \n peterborough-bas tax consult ie taxguard want real-tim failov protect for import windows-bas applic . it solut wa to implement a power failov server from uk supplier neverfail in order to provid real-tim backup for three core product server","ordered_present_kp":[47,59,165,196,234],"keyphrases":["tax consultancy","IE Taxguard","failover server","Neverfail","backup"],"prmu":["P","P","P","P","P"]} {"id":"1275","title":"Modeling dynamic objects in distributed systems with nested Petri nets","abstract":"Nested Petri nets (NP-nets) is a Petri net extension, allowing tokens in a net marking to be represented by marked nets themselves. The paper discusses applicability of NP-nets for modeling task planning systems, multi-agent systems and recursive-parallel systems. A comparison of NP-nets with some other formalisms, such as OPNs of R. Valk (2000), recursive parallel programs of O. Kushnarenko and Ph. Schnoebelen (1997) and process algebras is given. Some aspects of decidability for object-oriented Petri net extensions are also discussed","tok_text":"model dynam object in distribut system with nest petri net \n nest petri net ( np-net ) is a petri net extens , allow token in a net mark to be repres by mark net themselv . 
the paper discuss applic of np-net for model task plan system , multi-ag system and recursive-parallel system . a comparison of np-net with some other formal , such as opn of r. valk ( 2000 ) , recurs parallel program of o. kushnarenko and ph . schnoebelen ( 1997 ) and process algebra is given . some aspect of decid for object-ori petri net extens are also discuss","ordered_present_kp":[22,44,237,257,443,485,495],"keyphrases":["distributed systems","nested Petri nets","multi-agent systems","recursive-parallel systems","process algebras","decidability","object-oriented Petri net","dynamic objects modelling"],"prmu":["P","P","P","P","P","P","P","R"]} {"id":"620","title":"Adaptive image enhancement for retinal blood vessel segmentation","abstract":"Retinal blood vessel images are enhanced by removing the nonstationary background, which is adaptively estimated based on local neighbourhood information. The result is a much better segmentation of the blood vessels with a simple algorithm and without the need to obtain a priori illumination knowledge of the imaging system","tok_text":"adapt imag enhanc for retin blood vessel segment \n retin blood vessel imag are enhanc by remov the nonstationari background , which is adapt estim base on local neighbourhood inform . the result is a much better segment of the blood vessel with a simpl algorithm and without the need to obtain a priori illumin knowledg of the imag system","ordered_present_kp":[0,51,155],"keyphrases":["adaptive image enhancement","retinal blood vessel images","local neighbourhood information","nonstationary background removal","image segmentation","personal identification","security applications"],"prmu":["P","P","P","R","R","U","U"]} {"id":"1094","title":"Efficient allocation of knowledge in distributed business structures","abstract":"Accelerated business processes demand new concepts and realizations of information systems and knowledge databases. 
This paper presents the concept of the collaborative information space (CIS), which supplies the necessary tools to transform individual knowledge into collective useful information. The creation of 'information objects' in the CIS allows an efficient allocation of information in all business process steps at any time. Furthermore, the specific availability of heterogeneous, distributed data is realized by a Web-based user interface, which enables effective search by a multidimensionally hierarchical composition","tok_text":"effici alloc of knowledg in distribut busi structur \n acceler busi process demand new concept and realiz of inform system and knowledg databas . thi paper present the concept of the collabor inform space ( ci ) , which suppli the necessari tool to transform individu knowledg into collect use inform . the creation of ' inform object ' in the ci allow an effici alloc of inform in all busi process step at ani time . furthermor , the specif avail of heterogen , distribut data is realiz by a web-bas user interfac , which enabl effect search by a multidimension hierarch composit","ordered_present_kp":[28,54,108,126,182,320,385,492,547],"keyphrases":["distributed business structures","accelerated business processes","information systems","knowledge databases","collaborative information space","information objects","business process steps","Web-based user interface","multidimensionally hierarchical composition","efficient knowledge allocation","heterogeneous distributed data","interactive system"],"prmu":["P","P","P","P","P","P","P","P","P","R","R","M"]} {"id":"1445","title":"Applying BGL to computational geometry","abstract":"The author applies Boost Graph Library to the domain of computational geometry. First, he formulates a concrete problem in graph terms. Second, he develops a way to transform the output of an existing algorithm into an appropriate Boost Graph Library data structure. 
Finally, he implements two new algorithms for his Boost Graph Library graph. The first algorithm gets the job done, but could have been written in any programming language. The second algorithm, however, shows the power of Boost Graph Library's generic programming approach. Graphs, graphics, and generic programming combine in this novel use of the Boost Graph Library","tok_text":"appli bgl to comput geometri \n the author appli boost graph librari to the domain of comput geometri . first , he formul a concret problem in graph term . second , he develop a way to transform the output of an exist algorithm into an appropri boost graph librari data structur . final , he implement two new algorithm for my boost graph librari graph . the first algorithm get the job done , but could have been written in ani program languag . the second algorithm , howev , show the power of boost graph librari 's gener program approach . graph , graphic , and gener program combin in thi novel use of the boost graph librari","ordered_present_kp":[48,13,518],"keyphrases":["computational geometry","Boost Graph Library","generic programming approach","Boost libraries","C++","threads","smart pointers","graph-theoretic concepts","directed graph","file dependencies","BGL graph"],"prmu":["P","P","P","R","U","U","U","U","M","U","R"]} {"id":"816","title":"Accelerating filtering techniques for numeric CSPs","abstract":"Search algorithms for solving Numeric CSPs (Constraint Satisfaction Problems) make an extensive use of filtering techniques. In this paper we show how those filtering techniques can be accelerated by discovering and exploiting some regularities during the filtering process. Two kinds of regularities are discussed, cyclic phenomena in the propagation queue and numeric regularities of the domains of the variables. 
We also present in this paper an attempt to unify numeric CSPs solving methods from two distinct communities, that of CSP in artificial intelligence, and that of interval analysis","tok_text":"acceler filter techniqu for numer csp \n search algorithm for solv numer csp ( constraint satisfact problem ) make an extens use of filter techniqu . in thi paper we show how those filter techniqu can be acceler by discov and exploit some regular dure the filter process . two kind of regular are discuss , cyclic phenomena in the propag queue and numer regular of the domain of the variabl . we also present in thi paper an attempt to unifi numer csp solv method from two distinct commun , that of csp in artifici intellig , and that of interv analysi","ordered_present_kp":[40,28,78,8,505,537,330],"keyphrases":["filtering techniques","Numeric CSPs","search algorithms","Constraint Satisfaction Problems","propagation","artificial intelligence","interval analysis","CSPs-solving","extrapolation methods","pruning"],"prmu":["P","P","P","P","P","P","P","U","M","U"]} {"id":"853","title":"Virtual Development Center","abstract":"The Virtual Development Center of the Institute for Women and Technology seeks to significantly enhance the impact of women on technology. It addresses this goal by increasing the number of women who have input on created technology, enhancing the ways people teach and develop technology, and developing need-based technology that serves the community. Through activities of the Virtual Development Center, a pattern is emerging regarding how computing technologies do or do not satisfy the needs of community groups, particularly those communities serving women. 
This paper describes the Virtual Development Center program and offers observations on the impact of computing technology on non-technical communities","tok_text":"virtual develop center \n the virtual develop center of the institut for women and technolog seek to significantli enhanc the impact of women on technolog . it address thi goal by increas the number of women who have input on creat technolog , enhanc the way peopl teach and develop technolog , and develop need-bas technolog that serv the commun . through activ of the virtual develop center , a pattern is emerg regard how comput technolog do or do not satisfi the need of commun group , particularli those commun serv women . thi paper describ the virtual develop center program and offer observ on the impact of comput technolog on non-techn commun","ordered_present_kp":[0,72,264,474],"keyphrases":["Virtual Development Center","women","teaching","community groups","information technology","gender issues","computer science education"],"prmu":["P","P","P","P","M","U","M"]} {"id":"778","title":"Access matters","abstract":"Discusses accessibility needs of people with disabilities, both from the perspective of getting the information from I&R programs (including accessible Web sites, TTY access, Braille, and other mechanisms) and from the perspective of being aware of accessibility needs when referring clients to resources. Includes information on ADA legislation requiring accessibility to public places and recommends several organizations and Web sites for additional information","tok_text":"access matter \n discuss access need of peopl with disabl , both from the perspect of get the inform from i&r program ( includ access web site , tti access , braill , and other mechan ) and from the perspect of be awar of access need when refer client to resourc . 
includ inform on ada legisl requir access to public place and recommend sever organ and web site for addit inform","ordered_present_kp":[24,126,144,157,281,309],"keyphrases":["accessibility needs","accessible Web sites","TTY access","Braille","ADA legislation","public places","disabled people","information and referral programs"],"prmu":["P","P","P","P","P","P","R","M"]} {"id":"1368","title":"Exploratory study of the adoption of manufacturing technology innovations in the USA and the UK","abstract":"Manufacturing technologies, appropriately implemented, provide competitive advantage to manufacturers. The use of manufacturing technologies across countries is difficult to compare. One such comparison has been provided in the literature with a study of US and Japanese practices in advanced manufacturing technology use using a common questionnaire. The present study compares the use of 17 different technologies in similar industries in the USA (n=1025) and UK (n=166) using a common questionnaire. Largely, there are remarkable similarities between the two countries. This may partly correlate with the heavy traffic in foreign direct investment between the two nations. Notable differences are (1) across-the-board, US manufacturers are ahead of the UK firms in computerized integration with units inside and outside manufacturing organizations; (2) US manufacturers show higher labour productivity, which is consistent with macro-economic data, and (3) more UK manufacturers report the use of soft technologies such as just-in-time, total quality manufacturing and manufacturing cells. Hypotheses for future investigation are proposed","tok_text":"exploratori studi of the adopt of manufactur technolog innov in the usa and the uk \n manufactur technolog , appropri implement , provid competit advantag to manufactur . the use of manufactur technolog across countri is difficult to compar . 
one such comparison ha been provid in the literatur with a studi of us and japanes practic in advanc manufactur technolog use use a common questionnair . the present studi compar the use of 17 differ technolog in similar industri in the usa ( n=1025 ) and uk ( n=166 ) use a common questionnair . larg , there are remark similar between the two countri . thi may partli correl with the heavi traffic in foreign direct invest between the two nation . notabl differ are ( 1 ) across-the-board , us manufactur are ahead of the uk firm in computer integr with unit insid and outsid manufactur organ ; ( 2 ) us manufactur show higher labour product , which is consist with macro-econom data , and ( 3 ) more uk manufactur report the use of soft technolog such as just-in-tim , total qualiti manufactur and manufactur cell . hypothes for futur investig are propos","ordered_present_kp":[34,68,80,136,645,871,910,977,1000,1014,1043],"keyphrases":["manufacturing technology innovations","USA","UK","competitive advantage","foreign direct investment","labour productivity","macro-economic data","soft technologies","just-in-time","total quality manufacturing","manufacturing cells"],"prmu":["P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1395","title":"Work in progress: Developing policies for access to government information in the New South Africa","abstract":"Following South Africa's transition to democracy in 1994, the SA government has adopted policies supporting freedom of expression and freedom of access to information. The Bill of Rights in the new Constitution includes a constitutional right of access to information held by the state. Since 1994 various initiatives have been taken by government and other bodies to promote such access. These include moves to reorganize government printing and publishing, restructure the government's public information services, make government information available on the Internet, and extend telephony and Internet access to poor communities. 
SA's new Legal Deposit Act (1997) makes provision for the creation of official publications depositories. The Promotion of Access to Information Act (2000) was enacted to ensure access to information held by the state and public bodies. However, despite much activity, it has proved difficult to translate principles into practical and well-coordinated measures to improve access to government information. A specific concern is the failure of policy-makers to visualize a role for libraries","tok_text":"work in progress : develop polici for access to govern inform in the new south africa \n follow south africa 's transit to democraci in 1994 , the sa govern ha adopt polici support freedom of express and freedom of access to inform . the bill of right in the new constitut includ a constitut right of access to inform held by the state . sinc 1994 variou initi have been taken by govern and other bodi to promot such access . these includ move to reorgan govern print and publish , restructur the govern 's public inform servic , make govern inform avail on the internet , and extend telephoni and internet access to poor commun . sa 's new legal deposit act , ( 1997 ) make provis for the creation of offici public depositori . the promot of access to inform act , ( 2000 ) wa enact to ensur access to inform held by the state and public bodi . howev , despit much activ , it ha prove difficult to translat principl into practic and well-coordin measur to improv access to govern inform . 
a specif concern is the failur of policy-mak to visual a role for librari","ordered_present_kp":[48,73,180,203,237,281,454,506,561,701,831,1055],"keyphrases":["government information","South Africa","freedom of expression","freedom of access to information","Bill of Rights","constitutional right of access","government printing","public information services","Internet","official publications depositories","public bodies","libraries","government publishing"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"785","title":"Networking without wires","abstract":"Several types of devices use radio transmitters to send data over thin air. Are WLANs, wireless local area networks, the end to all cables? Will Dalrymple weighs up the costs and benefits","tok_text":"network without wire \n sever type of devic use radio transmitt to send data over thin air . are wlan , wireless local area network , the end to all cabl ? will dalrympl weigh up the cost and benefit","ordered_present_kp":[103,182,191],"keyphrases":["wireless local area networks","costs","benefits"],"prmu":["P","P","P"]} {"id":"1069","title":"Entangling atoms in bad cavities","abstract":"We propose a method to produce entangled spin squeezed states of a large number of atoms inside an optical cavity. By illuminating the atoms with bichromatic light, the coupling to the cavity induces pairwise exchange of excitations which entangles the atoms. Unlike most proposals for entangling atoms by cavity QED, our proposal does not require the strong coupling regime g^2\/(kappa Gamma) >> 1, where g is the atom cavity coupling strength, kappa is the cavity decay rate, and Gamma is the decay rate of the atoms. 
In this work the important parameter is Ng^2\/(kappa Gamma), where N is the number of atoms, and our proposal permits the production of entanglement in bad cavities as long as they contain a large number of atoms","tok_text":"entangl atom in bad caviti \n we propos a method to produc entangl spin squeez state of a larg number of atom insid an optic caviti . by illumin the atom with bichromat light , the coupl to the caviti induc pairwis exchang of excit which entangl the atom . unlik most propos for entangl atom by caviti qed , our propos doe not requir the strong coupl regim g \/ sup 2\/\/ kappa gamma > > 1 , where g is the atom caviti coupl strength , kappa is the caviti decay rate , and gamma is the decay rate of the atom . in thi work the import paramet is ng \/ sup 2\/\/ kappa gamma , where n is the number of atom , and our propos permit the product of entangl in bad caviti as long as they contain a larg number of atom","ordered_present_kp":[58,118,180,206,225,294,337,445,403,16],"keyphrases":["bad cavities","entangled spin squeezed states","optical cavity","coupling","pairwise exchange","excitations","cavity QED","strong coupling regime","atom cavity coupling strength","cavity decay rate","atom entanglement","bichromatic light illumination"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","R"]} {"id":"893","title":"Use of natural language processing to translate clinical information from a database of 889,921 chest radiographic reports","abstract":"The aim was to evaluate translation of chest radiographic reports using natural language processing and to compare the findings with those in the literature. A natural language processor coded 10 years of narrative chest radiographic reports from an urban academic medical center. Coding for 150 reports was compared with manual coding. Frequencies and cooccurrences of 24 clinical conditions (diseases, abnormalities, and clinical states) were estimated. 
The ratio of right to left lung mass, association of pleural effusion with other conditions, and frequency of bullet and stab wounds were compared with independent observations. The sensitivity and specificity of the system's pneumothorax coding were compared with those of manual financial coding. Internal and external validation in this study confirmed the accuracy of natural language processing for translating chest radiographic narrative reports into a large database of information","tok_text":"use of natur languag process to translat clinic inform from a databas of 889,921 chest radiograph report \n the aim wa to evalu translat of chest radiograph report use natur languag process and to compar the find with those in the literatur . a natur languag processor code 10 year of narr chest radiograph report from an urban academ medic center . code for 150 report wa compar with manual code . frequenc and cooccurr of 24 clinic condit ( diseas , abnorm , and clinic state ) were estim . the ratio of right to left lung mass , associ of pleural effus with other condit , and frequenc of bullet and stab wound were compar with independ observ . the sensit and specif of the system 's pneumothorax code were compar with those of manual financi code . 
intern and extern valid in thi studi confirm the accuraci of natur languag process for translat chest radiograph narr report into a larg databas of inform","ordered_present_kp":[7,321,541,602,687],"keyphrases":["natural language processing","urban academic medical center","pleural effusion","stab wounds","pneumothorax coding","chest radiographic report database","clinical information translation","clinical condition frequency","clinical condition cooccurrence","right to left lung mass ratio","bullet wounds"],"prmu":["P","P","P","P","P","R","R","R","R","R","R"]} {"id":"1054","title":"Choice preferences without inferences: subconscious priming of risk attitudes","abstract":"We present a procedure for subconscious priming of risk attitudes. In Experiment 1, we were reliably able to induce risk-seeking or risk-averse preferences across a range of decision scenarios using this priming procedure. In Experiment 2, we showed that these priming effects can be reversed by drawing participants' attention to the priming event. Our results support claims that the formation of risk preferences can be based on preconscious processing, as for example postulated by the affective primacy hypothesis, rather than rely on deliberative mental operations, as posited by several current models of judgment and decision making","tok_text":"choic prefer without infer : subconsci prime of risk attitud \n we present a procedur for subconsci prime of risk attitud . in experi 1 , we were reliabl abl to induc risk-seek or risk-avers prefer across a rang of decis scenario use thi prime procedur . in experi 2 , we show that these prime effect can be revers by draw particip ' attent to the prime event . 
our result support claim that the format of risk prefer can be base on preconsci process , as for exampl postul by the affect primaci hypothesi , rather than reli on delib mental oper , as posit by sever current model of judgment and decis make","ordered_present_kp":[29,48,179,214,432,480,527,0],"keyphrases":["choice preferences","subconscious priming","risk attitudes","risk-averse preferences","decision scenarios","preconscious processing","affective primacy hypothesis","deliberative mental operations","risk-seeking preferences"],"prmu":["P","P","P","P","P","P","P","P","R"]} {"id":"1011","title":"A self-organizing context-based approach to the tracking of multiple robot trajectories","abstract":"We have combined competitive and Hebbian learning in a neural network designed to learn and recall complex spatiotemporal sequences. In such sequences, a particular item may occur more than once or the sequence may share states with another sequence. Processing of repeated\/shared states is a hard problem that occurs very often in the domain of robotics. The proposed model consists of two groups of synaptic weights: competitive interlayer and Hebbian intralayer connections, which are responsible for encoding respectively the spatial and temporal features of the input sequence. Three additional mechanisms allow the network to deal with shared states: context units, neurons disabled from learning, and redundancy used to encode sequence states. The network operates by determining the current and the next state of the learned sequences. The model is simulated over various sets of robot trajectories in order to evaluate its storage and retrieval abilities; its sequence sampling effects; its robustness to noise and its tolerance to fault","tok_text":"a self-organ context-bas approach to the track of multipl robot trajectori \n we have combin competit and hebbian learn in a neural network design to learn and recal complex spatiotempor sequenc . 
in such sequenc , a particular item may occur more than onc or the sequenc may share state with anoth sequenc . process of repeat \/ share state is a hard problem that occur veri often in the domain of robot . the propos model consist of two group of synapt weight : competit interlay and hebbian intralay connect , which are respons for encod respect the spatial and tempor featur of the input sequenc . three addit mechan allow the network to deal with share state : context unit , neuron disabl from learn , and redund use to encod sequenc state . the network oper by determin the current and the next state of the learn sequenc . the model is simul over variou set of robot trajectori in order to evalu it storag and retriev abil ; it sequenc sampl effect ; it robust to nois and it toler to fault","ordered_present_kp":[2,105,165,446,484,275,664,730,58,916,934],"keyphrases":["self-organizing context-based approach","robot trajectories","Hebbian learning","complex spatiotemporal sequences","shared states","synaptic weights","Hebbian intralayer connections","context units","sequence states","retrieval abilities","sequence sampling effects","trajectories tracking","competitive learning","competitive interlayer connections","unsupervised learning","storage abilities","fault tolerance"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R","R","R","M","R","R"]} {"id":"745","title":"Intensity based affine registration including feature similarity for spatial normalization","abstract":"This paper presents a new spatial normalization with affine transformation. The quantitative comparison of brain architecture across different subjects requires a common coordinate system. For the analysis of a specific brain area, it is necessary to normalize and compare a region of interest and the global brain. The intensity based registration method matches the global brain well, but a region of interest may not be locally normalized compared to the feature based method. 
The method in this paper uses feature similarities of local regions as well as intensity similarities. The lateral ventricle and central gray nuclei of the brain, including the corpus callosum, which is used for features in schizophrenia detection, is appropriately normalized. Our method reduces the difference of feature areas such as the corpus callosum (7.7%, 2.4%) and lateral ventricle (8.2%, 13.5%) compared with mutual information and Talairach methods","tok_text":"intens base affin registr includ featur similar for spatial normal \n thi paper present a new spatial normal with affin transform . the quantit comparison of brain architectur across differ subject requir a common coordin system . for the analysi of a specif brain area , it is necessari to normal and compar a region of interest and the global brain . the intens base registr method match the global brain well , but a region of interest may not be local normal compar to the featur base method . the method in thi paper use featur similar of local region as well as intens similar . the later ventricl and central gray nuclei of the brain , includ the corpu callosum , which is use for featur in schizophrenia detect , is appropri normal . 
our method reduc the differ of featur area such as the corpu callosum ( 7.7 % , 2.4 % ) and later ventricl ( 8.2 % , 13.5 % ) compar with mutual inform and talairach method","ordered_present_kp":[0,33,52,113,157,206,337,310,33,588,607,653,697,897],"keyphrases":["intensity based affine registration","feature similarity","feature similarity","spatial normalization","affine transformation","brain architecture","common coordinate system","region of interest","global brain","lateral ventricle","central gray nuclei","corpus callosum","schizophrenia detection","Talairach method","feature similarities","mutual information method"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"700","title":"Digital stochastic realization of complex analog controllers","abstract":"Stochastic logic is based on digital processing of a random pulse stream, where the information is codified as the probability of a high level in a finite sequence. This binary pulse sequence can be digitally processed exploiting the similarity between Boolean algebra and statistical algebra. Given a random pulse sequence, any Boolean operation among individual pulses will correspond to an algebraic expression among the variables represented by their respective average pulse rates. Subsequently, this pulse stream can be digitally processed to perform analog operations. In this paper, we propose a stochastic approach to the digital implementation of complex controllers using programmable devices as an alternative to traditional digital signal processors. As an example, a practical realization of nonlinear dissipative controllers for a series resonant converter is presented","tok_text":"digit stochast realiz of complex analog control \n stochast logic is base on digit process of a random puls stream , where the inform is codifi as the probabl of a high level in a finit sequenc . 
thi binari puls sequenc can be digit process exploit the similar between boolean algebra and statist algebra . given a random puls sequenc , ani boolean oper among individu puls will correspond to an algebra express among the variabl repres by their respect averag puls rate . subsequ , thi puls stream can be digit process to perform analog oper . in thi paper , we propos a stochast approach to the digit implement of complex control use programm devic as an altern to tradit digit signal processor . as an exampl , a practic realiz of nonlinear dissip control for a seri reson convert is present","ordered_present_kp":[0,25,50,95,179,199,268,288,314,340,453,102,571,635,733,764],"keyphrases":["digital stochastic realization","complex analog controllers","stochastic logic","random pulse stream","pulse stream","finite sequence","binary pulse sequence","Boolean algebra","statistical algebra","random pulse sequence","Boolean operation","average pulse rates","stochastic approach","programmable devices","nonlinear dissipative controllers","series resonant converter","parallel resonant DC-to-DC converters","series resonant DC-to-DC converters"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","M","M"]} {"id":"1355","title":"Comparison of push and pull systems with transporters: a metamodelling approach","abstract":"Analyses push and pull systems with transportation consideration. A multiproduct, multiline, multistage production system was used to compare the two systems. The effects of four factors (processing time variation, demand variation, transporters, batch size) on throughput rate, average waiting time in the system and machine utilization were studied. The study uses metamodels to compare the two systems. They serve a dual purpose of expressing system performance measures in the form of a simple equation and reducing computational time when comparing the two systems. 
Research shows that the number of transporters used and the batch size have a significant effect on the performance measures of both systems","tok_text":"comparison of push and pull system with transport : a metamodel approach \n analys push and pull system with transport consider . a multiproduct , multilin , multistag product system wa use to compar the two system . the effect of four factor ( process time variat , demand variat , transport , batch size ) on throughput rate , averag wait time in the system and machin util were studi . the studi use metamodel to compar the two system . they serv a dual purpos of express system perform measur in the form of a simpl equat and reduc comput time when compar the two system . research show that the number of transport use and the batch size have a signific effect on the perform measur of both system","ordered_present_kp":[40,23,54,244,266,294,310,328,363,481],"keyphrases":["pull systems","transporters","metamodelling approach","processing time variation","demand variation","batch size","throughput rate","average waiting time","machine utilization","performance measures","push systems","multiproduct multiline multistage production system"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","R"]} {"id":"1310","title":"Cat and class: what use are these skills to the new legal information professional?","abstract":"This article looks at the cataloguing and classification skills taught on information studies courses and the use these skills are to new legal information professionals. The article is based on the opinions of nine new legal information professionals from both academic and law firm libraries","tok_text":"cat and class : what use are these skill to the new legal inform profession ? \n thi articl look at the catalogu and classif skill taught on inform studi cours and the use these skill are to new legal inform profession . 
the articl is base on the opinion of nine new legal inform profession from both academ and law firm librari","ordered_present_kp":[52,103,116,140,311],"keyphrases":["legal information professional","cataloguing","classification","information studies courses","law firm libraries","academic libraries"],"prmu":["P","P","P","P","P","R"]} {"id":"130","title":"Resolution of a current-mode algorithmic analog-to-digital converter","abstract":"Errors limiting the resolution of current-mode algorithmic analog-to-digital converters are mainly related to current mirror operation. While systematic errors can be minimized by proper circuit techniques, random sources are unavoidable. In this paper a statistical analysis of the resolution of a typical converter is carried out taking into account process tolerances. To support the analysis, a 4-bit ADC, realized in a 0.35- mu m CMOS technology, was exhaustively simulated. Results were found to be in excellent agreement with theoretical derivations","tok_text":"resolut of a current-mod algorithm analog-to-digit convert \n error limit the resolut of current-mod algorithm analog-to-digit convert are mainli relat to current mirror oper . while systemat error can be minim by proper circuit techniqu , random sourc are unavoid . in thi paper a statist analysi of the resolut of a typic convert is carri out take into account process toler . to support the analysi , a 4-bit adc , realiz in a 0.35- mu m cmo technolog , wa exhaust simul . 
result were found to be in excel agreement with theoret deriv","ordered_present_kp":[35,0,220,281,440],"keyphrases":["resolution","analog-to-digital converters","circuit techniques","statistical analysis","CMOS technology","current-mode ADC","algorithmic ADC","A\/D converters","error analysis","tolerance analysis","circuit analysis","0.35 micron","4 bit"],"prmu":["P","P","P","P","P","R","R","M","R","R","R","U","U"]} {"id":"973","title":"Time-integration of multiphase chemistry in size-resolved cloud models","abstract":"The existence of cloud drops leads to a transfer of chemical species between the gas and aqueous phases. Species concentrations in both phases are modified by chemical reactions and by this phase transfer. The model equations resulting from such multiphase chemical systems are nonlinear, highly coupled and extremely stiff. In the paper we investigate several numerical approaches for treating such processes. The droplets are subdivided into several classes. This decomposition of the droplet spectrum into classes is based on their droplet size and the amount of scavenged material inside the drops, respectively. The very fast dissociations in the aqueous phase chemistry are treated as forward and backward reactions. The aqueous phase and gas phase chemistry, the mass transfer between the different droplet classes among themselves and with the gas phase are integrated in an implicit and coupled manner by the second order BDF method. For this part we apply a modification of the code LSODE with special linear system solvers. These direct sparse techniques exploit the special block structure of the corresponding Jacobian. Furthermore we investigate an approximate matrix factorization which is related to operator splitting at the linear algebra level. The sparse Jacobians are generated explicitly and stored in a sparse form. 
The efficiency and accuracy of our time-integration schemes are discussed for four multiphase chemistry systems of different complexity and for a different number of droplet classes","tok_text":"time-integr of multiphas chemistri in size-resolv cloud model \n the exist of cloud drop lead to a transfer of chemic speci between the ga and aqueou phase . speci concentr in both phase are modifi by chemic reaction and by thi phase transfer . the model equat result from such multiphas chemic system are nonlinear , highli coupl and extrem stiff . in the paper we investig sever numer approach for treat such process . the droplet are subdivid into sever class . thi decomposit of the droplet spectrum into class is base on their droplet size and the amount of scaveng materi insid the drop , respect . the veri fast dissoci in the aqueou phase chemistri are treat as forward and backward reaction . the aqueou phase and ga phase chemistri , the mass transfer between the differ droplet class among themselv and with the ga phase are integr in an implicit and coupl manner by the second order bdf method . for thi part we appli a modif of the code lsode with special linear system solver . these direct spars techniqu exploit the special block structur of the correspond jacobian . furthermor we investig an approxim matrix factor which is relat to oper split at the linear algebra level . the spars jacobian are gener explicitli and store in a spars form .
the effici and accuraci of our time-integr scheme is discuss for four multiphas chemistri system of differ complex and for a differ number of droplet class","ordered_present_kp":[15,38,77,110,200,277,633,722,1109,1150,1168,1195,1290],"keyphrases":["multiphase chemistry","size-resolved cloud models","cloud drops","chemical species","chemical reactions","multiphase chemical systems","aqueous phase chemistry","gas phase chemistry","approximate matrix factorization","operator splitting","linear algebra","sparse Jacobians","time-integration schemes","air pollution modelling"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","M"]} {"id":"936","title":"Resonant controllers for smart structures","abstract":"In this paper we propose a special type of colocated feedback controller for smart structures. The controller is a parallel combination of high-Q resonant circuits. Each of the resonant circuits is tuned to a pole (or the resonant frequency) of the smart structure. It is proven that the parallel combination of resonant controllers is stable with an infinite gain margin. Only one set of actuator-sensor can damp multiple resonant modes with the resonant controllers. Experimental results are presented to show the robustness of the proposed controller in damping multimode resonances","tok_text":"reson control for smart structur \n in thi paper we propos a special type of coloc feedback control for smart structur . the control is a parallel combin of high-q reson circuit . each of the reson circuit is tune to a pole ( or the reson frequenc ) of the smart structur . it is proven that the parallel combin of reson control is stabl with an infinit gain margin . onli one set of actuator-sensor can damp multipl reson mode with the reson control . 
experiment result are present to show the robust of the propos control in damp multimod reson","ordered_present_kp":[82,18,156,232,18,383,408,403,531],"keyphrases":["smart structures","smart structures","feedback controller","high-Q resonant circuits","resonant frequency","actuator-sensor","damping","multiple resonant modes","multimode resonances","smart structure","laminate beam"],"prmu":["P","P","P","P","P","P","P","P","P","P","U"]} {"id":"1248","title":"Public business libraries: the next chapter","abstract":"Traces the history of the provision of business information by Leeds Public Libraries, UK, from the opening of the Public Commercial and Technical Library in 1918 to the revolutionary impact of the Internet in the 1990s. Describes how the Library came to terms with the need to integrate the Internet into its mainstream business information services, with particular reference to its limitations and to the provision of company information, market research, British Standards information, press cuttings and articles from specialized trade and scientific journals, and patents information. Focuses on some of the reasons why the public business library is still needed as a service to businesses, even after the introduction of the Internet and considers the Library's changing role and the need to impress on all concerned, especially government, the continuing value of these services. Looks to the partnerships formed by the Library over the years and the ways in which these are expected to assist in realizing future opportunities, in particular, the fact that all public libraries in England gained free Internet access at the end of 2001. Offers some useful ideas about how the Library could develop, noting that SINTO, a Sheffield based information network formed in 1938 and originally a partnership between the public library, the two Sheffield universities and various leading steel companies of the time, is being examined as a model for future services in Leeds. 
Concludes that the way forward can be defined in terms of five actions: redefinition of priorities; marketing; budgets; resources; and the use of information technology (IT)","tok_text":"public busi librari : the next chapter \n trace the histori of the provis of busi inform by leed public librari , uk , from the open of the public commerci and technic librari in 1918 to the revolutionari impact of the internet in the 1990 . describ how the librari came to term with the need to integr the internet into it mainstream busi inform servic , with particular refer to it limit and to the provis of compani inform , market research , british standard inform , press cut and articl from special trade and scientif journal , and patent inform . focus on some of the reason whi the public busi librari is still need as a servic to busi , even after the introduct of the internet and consid the librari 's chang role and the need to impress on all concern , especi govern , the continu valu of these servic . look to the partnership form by the librari over the year and the way in which these are expect to assist in realiz futur opportun , in particular , the fact that all public librari in england gain free internet access at the end of 2001 . offer some use idea about how the librari could develop , note that sinto , a sheffield base inform network form in 1938 and origin a partnership between the public librari , the two sheffield univers and variou lead steel compani of the time , is be examin as a model for futur servic in leed . 
conclud that the way forward can be defin in term of five action : redefinit of prioriti ; market ; budget ; resourc ; and the use of inform technolog ( it )","ordered_present_kp":[51,0,91,218,139,334,410,427,445,471,538,772,1124,1149,1239,1273,427,1452,1461],"keyphrases":["public business libraries","history","Leeds Public Libraries","Public Commercial and Technical Library","Internet","business information services","company information","market research","marketing","British Standards information","press cuttings","patents information","government","SINTO","information network","Sheffield universities","steel companies","budgets","resources","trade journal articles","scientific journal articles","priority redefinition","IT use"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R","R"]} {"id":"658","title":"Process pioneers [agile business]","abstract":"By managing IT infrastructures along so-called 'top down' lines, organisations can streamline their business processes, eliminate redundant tasks and increase automation","tok_text":"process pioneer [ agil busi ] \n by manag it infrastructur along so-cal ' top down ' line , organis can streamlin their busi process , elimin redund task and increas autom","ordered_present_kp":[18,35,119,157],"keyphrases":["agile business","managing IT infrastructures","business processes","increase automation"],"prmu":["P","P","P","P"]} {"id":"62","title":"Text-independent speaker verification using utterance level scoring and covariance modeling","abstract":"This paper describes a computationally simple method to perform text independent speaker verification using second order statistics. The suggested method, called utterance level scoring (ULS), allows one to obtain a normalized score using a single pass through the frames of the tested utterance. The utterance sample covariance is first calculated and then compared to the speaker covariance using a distortion measure. 
Subsequently, a distortion measure between the utterance covariance and the sample covariance of data taken from different speakers is used to normalize the score. Experimental results from the 2000 NIST speaker recognition evaluation are presented for ULS, used with different distortion measures, and for a Gaussian mixture model (GMM) system. The results indicate that ULS is a viable alternative to GMM whenever the computational complexity and verification accuracy need to be traded","tok_text":"text-independ speaker verif use utter level score and covari model \n thi paper describ a comput simpl method to perform text independ speaker verif use second order statist . the suggest method , call utter level score ( ul ) , allow one to obtain a normal score use a singl pass through the frame of the test utter . the utter sampl covari is first calcul and then compar to the speaker covari use a distort measur . subsequ , a distort measur between the utter covari and the sampl covari of data taken from differ speaker is use to normal the score . experiment result from the 2000 nist speaker recognit evalu are present for ul , use with differ distort measur , and for a gaussian mixtur model ( gmm ) system .
the result indic that ul as a viabl altern to gmm whenev the comput complex and verif accuraci need to be trade","ordered_present_kp":[0,32,54,89,152,250,328,380,401,586,401,678,702,778,797],"keyphrases":["text-independent speaker verification","utterance level scoring","covariance modeling","computationally simple method","second order statistics","normalized score","sample covariance","speaker covariance","distortion measure","distortion measure","NIST speaker recognition evaluation","Gaussian mixture model","GMM","computational complexity","verification accuracy","distortion measures"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1149","title":"Deterministic calculations of photon spectra for clinical accelerator targets","abstract":"A method is proposed to compute photon energy spectra produced in clinical electron accelerator targets, based on the deterministic solution of the Boltzmann equation for coupled electron-photon transport in one-dimensional (1-D) slab geometry. It is shown that the deterministic method gives similar results as Monte Carlo calculations over the angular range of interest for therapy applications. Relative energy spectra computed by deterministic and 3-D Monte Carlo methods, respectively, are compared for several realistic target materials and different electron beams, and are found to give similar photon energy distributions and mean energies. The deterministic calculations typically require 1-2 mins of execution time on a Sun Workstation, compared to 2-36 h for the Monte Carlo runs","tok_text":"determinist calcul of photon spectra for clinic acceler target \n a method is propos to comput photon energi spectra produc in clinic electron acceler target , base on the determinist solut of the boltzmann equat for coupl electron-photon transport in one-dimension ( 1-d ) slab geometri . 
it is shown that the determinist method give similar result as mont carlo calcul over the angular rang of interest for therapi applic . rel energi spectra comput by determinist and 3-d mont carlo method , respect , are compar for sever realist target materi and differ electron beam , and are found to give similar photon energi distribut and mean energi . the determinist calcul typic requir 1 - 2 min of execut time on a sun workstat , compar to 2 - 36 h for the mont carlo run","ordered_present_kp":[94,0,126,196,216,379,408,425,470],"keyphrases":["deterministic calculations","photon energy spectra","clinical electron accelerator targets","Boltzmann equation","coupled electron-photon transport","angular range of interest","therapy applications","relative energy spectra","3-D Monte Carlo methods","one-dimensional slab geometry","linear accelerator","therapy planning","integrodifferential equation","pencil beam source representations"],"prmu":["P","P","P","P","P","P","P","P","P","R","M","M","M","M"]} {"id":"559","title":"Is open source more or less secure?","abstract":"Networks dominate today's computing landscape and commercial technical protection is lagging behind attack technology. As a result, protection programme success depends more on prudent management decisions than on the selection of technical safeguards. The paper takes a management view of protection and seeks to reconcile the need for security with the limitations of technology","tok_text":"is open sourc more or less secur ? \n network domin today 's comput landscap and commerci technic protect is lag behind attack technolog . as a result , protect programm success depend more on prudent manag decis than on the select of technic safeguard . 
the paper take a manag view of protect and seek to reconcil the need for secur with the limit of technolog","ordered_present_kp":[80,119,200],"keyphrases":["commercial technical protection","attack technology","management","open source software security","computer networks","data security"],"prmu":["P","P","P","M","R","M"]} {"id":"747","title":"Simulation of cardiovascular physiology: the diastolic function(s) of the heart","abstract":"The cardiovascular system was simulated by using an equivalent electronic circuit. Four sets of simulations were performed. The basic variables investigated were cardiac output and stroke volume. They were studied as functions (i) of right ventricular capacitance and negative intrathoracic pressure; (ii) of left ventricular relaxation and of heart rate; and (iii) of left ventricle failure. It seems that a satisfactory simulation of systolic and diastolic functions of the heart is possible. Presented simulations improve our understanding of the role of the capacitance of both ventricles and of the diastolic relaxation in cardiovascular physiology","tok_text":"simul of cardiovascular physiolog : the diastol function( ) of the heart \n the cardiovascular system wa simul by use an equival electron circuit . four set of simul were perform . the basic variabl investig were cardiac output and stroke volum . they were studi as function ( i ) of right ventricular capacit and neg intrathorac pressur ; ( ii ) of left ventricular relax and of heart rate ; and ( iii ) of left ventricl failur . it seem that a satisfactori simul of systol and diastol function of the heart is possibl . 
present simul improv our understand of the role of the capacit of both ventricl and of the diastol relax in cardiovascular physiolog","ordered_present_kp":[9,0,40,67,120,212,231,283,313,349,379,407,612],"keyphrases":["simulation","cardiovascular physiology","diastolic function","heart","equivalent electronic circuit","cardiac output","stroke volume","right ventricular capacitance","negative intrathoracic pressure","left ventricular relaxation","heart rate","left ventricle failure","diastolic relaxation","systolic functions"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"702","title":"A comparison of high-power converter topologies for the implementation of FACTS controllers","abstract":"This paper compares four power converter topologies for the implementation of flexible AC transmission system (FACTS) controllers: three multilevel topologies (multipoint clamped (MPC), chain, and nested cell) and the well-established multipulse topology. In keeping with the need to implement very-high-power inverters, switching frequency is restricted to line frequency. The study addresses device count, DC filter ratings, restrictions on voltage control, active power transfer through the DC link, and balancing of DC-link voltages. Emphasis is placed on capacitor sizing because of its impact on the cost and size of the FACTS controller. A method for the dimensioning the DC capacitor filter is presented. It is found that the chain converter is attractive for the implementation of a static compensator or a static synchronous series compensator. 
The MPC converter is attractive for the implementation of a unified power flow controller or an interline power flow controller, but a special arrangement is required to overcome the limitations on voltage control","tok_text":"a comparison of high-pow convert topolog for the implement of fact control \n thi paper compar four power convert topolog for the implement of flexibl ac transmiss system ( fact ) control : three multilevel topolog ( multipoint clamp ( mpc ) , chain , and nest cell ) and the well-establish multipuls topolog . in keep with the need to implement very-high-pow invert , switch frequenc is restrict to line frequenc . the studi address devic count , dc filter rate , restrict on voltag control , activ power transfer through the dc link , and balanc of dc-link voltag . emphasi is place on capacitor size becaus of it impact on the cost and size of the fact control . a method for the dimens the dc capacitor filter is present . it is found that the chain convert is attract for the implement of a static compens or a static synchron seri compens . 
the mpc convert is attract for the implement of a unifi power flow control or an interlin power flow control , but a special arrang is requir to overcom the limit on voltag control","ordered_present_kp":[62,195,290,359,368,433,447,896,795,815],"keyphrases":["FACTS controllers","multilevel topologies","multipulse topology","inverters","switching frequency","device count","DC filter ratings","static compensator","static synchronous series compensator","unified power flow controller","high-power converter topologies comparison","multipoint clamped topology","STATCOM","UPFC"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","R","U","U"]} {"id":"1357","title":"Work sequencing in a manufacturing cell with limited labour constraints","abstract":"This study focuses on the analysis of group scheduling heuristics in a dual-constrained, automated manufacturing cell, where labour utilization is limited to setups, tear-downs and loads\/unloads. This scenario is realistic in today's automated manufacturing cells. The results indicate that policies for allocating labour to tasks have very little impact in such an environment. Furthermore, the performance of efficiency oriented, exhaustive, group scheduling heuristics deteriorated while the performance of the more complex, non-exhaustive heuristics improved. Thus, it is recommended that production managers use the simplest labour scheduling policy, and instead focus their efforts to activities such as job scheduling and production planning in such environments","tok_text":"work sequenc in a manufactur cell with limit labour constraint \n thi studi focus on the analysi of group schedul heurist in a dual-constrain , autom manufactur cell , where labour util is limit to setup , tear-down and load \/ unload . thi scenario is realist in today 's autom manufactur cell . the result indic that polici for alloc labour to task have veri littl impact in such an environ . 
furthermor , the perform of effici orient , exhaust , group schedul heurist deterior while the perform of the more complex , non-exhaust heurist improv . thu , it is recommend that product manag use the simplest labour schedul polici , and instead focu their effort to activ such as job schedul and product plan in such environ","ordered_present_kp":[0,18,39,99,143,692,676],"keyphrases":["work sequencing","manufacturing cell","limited labour constraints","group scheduling heuristics","automated manufacturing cells","job scheduling","production planning","dual-constrained automated manufacturing cell","labour allocation policies","efficiency oriented exhaustive group scheduling heuristics","nonexhaustive heuristics"],"prmu":["P","P","P","P","P","P","P","R","R","R","M"]} {"id":"1312","title":"Stability in the numerical solution of the heat equation with nonlocal boundary conditions","abstract":"This paper deals with numerical methods for the solution of the heat equation with integral boundary conditions. Finite differences are used for the discretization in space. The matrices specifying the resulting semidiscrete problem are proved to satisfy a sectorial resolvent condition, uniformly with respect to the discretization parameter. Using this resolvent condition, unconditional stability is proved for the fully discrete numerical process generated by applying A( theta )-stable one-step methods to the semidiscrete problem. This stability result is established in the maximum norm; it improves some previous results in the literature in that it is not subject to various unnatural restrictions which were imposed on the boundary conditions and on the one-step methods","tok_text":"stabil in the numer solut of the heat equat with nonloc boundari condit \n thi paper deal with numer method for the solut of the heat equat with integr boundari condit . finit differ are use for the discret in space . 
the matric specifi the result semidiscret problem are prove to satisfi a sectori resolv condit , uniformli with respect to the discret paramet . use thi resolv condit , uncondit stabil is prove for the fulli discret numer process gener by appli a ( theta ) -stabl one-step method to the semidiscret problem . thi stabil result is establish in the maximum norm ; it improv some previou result in the literatur in that it is not subject to variou unnatur restrict which were impos on the boundari condit and on the one-step method","ordered_present_kp":[14,33,49,0,144,169,221,247,290,419,481,564],"keyphrases":["stability","numerical solution","heat equation","nonlocal boundary conditions","integral boundary conditions","finite differences","matrices","semidiscrete problem","sectorial resolvent condition","fully discrete numerical process","one-step methods","maximum norm","space discretization"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"829","title":"Santera targets independents in major strategy overhaul [telecom]","abstract":"With big carriers slashing capital expense budgets, Santera Systems is broadening the reach of its next-generation switching platform to include independent telcos. This week, the vendor will announce that it has signed a deal with Kerman, Calif-based Kerman Telephone Co. Furthermore, the company is angling for inclusion in the Rural Utilities Service's approved equipment list, hoping to sell its Class 5 replacement boxes to the smallest carriers. The move is almost a complete reversal for the Plano, Texas-based vendor, which previously focused solely on large carriers, including the RBOCs","tok_text":"santera target independ in major strategi overhaul [ telecom ] \n with big carrier slash capit expens budget , santera system is broaden the reach of it next-gener switch platform to includ independ telco . thi week , the vendor will announc that it ha sign a deal with kerman , calif-bas kerman telephon co. 
furthermor , the compani is angl for inclus in the rural util servic 's approv equip list , hope to sell it class 5 replac box to the smallest carrier . the move is almost a complet revers for the plano , texas-bas vendor , which previous focus sole on larg carrier , includ the rboc","ordered_present_kp":[110,163,288,359],"keyphrases":["Santera Systems","switching","Kerman Telephone","Rural Utilities Service"],"prmu":["P","P","P","P"]} {"id":"891","title":"Establishing the discipline of physics-based CMP modeling","abstract":"For the past decade, a physically based comprehensive process model for chemical mechanical polishing has eluded the semiconductor industry. However, a long-term collaborative effort has now resulted in a workable version of that approach. The highly fundamental model is based on advanced finite element analysis and is beginning to show promise in CMP process development","tok_text":"establish the disciplin of physics-bas cmp model \n for the past decad , a physic base comprehens process model for chemic mechan polish ha elud the semiconductor industri . howev , a long-term collabor effort ha now result in a workabl version of that approach . the highli fundament model is base on advanc finit element analysi and is begin to show promis in cmp process develop","ordered_present_kp":[115,39,308,361],"keyphrases":["CMP","chemical mechanical polishing","finite element analysis","CMP process development","physically based process model"],"prmu":["P","P","P","P","R"]} {"id":"1056","title":"Eliminating recency with self-review: the case of auditors' 'going concern' judgments","abstract":"This paper examines the use of self-review to debias recency. Recency is found in the 'going concern' judgments of staff auditors, but is successfully eliminated by the auditor's use of a simple self-review technique that would be extremely easy to implement in audit practice. 
Auditors who self-review are also less inclined to make audit report choices that are inconsistent with their going concern judgments. These results are important because the judgments of staff auditors often determine the type and extent of documentation in audit workpapers and serve as preliminary inputs for senior auditors' judgments and choices. If staff auditors' judgments are affected by recency, the impact of this bias may be impounded in the ultimate judgments and choices of senior auditors. Since biased judgments can expose auditors to significant costs involving extended audit procedures, legal liability and diminished reputation, simple debiasing techniques that reduce this exposure are valuable. The paper also explores some future research needs and other important issues concerning judgment debiasing in applied professional settings","tok_text":"elimin recenc with self-review : the case of auditor ' ' go concern ' judgment \n thi paper examin the use of self-review to debia recenc . recenc is found in the ' go concern ' judgment of staff auditor , but is success elimin by the auditor 's use of a simpl self-review techniqu that would be extrem easi to implement in audit practic . auditor who self-review are also less inclin to make audit report choic that are inconsist with their go concern judgment . these result are import becaus the judgment of staff auditor often determin the type and extent of document in audit workpap and serv as preliminari input for senior auditor ' judgment and choic . if staff auditor ' judgment are affect by recenc , the impact of thi bia may be impound in the ultim judgment and choic of senior auditor . sinc bias judgment can expos auditor to signific cost involv extend audit procedur , legal liabil and diminish reput , simpl debias techniqu that reduc thi exposur are valuabl . 
the paper also explor some futur research need and other import issu concern judgment debias in appli profession set","ordered_present_kp":[19,189,392,562,574,622,861,885,902,1055,1074],"keyphrases":["self-review","staff auditors","audit report choices","documentation","audit workpapers","senior auditors","extended audit procedures","legal liability","diminished reputation","judgment debiasing","applied professional settings","auditor going concern judgments","recency debiasing","accountability","probability judgments"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R","R","U","M"]} {"id":"1013","title":"A scalable intelligent takeoff controller for a simulated running jointed leg","abstract":"Running with jointed legs poses a difficult control problem in robotics. Neural controllers are attractive because they allow the robot to adapt to changing environmental conditions. However, scalability is an issue with many neural controllers. The paper describes the development of a scalable neurofuzzy controller for the takeoff phase of the running stride. Scalability is achieved by selecting a controller whose size does not grow with the dimensionality of the problem. Empirical results show that with proper design the takeoff controller scales from a leg with a single movable link to one with three movable links without a corresponding growth in size and without a loss of accuracy","tok_text":"a scalabl intellig takeoff control for a simul run joint leg \n run with joint leg pose a difficult control problem in robot . neural control are attract becaus they allow the robot to adapt to chang environment condit . howev , scalabl is an issu with mani neural control . the paper describ the develop of a scalabl neurofuzzi control for the takeoff phase of the run stride . scalabl is achiev by select a control whose size doe not grow with the dimension of the problem . 
empir result show that with proper design the takeoff control scale from a leg with a singl movabl link to one with three movabl link without a correspond growth in size and without a loss of accuraci","ordered_present_kp":[2,41,126,193,2,309,344,365],"keyphrases":["scalable intelligent takeoff controller","scalability","simulated running jointed leg","neural controllers","changing environmental conditions","scalable neurofuzzy controller","takeoff phase","running stride","intelligent robotic control"],"prmu":["P","P","P","P","P","P","P","P","R"]} {"id":"60","title":"Perceptual audio coding using adaptive pre- and post-filters and lossless compression","abstract":"This paper proposes a versatile perceptual audio coding method that achieves high compression ratios and is capable of low encoding\/decoding delay. It accommodates a variety of source signals (including both music and speech) with different sampling rates. It is based on separating irrelevance and redundancy reductions into independent functional units. This contrasts traditional audio coding where both are integrated within the same subband decomposition. The separation allows for the independent optimization of the irrelevance and redundancy reduction units. For both reductions, we rely on adaptive filtering and predictive coding as much as possible to minimize the delay. A psycho-acoustically controlled adaptive linear filter is used for the irrelevance reduction, and the redundancy reduction is carried out by a predictive lossless coding scheme, which is termed weighted cascaded least mean squared (WCLMS) method. Experiments are carried out on a database of moderate size which contains mono-signals of different sampling rates and varying nature (music, speech, or mixed). They show that the proposed WCLMS lossless coder outperforms other competing lossless coders in terms of compression ratios and delay, as applied to the pre-filtered signal. 
Moreover, a subjective listening test of the combined pre-filter\/lossless coder and a state-of-the-art perceptual audio coder (PAC) shows that the new method achieves a comparable compression ratio and audio quality with a lower delay","tok_text":"perceptu audio code use adapt pre- and post-filt and lossless compress \n thi paper propos a versatil perceptu audio code method that achiev high compress ratio and is capabl of low encod \/ decod delay . it accommod a varieti of sourc signal ( includ both music and speech ) with differ sampl rate . it is base on separ irrelev and redund reduct into independ function unit . thi contrast tradit audio code where both are integr within the same subband decomposit . the separ allow for the independ optim of the irrelev and redund reduct unit . for both reduct , we reli on adapt filter and predict code as much as possibl to minim the delay . a psycho-acoust control adapt linear filter is use for the irrelev reduct , and the redund reduct is carri out by a predict lossless code scheme , which is term weight cascad least mean squar ( wclm ) method . experi are carri out on a databas of moder size which contain mono-sign of differ sampl rate and vari natur ( music , speech , or mix ) . they show that the propos wclm lossless coder outperform other compet lossless coder in term of compress ratio and delay , as appli to the pre-filt signal . 
moreov , a subject listen test of the combin pre-filt \/ lossless coder and a state-of-the-art perceptu audio coder ( pac ) show that the new method achiev a compar compress ratio and audio qualiti with a lower delay","ordered_present_kp":[0,53,140,177,228,255,286,331,573,590,645,702,759,804,1017,1159,1193,1331],"keyphrases":["perceptual audio coding","lossless compression","high compression ratio","low encoding\/decoding delay","source signals","music","sampling rates","redundancy reduction","adaptive filtering","predictive coding","psycho-acoustically controlled adaptive linear filter","irrelevance reduction","predictive lossless coding","weighted cascaded least mean squared","WCLMS lossless coder","subjective listening test","pre-filter\/lossless coder","audio quality","adaptive pre-filters","adaptive post-filters"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R"]} {"id":"132","title":"A unified view for vector rotational CORDIC algorithms and architectures based on angle quantization approach","abstract":"Vector rotation is the key operation employed extensively in many digital signal processing applications. In this paper, we introduce a new design concept called Angle Quantization (AQ). It can be used as a design index for vector rotational operation, where the rotational angle is known in advance. Based on the AQ process, we establish a unified design framework for cost-effective low-latency rotational algorithms and architectures. Several existing works, such as conventional COordinate Rotational Digital Computer (CORDIC), AR-CORDIC, MVR-CORDIC, and EEAS-based CORDIC, can be fitted into the design framework, forming a Vector Rotational CORDIC Family. Moreover, we address four searching algorithms to solve the optimization problem encountered in the proposed vector rotational CORDIC family. The corresponding scaling operations of the CORDIC family are also discussed. 
Based on the new design framework, we can realize high-speed\/low-complexity rotational VLSI circuits, whereas without degrading the precision performance in fixed-point implementations","tok_text":"a unifi view for vector rotat cordic algorithm and architectur base on angl quantiz approach \n vector rotat is the key oper employ extens in mani digit signal process applic . in thi paper , we introduc a new design concept call angl quantiz ( aq ) . it can be use as a design index for vector rotat oper , where the rotat angl is known in advanc . base on the aq process , we establish a unifi design framework for cost-effect low-lat rotat algorithm and architectur . sever exist work , such as convent coordin rotat digit comput ( cordic ) , ar-cord , mvr-cordic , and eeas-bas cordic , can be fit into the design framework , form a vector rotat cordic famili . moreov , we address four search algorithm to solv the optim problem encount in the propos vector rotat cordic famili . the correspond scale oper of the cordic famili are also discuss . 
base on the new design framework , we can realiz high-spe \/ low-complex rotat vlsi circuit , wherea without degrad the precis perform in fixed-point implement","ordered_present_kp":[17,146,71,270,287,389,428,690,719,799,910,987],"keyphrases":["vector rotational CORDIC algorithms","angle quantization","digital signal processing applications","design index","vector rotational operation","unified design framework","low-latency rotational algorithms","searching algorithms","optimization problem","scaling operations","low-complexity rotational VLSI circuits","fixed-point implementations","DSP applications","greedy searching algorithm","low-latency rotational architectures","high-speed rotational VLSI circuits","trellis-based searching algorithm"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","M","M","R","R","M"]} {"id":"971","title":"Homogenization in L\/sup infinity \/","abstract":"Homogenization of deterministic control problems with L\/sup infinity \/ running cost is studied by viscosity solutions techniques. It is proved that the value function of an L\/sup infinity \/ problem in a medium with a periodic micro-structure converges uniformly on the compact sets to the value function of the homogenized problem as the period shrinks to 0. Our main convergence result extends that of Ishii (Stochastic Analysis, control, optimization and applications, pp. 305-324, Birkhauser Boston, Boston, MA, 1999.) to the case of a discontinuous Hamiltonian. The cell problem is solved, but, as nonuniqueness occurs, the effective Hamiltonian must be selected in a careful way. The paper also provides a representation formula for the effective Hamiltonian and gives illustrations to calculus of variations, averaging and one-dimensional problems","tok_text":"homogen in l \/ sup infin \/ \n homogen of determinist control problem with l \/ sup infin \/ run cost is studi by viscos solut techniqu . 
it is prove that the valu function of an l \/ sup infin \/ problem in a medium with a period micro-structur converg uniformli on the compact set to the valu function of the homogen problem as the period shrink to 0 . our main converg result extend that of ishii ( stochast analysi , control , optim and applic , pp . 305 - 324 , birkhaus boston , boston , ma , 1999 . ) to the case of a discontinu hamiltonian . the cell problem is solv , but , as nonuniqu occur , the effect hamiltonian must be select in a care way . the paper also provid a represent formula for the effect hamiltonian and give illustr to calculu of variat , averag and one-dimension problem","ordered_present_kp":[40,73,0,155,760,740,240,548],"keyphrases":["homogenization","deterministic control","L\/sup infinity \/ running cost","value function","convergence","cell problem","calculus of variations","averaging","optimal control"],"prmu":["P","P","P","P","P","P","P","P","R"]} {"id":"934","title":"Induced-shear piezoelectric actuators for rotor blade trailing edge flaps","abstract":"Much of the current rotorcraft research is focused on improving performance by reducing unwanted helicopter noise and vibration. One of the most promising active rotorcraft vibration control systems is an active trailing edge flap. In this paper, an induced-shear piezoelectric tube actuator is used in conjunction with a simple lever-cusp hinge amplification device to generate a useful combination of trailing edge flap deflections and hinge moments. A finite-element model of the actuator tube and trailing edge flap (including aerodynamic and inertial loading) was used to guide the design of the actuator-flap system. A full-scale induced shear tube actuator flap system was fabricated and bench top testing was conducted to validate the analysis. Hinge moments corresponding to various rotor speeds were applied to the actuator using mechanical springs. 
The testing demonstrated that for an applied electric field of 3 kV cm\/sup -1\/ the tube actuator deflected a representative full-scale 12 inch flap +or-2.8 degrees at 0 rpm and +or-1.4 degrees for a hinge moment simulating a 400 rpm condition. The per cent error between the predicted and experimental full-scale flap deflections ranged from 4% (low rpm) to 12.5% (large rpm). Increasing the electric field to 4 kV cm\/sup -1\/ results in +or-2.5 degrees flap deflection at a rotation speed of 400 rpm, according to the design analysis. A trade study was conducted to compare the performance of the piezoelectric tube actuator to the state of the art in trailing edge flap actuators and indicated that the induced-shear tube actuator shows promise as a trailing edge flap actuator","tok_text":"induced-shear piezoelectr actuat for rotor blade trail edg flap \n much of the current rotorcraft research is focus on improv perform by reduc unwant helicopt nois and vibrat . one of the most promis activ rotorcraft vibrat control system is an activ trail edg flap . in thi paper , an induced-shear piezoelectr tube actuat is use in conjunct with a simpl lever-cusp hing amplif devic to gener a use combin of trail edg flap deflect and hing moment . a finite-el model of the actuat tube and trail edg flap ( includ aerodynam and inerti load ) wa use to guid the design of the actuator-flap system . a full-scal induc shear tube actuat flap system wa fabric and bench top test wa conduct to valid the analysi . hing moment correspond to variou rotor speed were appli to the actuat use mechan spring . the test demonstr that for an appli electr field of 3 kv cm \/ sup -1\/ the tube actuat deflect a repres full-scal 12 inch flap + or-2.8 degre at 0 rpm and + or-1.4 degre for a hing moment simul a 400 rpm condit . the per cent error between the predict and experiment full-scal flap deflect rang from 4 % ( low rpm ) to 12.5 % ( larg rpm ) . 
increas the electr field to 4 kv cm \/ sup -1\/ result in + or-2.5 degre flap deflect at a rotat speed of 400 rpm , accord to the design analysi . a trade studi wa conduct to compar the perform of the piezoelectr tube actuat to the state of the art in trail edg flap actuat and indic that the induced-shear tube actuat show promis as a trail edg flap actuat","ordered_present_kp":[86,149,216,244,355,452,913,529,562,617,661,299,1431,913],"keyphrases":["rotorcraft","helicopter noise","vibration control","active trailing edge flap","piezoelectric tube actuator","lever-cusp hinge amplification device","finite-element model","inertial loading","design","shear tube actuator flap","bench top testing","12 inch flap","12 inch","induced-shear tube actuator","helicopter vibration","aerodynamic loading"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R"]} {"id":"1232","title":"Techniques for compiling and implementing all NAS parallel benchmarks in HPF","abstract":"The NAS parallel benchmarks (NPB) are a well-known benchmark set for high-performance machines. Much effort has been made to implement them in High-Performance Fortran (HPF). In previous attempts, however, the HPF versions did not include the complete set of benchmarks, and the performance was not always good. In this study, we implement all eight benchmarks of the NPB in HPF, and parallelize them using an HPF compiler that we have developed. This report describes the implementation techniques and compiler features necessary to achieve good performance. We evaluate the HPF version on the Hitachi SR2201, a distributed-memory parallel machine. With 16 processors, the execution time of the HPF version is within a factor of 1.5 of the hand-parallelized version of the NPB 2.3 beta","tok_text":"techniqu for compil and implement all na parallel benchmark in hpf \n the na parallel benchmark ( npb ) are a well-known benchmark set for high-perform machin . 
much effort ha been made to implement them in high-perform fortran ( hpf ) . in previou attempt , howev , the hpf version did not includ the complet set of benchmark , and the perform wa not alway good . in thi studi , we implement all eight benchmark of the npb in hpf , and parallel them use an hpf compil that we have develop . thi report describ the implement techniqu and compil featur necessari to achiev good perform . we evalu the hpf version on the hitachi sr2201 , a distributed-memori parallel machin . with 16 processor , the execut time of the hpf version is within a factor of 1.5 of the hand-parallel version of the npb 2.3 beta","ordered_present_kp":[38,138,13,457],"keyphrases":["compiler","NAS parallel benchmarks","high-performance machines","HPF compiler","distributed-memory parallel supercomputers"],"prmu":["P","P","P","P","M"]} {"id":"1277","title":"Dynamic modification of object Petri nets. An application to modelling protocols with fork-join structures","abstract":"In this paper we discuss possibilities of modelling protocols by objects in object-based high-level Petri nets. Some advantages of dynamically modifying the structure of token objects are discussed and the need for further investigations into mathematically rigorous foundations of object net formalisms incorporating facilities for such operations on its token nets is emphasised","tok_text":"dynam modif of object petri net . an applic to model protocol with fork-join structur \n in thi paper we discuss possibl of model protocol by object in object-bas high-level petri net . 
some advantag of dynam modifi the structur of token object are discuss and the need for further investig into mathemat rigor foundat of object net formal incorpor facil for such oper on it token net is emphasis","ordered_present_kp":[0,15,53,67,231,295,321],"keyphrases":["dynamic modification","object Petri nets","protocols","fork-join structures","token objects","mathematically rigorous foundations","object net formalisms"],"prmu":["P","P","P","P","P","P","P"]} {"id":"622","title":"Source\/channel coding of still images using lapped transforms and block classification","abstract":"A novel scheme for joint source\/channel coding of still images is proposed. By using efficient lapped transforms, channel-optimised robust quantisers and classification methods it is shown that significant improvements over traditional source\/channel coding of images can be obtained while keeping the complexity low","tok_text":"sourc \/ channel code of still imag use lap transform and block classif \n a novel scheme for joint sourc \/ channel code of still imag is propos . by use effici lap transform , channel-optimis robust quantis and classif method it is shown that signific improv over tradit sourc \/ channel code of imag can be obtain while keep the complex low","ordered_present_kp":[24,39,57,175],"keyphrases":["still images","lapped transforms","block classification","channel-optimised robust quantisers","joint source-channel coding","image coding","low complexity"],"prmu":["P","P","P","P","M","R","R"]} {"id":"909","title":"Influence of the process design on the control strategy: application in electropneumatic field","abstract":"This article proposes an example of electropneumatic system where the architecture of the process is modified with respect to both the specifications for position and velocity tracking and a criterion concerning the energy consumption. Experimental results are compared and analyzed using an industrial bench test. 
For this, a complete model of the system is presented, and two kinds of nonlinear control laws are developed, a monovariable and multivariable type based on the flatness theory","tok_text":"influenc of the process design on the control strategi : applic in electropneumat field \n thi articl propos an exampl of electropneumat system where the architectur of the process is modifi with respect to both the specif for posit and veloc track and a criterion concern the energi consumpt . experiment result are compar and analyz use an industri bench test . for thi , a complet model of the system is present , and two kind of nonlinear control law are develop , a monovari and multivari type base on the flat theori","ordered_present_kp":[121,242,276,432,510],"keyphrases":["electropneumatic systems","tracking","energy consumption","nonlinear control","flatness theory","positioning systems","position control","monovariable control","multivariable control","velocity control"],"prmu":["P","P","P","P","P","R","R","R","R","R"]} {"id":"1133","title":"An analytic center cutting plane method for semidefinite feasibility problems","abstract":"Semidefinite feasibility problems arise in many areas of operations research. The abstract form of these problems can be described as finding a point in a nonempty bounded convex body Gamma in the cone of symmetric positive semidefinite matrices. Assume that Gamma is defined by an oracle, which for any given m * m symmetric positive semidefinite matrix Gamma either confirms that Y epsilon Gamma or returns a cut, i.e., a symmetric matrix A such that Gamma is in the half-space {Y : A . Y < phi |+1-x\/D\/sup N\/I\/sub D\/N, where x in [0, 1], D = 2S + 1, I\/sub D\/N is the D\/sup N\/ * D\/sup N\/ unity matrix and | phi > is a special entangled state. The cases x = 0 and x = 1 correspond respectively to fully random spins and to a fully entangled state. 
In the first of these series we consider special states | phi > invariant under charge conjugation, that generalizes the N = 2 spin S = 1\/2 Einstein-Podolsky-Rosen state, and in the second one we consider generalizations of the Werner (1989) density matrices. The evaluation of the critical point x\/sub c\/ was done through bounds coming from the partial transposition method of Peres (1996) and the conditional nonextensive entropy criterion. Our results suggest the conjecture that whenever the bounds coming from both methods coincide the result of x\/sub c\/ is the exact one. The results we present are relevant for the discussion of quantum computing, teleportation and cryptography","tok_text":"frontier between separ and quantum entangl in a mani spin system \n we discuss the critic point x \/ sub c\/ separ the quantum entangl and separ state in two seri of n spin s in the simpl mix state character by the matrix oper rho = x| phi > < phi |+1-x \/ d \/ sup n \/ i \/ sub d \/ n , where x in [ 0 , 1 ] , d = 2s + 1 , i \/ sub d \/ n is the d \/ sup n\/ * d \/ sup n\/ uniti matrix and | phi > is a special entangl state . the case x = 0 and x = 1 correspond respect to fulli random spin and to a fulli entangl state . in the first of these seri we consid special state | phi > invari under charg conjug , that gener the n = 2 spin s = 1\/2 einstein-podolsky-rosen state , and in the second one we consid gener of the werner ( 1989 ) densiti matric . the evalu of the critic point x \/ sub c\/ wa done through bound come from the partial transposit method of pere ( 1996 ) and the condit nonextens entropi criterion . our result suggest the conjectur that whenev the bound come from both method coincid the result of x \/ sub c\/ is the exact one . 
the result we present are relev for the discuss of quantum comput , teleport and cryptographi","ordered_present_kp":[17,27,48,136,212,362,400,469,584,633,82,820,878,1088,1105,1118],"keyphrases":["separability","quantum entanglement","many spin system","critical point","separable states","matrix operator","unity matrix","entangled state","random spin","charge conjugation","Einstein-Podolsky-Rosen state","partial transposition method","nonextensive entropy criterion","quantum computing","teleportation","cryptography","Werner density matrices"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"955","title":"From the DOS dog days to e-filing [law firms]","abstract":"The poster child for a successful e-filing venture is the Case Management and Electronic Case File system now rolling through the district and bankruptcy courts. A project of the Administrative Office of the United States Courts, CM\/ECF is a loud proponent of the benefits of the PDF approach and it has a full head of steam. Present plans are for all federal courts to implement CM\/ECF by 2005. That means a radical shift in methodology and tools for a lot of lawyers. It also means that you should get cozy with Acrobat real soon","tok_text":"from the do dog day to e-fil [ law firm ] \n the poster child for a success e-fil ventur is the case manag and electron case file system now roll through the district and bankruptci court . a project of the administr offic of the unit state court , cm \/ ecf is a loud propon of the benefit of the pdf approach and it ha a full head of steam . present plan are for all feder court to implement cm \/ ecf by 2005 . that mean a radic shift in methodolog and tool for a lot of lawyer . 
it also mean that you should get cozi with acrobat real soon","ordered_present_kp":[23,95,229,296],"keyphrases":["e-filing","Case Management and Electronic Case File system","United States Courts","PDF","Adobe Acrobat"],"prmu":["P","P","P","P","M"]} {"id":"910","title":"Control of a heavy-duty robotic excavator using time delay control with integral sliding surface","abstract":"The control of a robotic excavator is difficult from the standpoint of the following problems: parameter variations in mechanical structures, various nonlinearities in hydraulic actuators and disturbance due to the contact with the ground. In addition, the more the size of robotic excavators increases, the more the length and mass of the excavator links; the more the parameters of a heavy-duty excavator vary. A time-delay control with switching action (TDCSA) using an integral sliding surface is proposed in this paper for the control of a 21-ton robotic excavator. Through analysis and experiments, we show that using an integral sliding surface for the switching action of TDCSA is better than using a PD-type sliding surface. The proposed controller is applied to straight-line motions of a 21-ton robotic excavator with a speed level at which skillful operators work. Experiments, which were designed for surfaces with various inclinations and over broad ranges of joint motions, show that the proposed controller exhibits good performance","tok_text":"control of a heavy-duti robot excav use time delay control with integr slide surfac \n the control of a robot excav is difficult from the standpoint of the follow problem : paramet variat in mechan structur , variou nonlinear in hydraul actuat and disturb due to the contact with the ground . in addit , the more the size of robot excav increas , the more the length and mass of the excav link ; the more the paramet of a heavy-duti excav vari . 
a time-delay control with switch action ( tdcsa ) use an integr slide surfac is propos in thi paper for the control of a 21-ton robot excav . through analysi and experi , we show that use an integr slide surfac for the switch action of tdcsa is better than use a pd-type slide surfac . the propos control is appli to straight-lin motion of a 21-ton robot excav with a speed level at which skill oper work . experi , which were design for surfac with variou inclin and over broad rang of joint motion , show that the propos control exhibit good perform","ordered_present_kp":[447,24,64],"keyphrases":["robotic excavator","integral sliding surface","time-delay control","robust control","motion control","trajectory control","dynamics","tracking","pressure control"],"prmu":["P","P","P","M","R","M","U","U","M"]} {"id":"582","title":"Optimal estimation of a finite sample of a discrete chaotic process","abstract":"The synthesis of optimal algorithms for estimating discrete chaotic processes specified by a finite sample is considered; various possible approaches are discussed. Expressions determining the potential accuracy in estimating a single value of the chaotic process are derived. An example of the application of the general equations obtained is given","tok_text":"optim estim of a finit sampl of a discret chaotic process \n the synthesi of optim algorithm for estim discret chaotic process specifi by a finit sampl is consid ; variou possibl approach are discuss . express determin the potenti accuraci in estim a singl valu of the chaotic process are deriv . 
an exampl of the applic of the gener equat obtain is given","ordered_present_kp":[0,17,34],"keyphrases":["optimal estimation","finite sample","discrete chaotic process","optimal algorithm synthesis","space-time filtering"],"prmu":["P","P","P","R","U"]} {"id":"1192","title":"Construction of two-sided bounds for initial-boundary value problems","abstract":"This paper extends the bounding operator approach developed for boundary value problems to the case of initial-boundary value problems (IBVPs). Following the general principle of bounding operators enclosing methods for the case of partial differential equations are discussed. In particular, continuous discretization methods with an appropriate error bound controlled shift and monotone extensions of Rothe's method for parabolic problems are investigated","tok_text":"construct of two-sid bound for initial-boundari valu problem \n thi paper extend the bound oper approach develop for boundari valu problem to the case of initial-boundari valu problem ( ibvp ) . follow the gener principl of bound oper enclos method for the case of partial differenti equat are discuss . in particular , continu discret method with an appropri error bound control shift and monoton extens of roth 's method for parabol problem are investig","ordered_present_kp":[13,31,84,84,264,426],"keyphrases":["two-sided bounds","initial-boundary value problems","bounding operator approach","bounding operators","partial differential equations","parabolic problems"],"prmu":["P","P","P","P","P","P"]} {"id":"683","title":"Knowledge management","abstract":"The article defines knowledge management, discusses its role, and describes its functions. It also explains the principles of knowledge management, enumerates the strategies involved in knowledge management, and traces its history in brief. The focus is on its interdisciplinary nature. The steps involved in knowledge management i.e. 
identifying, collecting and capturing, selecting, organizing and storing, sharing, applying, and creating, are explained. The pattern of knowledge management initiatives is also considered","tok_text":"knowledg manag \n the articl defin knowledg manag , discuss it role , and describ it function . it also explain the principl of knowledg manag , enumer the strategi involv in knowledg manag , and trace it histori in brief . the focu is on it interdisciplinari natur . the step involv in knowledg manag i.e. identifi , collect and captur , select , organ and store , share , appli , and creat , are explain . the pattern of knowledg manag initi is also consid","ordered_present_kp":[0],"keyphrases":["knowledge management"],"prmu":["P"]} {"id":"1293","title":"Truss topology optimization by a modified genetic algorithm","abstract":"This paper describes the use of a stochastic search procedure based on genetic algorithms for developing near-optimal topologies of load-bearing truss structures. Most existing cases these publications express the truss topology as a combination of members. These methods, however, have the disadvantage that the resulting topology may include needless members or those which overlap other members. In addition to these problems, the generated structures are not necessarily structurally stable. A new method, which resolves these problems by expressing the truss topology as a combination of triangles, is proposed in this paper. Details of the proposed methodology are presented as well as the results of numerical examples that clearly show the effectiveness and efficiency of the method","tok_text":"truss topolog optim by a modifi genet algorithm \n thi paper describ the use of a stochast search procedur base on genet algorithm for develop near-optim topolog of load-bear truss structur . most exist case these public express the truss topolog as a combin of member . 
these method , howev , have the disadvantag that the result topolog may includ needless member or those which overlap other member . in addit to these problem , the gener structur are not necessarili structur stabl . a new method , which resolv these problem by express the truss topolog as a combin of triangl , is propos in thi paper . detail of the propos methodolog are present as well as the result of numer exampl that clearli show the effect and effici of the method","ordered_present_kp":[81,25,142,164,0,573],"keyphrases":["truss topology optimization","modified genetic algorithm","stochastic search procedure","near-optimal topologies","load-bearing truss structures","triangles"],"prmu":["P","P","P","P","P","P"]} {"id":"1422","title":"Taxonomy's role in content management","abstract":"A taxonomy is simply a way of classifying things. Still, there is a rapidly growing list of vendors offering taxonomy software and related applications. They promise many benefits, especially to enterprise customers: Content management will be more efficient. Corporate portals will be enhanced by easily created Yahoo!-like directories of internal information. And the end-user experience will be dramatically improved by more successful content retrieval and more effective knowledge discovery. But today's taxonomy products represent emerging technologies. They are not out-of-the-box solutions. And even the most automated systems require some manual assistance from people who know how to classify content","tok_text":"taxonomi 's role in content manag \n a taxonomi is simpli a way of classifi thing . still , there is a rapidli grow list of vendor offer taxonomi softwar and relat applic . they promis mani benefit , especi to enterpris custom : content manag will be more effici . corpor portal will be enhanc by easili creat yahoo!-lik directori of intern inform . and the end-us experi will be dramat improv by more success content retriev and more effect knowledg discoveri . 
but today 's taxonomi product repres emerg technolog . they are not out-of-the-box solut . and even the most autom system requir some manual assist from peopl who know how to classifi content","ordered_present_kp":[136,209,20,264,333,434],"keyphrases":["content management","taxonomy software","enterprise customers","corporate portals","internal information","effective knowledge discovery","taxonomy applications"],"prmu":["P","P","P","P","P","P","R"]} {"id":"834","title":"Commerce Department plan eases 3G spectrum crunch","abstract":"The federal government made its first move last week toward cleaning up a spectrum allocation system that was in shambles just a year ago and had some, spectrum-starved wireless carriers fearing they wouldn't be able to compete in third-generation services. The move, however, is far from complete and leaves numerous details unsettled","tok_text":"commerc depart plan eas 3 g spectrum crunch \n the feder govern made it first move last week toward clean up a spectrum alloc system that wa in shambl just a year ago and had some , spectrum-starv wireless carrier fear they would n't be abl to compet in third-gener servic . the move , howev , is far from complet and leav numer detail unsettl","ordered_present_kp":[24,50,110,196],"keyphrases":["3G spectrum","federal government","spectrum allocation system","wireless carriers"],"prmu":["P","P","P","P"]} {"id":"871","title":"Priming the pipeline [women in computer science careers]","abstract":"In 1997 The Backyard Project, a pilot program of the Garnett Foundation, was instituted to encourage high school girls to explore careers in the computer industry. At that time, the Garnett Foundation commissioned the Global Strategy Group to execute a survey of 652 college-bound high school students (grades 9 through 12), to help discover directions that The Backyard Project might take to try to move toward the mission of the pilot program. 
It conducted the study by telephone between March 25 and April 8, 1997 in the Silicon Valley, Boston, and Austin metropolitan areas. It conducted all interviews using a random digit dialing methodology, derived from a file of American households with high incidences of adolescent children. The top six answers from girls to the survey question \"why are girls less likely to pursue computer science careers?\" in order of perceived importance by the girls were: not enough role models; women have other interests; didn't know about the industry; limited opportunity; negative media; and too nerdy. These responses are discussed","tok_text":"prime the pipelin [ women in comput scienc career ] \n in 1997 the backyard project , a pilot program of the garnett foundat , wa institut to encourag high school girl to explor career in the comput industri . at that time , the garnett foundat commiss the global strategi group to execut a survey of 652 college-bound high school student ( grade 9 through 12 ) , to help discov direct that the backyard project might take to tri to move toward the mission of the pilot program . it conduct the studi by telephon between march 25 and april 8 , 1997 in the silicon valley , boston , and austin metropolitan area . it conduct all interview use a random digit dial methodolog , deriv from a file of american household with high incid of adolesc children . the top six answer from girl to the survey question \" whi are girl less like to pursu comput scienc career ? \" in order of perceiv import by the girl were : not enough role model ; women have other interest ; did n't know about the industri ; limit opportun ; neg media ; and too nerdi . 
these respons are discuss","ordered_present_kp":[62,150,304],"keyphrases":["The Backyard Project","high school girls","college-bound high school students","computer industry careers"],"prmu":["P","P","P","R"]} {"id":"929","title":"Closed loop finite-element modeling of active constrained layer damping in the time domain analysis","abstract":"A three-dimensional finite-element closed-loop model has been developed to predict the effects of active-passive damping on a vibrating structure. The Golla-Hughes-McTavish method is employed to capture the viscoelastic material behavior in a time domain analysis. The parametric study includes the different control gains as well as geometric parameters related to the active constrained layer damping (ACLD) treatment. Comparisons are made among several ACLD models, the passive constrained model and the active damping model. The results obtained here reiterate that ACLD is somewhat better for vibration suppression than either the purely passive or the active system and provides higher structural damping with less control gain when compared to the purely active system. Since the ACLD performance can be reduced by the viscoelastic layer, the design of the ACLD model must be given a careful consideration in order to optimize the effect of passive damping","tok_text":"close loop finite-el model of activ constrain layer damp in the time domain analysi \n a three-dimension finite-el closed-loop model ha been develop to predict the effect of active-pass damp on a vibrat structur . the golla-hughes-mctavish method is employ to captur the viscoelast materi behavior in a time domain analysi . the parametr studi includ the differ control gain as well as geometr paramet relat to the activ constrain layer damp ( acld ) treatment . comparison are made among sever acld model , the passiv constrain model and the activ damp model . 
the result obtain here reiter that acld is somewhat better for vibrat suppress than either the pure passiv or the activ system and provid higher structur damp with less control gain when compar to the pure activ system . sinc the acld perform can be reduc by the viscoelast layer , the design of the acld model must be given a care consider in order to optim the effect of passiv damp","ordered_present_kp":[88,217,270,64,30,494,511,542,934,624,706,824],"keyphrases":["active constrained layer damping","time domain analysis","three-dimensional finite-element closed-loop model","Golla-Hughes-McTavish method","viscoelastic material","ACLD models","passive constrained model","active damping model","vibration suppression","structural damping","viscoelastic layer","passive damping"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1212","title":"TCRM: diagnosing tuple inconsistency for granulized datasets","abstract":"Many approaches to granularization have been presented for knowledge discovery. However, the inconsistent tuples that exist in granulized datasets are hardly ever revealed. We developed a model, tuple consistency recognition model (TCRM) to help efficiently detect inconsistent tuples for datasets that are granulized. The main outputs of the developed model include explored inconsistent tuples and consumed processing time. We further conducted an empirical test where eighteen continuous real-life datasets granulized by the equal width interval technique that embedded S-plus histogram binning algorithm (SHBA) and largest binning size algorithm (LBSA) binning algorithms were diagnosed. Remarkable results: almost 40% of the granulized datasets contain inconsistent tuples and 22% have the amount of inconsistent tuples more than 20%","tok_text":"tcrm : diagnos tupl inconsist for granul dataset \n mani approach to granular have been present for knowledg discoveri . howev , the inconsist tupl that exist in granul dataset are hardli ever reveal . 
we develop a model , tupl consist recognit model ( tcrm ) to help effici detect inconsist tupl for dataset that are granul . the main output of the develop model includ explor inconsist tupl and consum process time . we further conduct an empir test where eighteen continu real-lif dataset granul by the equal width interv techniqu that embed s-plu histogram bin algorithm ( shba ) and largest bin size algorithm ( lbsa ) bin algorithm were diagnos . remark result : almost 40 % of the granul dataset contain inconsist tupl and 22 % have the amount of inconsist tupl more than 20 %","ordered_present_kp":[0,15,34,68,99,222,403,505,544,587],"keyphrases":["TCRM","tuple inconsistency","granulized datasets","granularization","knowledge discovery","tuple consistency recognition model","processing time","equal width interval technique","S-plus histogram binning algorithm","largest binning size algorithm","relational database","large database","SQL"],"prmu":["P","P","P","P","P","P","P","P","P","P","U","U","U"]} {"id":"1257","title":"Definition of a similarity measure between cases based on auto\/cross-fuzzy thesauri","abstract":"A similarity measure between cases is needed in order to evaluate the degree of similarity when using past similar cases in order to resolve current problems. In similar case retrieval, multiple indices are set up in order to characterize the queries and individual cases, then terms are given as values to each. The similarity measure between cases commonly used is defined using the rate at which the values provided from the corresponding indices match. In practice, however, values cannot be expected to be mutually exclusive. As a result, a natural expansion of this approach is to have relationships in which mutually similar meanings are reflected in the similarity measure between cases. 
In this paper the authors consider an auto-fuzzy thesaurus which gives the relationship for values between corresponding indices and a cross-fuzzy thesaurus which gives the relationship for values between mutually distinct indices, then defines a similarity measure between cases which considers the relationship of index values based on these thesauri. This definition satisfies the characteristics required for the operation of case-based retrieval even when one value is not necessarily given in the index. Finally, using a test similar case retrieval system, the authors perform a comparative analysis of the proposed similarity measure between cases and a conventional approach","tok_text":"definit of a similar measur between case base on auto \/ cross-fuzzi thesauri \n a similar measur between case is need in order to evalu the degre of similar when use past similar case in order to resolv current problem . in similar case retriev , multipl indic are set up in order to character the queri and individu case , then term are given as valu to each . the similar measur between case commonli use is defin use the rate at which the valu provid from the correspond indic match . in practic , howev , valu can not be expect to be mutual exclus . as a result , a natur expans of thi approach is to have relationship in which mutual similar mean are reflect in the similar measur between case . in thi paper the author consid an auto-fuzzi thesauru which give the relationship for valu between correspond indic and a cross-fuzzi thesauru which give the relationship for valu between mutual distinct indic , then defin a similar measur between case which consid the relationship of index valu base on these thesauri . thi definit satisfi the characterist requir for the oper of case-bas retriev even when one valu is not necessarili given in the index . 
final , use a test similar case retriev system , the author perform a compar analysi of the propos similar measur between case and a convent approach","ordered_present_kp":[462,888,1082,223,734,822],"keyphrases":["similar case retrieval","corresponding indices","auto-fuzzy thesaurus","cross-fuzzy thesaurus","mutually distinct indices","case-based retrieval","case similarity measure","relationship indices","decision making support system","problem solving"],"prmu":["P","P","P","P","P","P","R","R","M","M"]} {"id":"602","title":"Image fusion between \/sup 18\/FDG-PET and MRI\/CT for radiotherapy planning of oropharyngeal and nasopharyngeal carcinomas","abstract":"Accurate diagnosis of tumor extent is important in three-dimensional conformal radiotherapy. This study reports the use of image fusion between (18)F-fluoro-2-deoxy-D-glucose positron emission tomography (\/sup 18\/FDG-PET) and magnetic resonance imaging\/computed tomography (MRI\/CT) for better targets delineation in radiotherapy planning of head-and-neck cancers. The subjects consisted of 12 patients with oropharyngeal carcinoma and 9 patients with nasopharyngeal carcinoma (NPC) who were treated with radical radiotherapy between July 1999 and February 2001. Image fusion between \/sup 18\/FDG-PET and MRI\/CT was performed using an automatic multimodality image registration algorithm, which used the brain as an internal reference for registration. Gross tumor volume (GTV) was determined based on clinical examination and \/sup 18\/FDG uptake on the fusion images. Clinical target volume (CTV) was determined following the usual pattern of lymph node spread for each disease entity along with the clinical presentation of each patient. Except for 3 cases with superficial tumors, all the other primary tumors were detected by \/sup 18\/FDG-PET. The GTV volumes for primary tumors were not changed by image fusion in 19 cases (89%), increased by 49% in one NPC, and decreased by 45% in another NPC. 
Normal tissue sparing was more easily performed based on clearer GTV and CTV determination on the fusion images. In particular, parotid sparing became possible in 15 patients (71%) whose upper neck areas near the parotid glands were tumor-free by \/sup 18\/FDG-PET. Within a mean follow-up period of 18 months, no recurrence occurred in the areas defined as CTV, which was treated prophylactically, except for 1 patient who experienced nodal recurrence in the CTV and simultaneous primary site recurrence. In conclusion, this preliminary study showed that image fusion between \/sup 18\/FDG-PET and MRI\/CT was useful in GTV and CTV determination in conformal RT, thus sparing normal tissues","tok_text":"imag fusion between \/sup 18 \/ fdg-pet and mri \/ ct for radiotherapi plan of oropharyng and nasopharyng carcinoma \n accur diagnosi of tumor extent is import in three-dimension conform radiotherapi . thi studi report the use of imag fusion between ( 18)f-fluoro-2-deoxy-d-glucos positron emiss tomographi ( \/sup 18 \/ fdg-pet ) and magnet reson imag \/ comput tomographi ( mri \/ ct ) for better target delin in radiotherapi plan of head-and-neck cancer . the subject consist of 12 patient with oropharyng carcinoma and 9 patient with nasopharyng carcinoma ( npc ) who were treat with radic radiotherapi between juli 1999 and februari 2001 . imag fusion between \/sup 18 \/ fdg-pet and mri \/ ct wa perform use an automat multimod imag registr algorithm , which use the brain as an intern refer for registr . gross tumor volum ( gtv ) wa determin base on clinic examin and \/sup 18 \/ fdg uptak on the fusion imag . clinic target volum ( ctv ) wa determin follow the usual pattern of lymph node spread for each diseas entiti along with the clinic present of each patient . except for 3 case with superfici tumor , all the other primari tumor were detect by \/sup 18 \/ fdg-pet . 
the gtv volum for primari tumor were not chang by imag fusion in 19 case ( 89 % ) , increas by 49 % in one npc , and decreas by 45 % in anoth npc . normal tissu spare wa more easili perform base on clearer gtv and ctv determin on the fusion imag . in particular , parotid spare becam possibl in 15 patient ( 71 % ) whose upper neck area near the parotid gland were tumor-fre by \/sup 18 \/ fdg-pet . within a mean follow-up period of 18 month , no recurr occur in the area defin as ctv , which wa treat prophylact , except for 1 patient who experienc nodal recurr in the ctv and simultan primari site recurr . in conclus , thi preliminari studi show that imag fusion between \/sup 18 \/ fdg-pet and mri \/ ct wa use in gtv and ctv determin in conform rt , thu spare normal tissu","ordered_present_kp":[0,20,42,55,91,490,1513,1744,1315,1086,1118],"keyphrases":["image fusion","\/sup 18\/FDG-PET","MRI\/CT","radiotherapy planning","nasopharyngeal carcinomas","oropharyngeal carcinomas","superficial tumors","primary tumors","normal tissues sparing","parotid glands","simultaneous primary site recurrence","F"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","U"]} {"id":"647","title":"Experimental design methodology and data analysis technique applied to optimise an organic synthesis","abstract":"The study was aimed at maximising the yield of a Michaelis-Becker dibromoalkane monophosphorylation reaction. In order to save time and money, we first applied a full factorial experimental design to search for the optimum conditions while performing a small number of experiments. We then used the principal component analysis (PCA) technique to evidence two uncontrolled factors. Lastly, a special experimental design that took into account all the influential factors allowed us to determine the maximum-yield experimental conditions. 
This study also evidenced the complementary nature of experimental design methodology and data analysis techniques","tok_text":"experiment design methodolog and data analysi techniqu appli to optimis an organ synthesi \n the studi wa aim at maximis the yield of a michaelis-beck dibromoalkan monophosphoryl reaction . in order to save time and money , we first appli a full factori experiment design to search for the optimum condit while perform a small number of experi . we then use the princip compon analysi ( pca ) techniqu to evid two uncontrol factor . lastli , a special experiment design that took into account all the influenti factor allow us to determin the maximum-yield experiment condit . thi studi also evidenc the complementari natur of experiment design methodolog and data analysi techniqu","ordered_present_kp":[135,240,289,33,75,361,413,542],"keyphrases":["data analysis technique","organic synthesis","Michaelis-Becker dibromoalkane monophosphorylation reaction","full factorial experimental design","optimum conditions","principal component analysis","uncontrolled factors","maximum-yield experimental conditions"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"80","title":"Evaluating the performance of a distributed database of repetitive elements in complete genomes","abstract":"The original version of the Repeat Sequence Database (RSDB) was created based on centralized database systems (CDBSs). RSDB presently includes an enormous amount of data, with the amount of biological data increasing rapidly. Distributed RSDB (DRSDB) is developed to yield better performance. This study proposed many approaches to data distribution and experimentally determines the best approach to obtain good performance of our database. 
Experimental results indicate that DRSDB performs well for particular types of query","tok_text":"evalu the perform of a distribut databas of repetit element in complet genom \n the origin version of the repeat sequenc databas ( rsdb ) wa creat base on central databas system ( cdbss ) . rsdb present includ an enorm amount of data , with the amount of biolog data increas rapidli . distribut rsdb ( drsdb ) is develop to yield better perform . thi studi propos mani approach to data distribut and experiment determin the best approach to obtain good perform of our databas . experiment result indic that drsdb perform well for particular type of queri","ordered_present_kp":[254,380,548,63,44],"keyphrases":["repetitive elements","complete genomes","biological data","data distribution","queries","distributed Repeat Sequence Database","performance evaluation"],"prmu":["P","P","P","P","P","R","R"]} {"id":"1113","title":"Word spotting based on a posterior measure of keyword confidence","abstract":"In this paper, an approach of keyword confidence estimation is developed that well combines acoustic layer scores and syllable-based statistical language model (LM) scores. An a posteriori (AP) confidence measure and its forward-backward calculating algorithm are deduced. A zero false alarm (ZFA) assumption is proposed for evaluating relative confidence measures by word spotting task. In a word spotting experiment with a vocabulary of 240 keywords, the keyword accuracy under the AP measure is above 94%, which well approaches its theoretical upper limit. In addition, a syllable lattice Hidden Markov Model (SLHMM) is formulated and a unified view of confidence estimation, word spotting, optimal path search, and N-best syllable re-scoring is presented. 
The proposed AP measure can be easily applied to various speech recognition systems as well","tok_text":"word spot base on a posterior measur of keyword confid \n in thi paper , an approach of keyword confid estim is develop that well combin acoust layer score and syllable-bas statist languag model ( lm ) score . an a posteriori ( ap ) confid measur and it forward-backward calcul algorithm are deduc . a zero fals alarm ( zfa ) assumpt is propos for evalu rel confid measur by word spot task . in a word spot experi with a vocabulari of 240 keyword , the keyword accuraci under the ap measur is abov 94 % , which well approach it theoret upper limit . in addit , a syllabl lattic hidden markov model ( slhmm ) is formul and a unifi view of confid estim , word spot , optim path search , and n-best syllabl re-scor is present . the propos ap measur can be easili appli to variou speech recognit system as well","ordered_present_kp":[0,18,40,136,253,353,374,562,95,664,688,775],"keyphrases":["word spotting","a posterior measure","keyword confidence","confidence estimation","acoustic layer scores","forward-backward calculating algorithm","relative confidence measures","word spotting task","syllable lattice hidden Markov model","optimal path search","N-best syllable re-scoring","speech recognition systems","syllable-based statistical language model scores","a posteriori confidence measure","zero false alarm assumption"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R","R","R"]} {"id":"1156","title":"Favorable noise uniformity properties of Fourier-based interpolation and reconstruction approaches in single-slice helical computed tomography","abstract":"Volumes reconstructed by standard methods from single-slice helical computed tomography (CT) data have been shown to have noise levels that are highly nonuniform relative to those in conventional CT. 
These noise nonuniformities can affect low-contrast object detectability and have also been identified as the cause of the zebra artifacts that plague maximum intensity projection (MIP) images of such volumes. While these spatially variant noise levels have their root in the peculiarities of the helical scan geometry, there is also a strong dependence on the interpolation and reconstruction algorithms employed. In this paper, we seek to develop image reconstruction strategies that eliminate or reduce, at its source, the nonuniformity of noise levels in helical CT relative to that in conventional CT. We pursue two approaches, independently and in concert. We argue, and verify, that Fourier-based longitudinal interpolation approaches lead to more uniform noise ratios than do the standard 360LI and 180LI approaches. We also demonstrate that a Fourier-based fan-to-parallel rebinning algorithm, used as an alternative to fanbeam filtered backprojection for slice reconstruction, also leads to more uniform noise ratios, even when making use of the 180LI and 360LI interpolation approaches","tok_text":"favor nois uniform properti of fourier-bas interpol and reconstruct approach in single-slic helic comput tomographi \n volum reconstruct by standard method from single-slic helic comput tomographi ( ct ) data have been shown to have nois level that are highli nonuniform rel to those in convent ct . these nois nonuniform can affect low-contrast object detect and have also been identifi as the caus of the zebra artifact that plagu maximum intens project ( mip ) imag of such volum . while these spatial variant nois level have their root in the peculiar of the helic scan geometri , there is also a strong depend on the interpol and reconstruct algorithm employ . in thi paper , we seek to develop imag reconstruct strategi that elimin or reduc , at it sourc , the nonuniform of nois level in helic ct rel to that in convent ct . we pursu two approach , independ and in concert . 
we argu , and verifi , that fourier-bas longitudin interpol approach lead to more uniform nois ratio than do the standard 360li and 180li approach . we also demonstr that a fourier-bas fan-to-parallel rebin algorithm , use as an altern to fanbeam filter backproject for slice reconstruct , also lead to more uniform nois ratio , even when make use of the 180li and 360li interpol approach","ordered_present_kp":[31,80,56,6,286,1054,958,332,406],"keyphrases":["noise uniformity properties","Fourier-based interpolation","reconstruction approaches","single-slice helical computed tomography","conventional CT","low-contrast object detectability","zebra artifacts","more uniform noise ratios","Fourier-based fan-to-parallel rebinning algorithm","medical diagnostic imaging","maximum intensity projection images","helical span geometry"],"prmu":["P","P","P","P","P","P","P","P","P","M","R","M"]} {"id":"991","title":"Estimation of blocking probabilities in cellular networks with dynamic channel assignment","abstract":"Blocking probabilities in cellular mobile communication networks using dynamic channel assignment are hard to compute for realistic sized systems. This computational difficulty is due to the structure of the state space, which imposes strong coupling constraints amongst components of the occupancy vector. Approximate tractable models have been proposed, which have product form stationary state distributions. However, for real channel assignment schemes, the product form is a poor approximation and it is necessary to simulate the actual occupancy process in order to estimate the blocking probabilities. Meaningful estimates of the blocking probability typically require an enormous amount of CPU time for simulation, since blocking events are usually rare. Advanced simulation approaches use importance sampling (IS) to overcome this problem. We study two regimes under which blocking is a rare event: low-load and high cell capacity. 
Our simulations use the standard clock (SC) method. For low load, we propose a change of measure that we call static ISSC, which has bounded relative error. For high capacity, we use a change of measure that depends on the current state of the network occupancy. This is the dynamic ISSC method. We prove that this method yields zero variance estimators for single clique models, and we empirically show the advantages of this method over naive simulation for networks of moderate size and traffic loads","tok_text":"estim of block probabl in cellular network with dynam channel assign \n block probabl in cellular mobil commun network use dynam channel assign are hard to comput for realist size system . thi comput difficulti is due to the structur of the state space , which impos strong coupl constraint amongst compon of the occup vector . approxim tractabl model have been propos , which have product form stationari state distribut . howev , for real channel assign scheme , the product form is a poor approxim and it is necessari to simul the actual occup process in order to estim the block probabl . meaning estim of the block probabl typic requir an enorm amount of cpu time for simul , sinc block event are usual rare . advanc simul approach use import sampl ( is ) to overcom thi problem . we studi two regim under which block is a rare event : low-load and high cell capac . our simul use the standard clock ( sc ) method . for low load , we propos a chang of measur that we call static issc , which ha bound rel error . for high capac , we use a chang of measur that depend on the current state of the network occup . thi is the dynam issc method . 
we prove that thi method yield zero varianc estim for singl cliqu model , and we empir show the advantag of thi method over naiv simul for network of moder size and traffic load","ordered_present_kp":[48,88,266,312,327,381,659,523,740,840,853,999,1126,1177,1200],"keyphrases":["dynamic channel assignment","cellular mobile communication networks","strong coupling constraints","occupancy vector","approximate tractable models","product form stationary state distributions","simulation","CPU time","importance sampling","low-load","high cell capacity","bounded relative error","dynamic ISSC method","zero variance estimators","single clique models","blocking probability estimation","standard clock method","static ISSC method","quality of service","network traffic load"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R","M","R"]} {"id":"546","title":"Real-time quasi-2-D inversion of array resistivity logging data using neural network","abstract":"We present a quasi-2-D real-time inversion algorithm for a modern galvanic array tool via dimensional reduction and neural network simulation. Using reciprocity and superposition, we apply a numerical focusing technique to the unfocused data. The numerically focused data are much less subject to 2-D and layering effects and can be approximated as from a cylindrical 1-D Earth. We then perform 1-D inversion on the focused data to provide approximate information about the 2-D resistivity structure. A neural network is used to perform forward modeling in the 1-D inversion, which is several hundred times faster than conventional numerical forward solutions. 
Testing our inversion algorithm on both synthetic and field data shows that this fast inversion algorithm is useful for providing formation resistivity information at a well site","tok_text":"real-tim quasi-2-d invers of array resist log data use neural network \n we present a quasi-2-d real-tim invers algorithm for a modern galvan array tool via dimension reduct and neural network simul . use reciproc and superposit , we appli a numer focus techniqu to the unfocus data . the numer focus data are much less subject to 2-d and layer effect and can be approxim as from a cylindr 1-d earth . we then perform 1-d invers on the focus data to provid approxim inform about the 2-d resist structur . a neural network is use to perform forward model in the 1-d invers , which is sever hundr time faster than convent numer forward solut . test our invers algorithm on both synthet and field data show that thi fast invers algorithm is use for provid format resist inform at a well site","ordered_present_kp":[0,29,55,95,134,156,204,217,241,269,417,271,539,752,778],"keyphrases":["real-time quasi-2-D inversion","array resistivity logging data","neural network","real-time inversion algorithm","galvanic array tool","dimensional reduction","reciprocity","superposition","numerical focusing technique","unfocused data","focused data","1-D inversion","forward modeling","formation resistivity","well site"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"687","title":"Image reconstruction of simulated specimens using convolution back projection","abstract":"This paper reports the reconstruction of cross-sections of composite structures. The convolution back projection (CBP) algorithm has been used to capture the attenuation field over the specimen. Five different test cases have been taken up for evaluation. These cases represent varying degrees of complexity. In addition, the role of filters on the nature of the reconstruction errors has also been discussed. 
Numerical results obtained in the study reveal that CBP algorithm is a useful tool for qualitative as well as quantitative assessment of composite regions encountered in engineering applications","tok_text":"imag reconstruct of simul specimen use convolut back project \n thi paper report the reconstruct of cross-sect of composit structur . the convolut back project ( cbp ) algorithm ha been use to captur the attenu field over the specimen . five differ test case have been taken up for evalu . these case repres vari degre of complex . in addit , the role of filter on the natur of the reconstruct error ha also been discuss . numer result obtain in the studi reveal that cbp algorithm is a use tool for qualit as well as quantit assess of composit region encount in engin applic","ordered_present_kp":[0,20,39,113,203,354,381,467,535,562],"keyphrases":["image reconstruction","simulated specimens","convolution back projection","composite structures","attenuation field","filters","reconstruction errors","CBP algorithm","composite regions","engineering applications","computerised tomography"],"prmu":["P","P","P","P","P","P","P","P","P","P","U"]} {"id":"1297","title":"Stochastic optimization of acoustic response - a numerical and experimental comparison","abstract":"The objective of the work presented is to compare results from numerical optimization with experimental data and to highlight and discuss the differences between two fundamentally different optimization methods. The problem domain is minimization of acoustic emission and the structure used in the work is a closed cylinder with forced vibration of one end. The optimization method used in this paper is simulated annealing (SA), a stochastic method. 
The results are compared with those from a gradient-based method used on the same structure in an earlier paper (Tinnsten, 2000)","tok_text":"stochast optim of acoust respons - a numer and experiment comparison \n the object of the work present is to compar result from numer optim with experiment data and to highlight and discuss the differ between two fundament differ optim method . the problem domain is minim of acoust emiss and the structur use in the work is a close cylind with forc vibrat of one end . the optim method use in thi paper is simul anneal ( sa ) , a stochast method . the result are compar with those from a gradient-bas method use on the same structur in an earlier paper ( tinnsten , 2000 )","ordered_present_kp":[127,296,326,18,344,406,0,488],"keyphrases":["stochastic optimization","acoustic response","numerical optimization","structure","closed cylinder","forced vibration","simulated annealing","gradient-based method","acoustic emission minimization"],"prmu":["P","P","P","P","P","P","P","P","R"]} {"id":"112","title":"Revisiting Hardy's paradox: Counterfactual statements, real measurements, entanglement and weak values","abstract":"Hardy's (1992) paradox is revisited. Usually the paradox is dismissed on grounds of counterfactuality, i.e., because the paradoxical effects appear only when one considers results of experiments which do not actually take place. We suggest a new set of measurements in connection with Hardy's scheme, and show that when they are actually performed, they yield strange and surprising outcomes. More generally, we claim that counterfactual paradoxes point to a deeper structure inherent to quantum mechanics","tok_text":"revisit hardi 's paradox : counterfactu statement , real measur , entangl and weak valu \n hardi 's ( 1992 ) paradox is revisit . usual the paradox is dismiss on ground of counterfactu , i.e. , becaus the paradox effect appear onli when one consid result of experi which do not actual take place . 
we suggest a new set of measur in connect with hardi 's scheme , and show that when they are actual perform , they yield strang and surpris outcom . more gener , we claim that counterfactu paradox point to a deeper structur inher to quantum mechan","ordered_present_kp":[27,52,66,78,204,530],"keyphrases":["counterfactual statements","real measurements","entanglement","weak values","paradoxical effects","quantum mechanics","Hardy paradox","gedanken-experiments"],"prmu":["P","P","P","P","P","P","R","U"]} {"id":"951","title":"How to drive strategic innovation [law firms]","abstract":"Innovation. It has everything to do with organization and attitude. Marginal improvement isn't enough anymore. Convert your problem-solving skills into a new value for the entire firm. 10 initiatives","tok_text":"how to drive strateg innov [ law firm ] \n innov . it ha everyth to do with organ and attitud . margin improv is n't enough anymor . convert your problem-solv skill into a new valu for the entir firm . 10 initi","ordered_present_kp":[29,13],"keyphrases":["strategic innovation","law firms","management","change","clients","experiments"],"prmu":["P","P","U","U","U","U"]} {"id":"914","title":"A knowledge management framework for the support of decision making in humanitarian assistance\/disaster relief","abstract":"The major challenge in current humanitarian assistance\/disaster relief (HA\/DR) efforts is that diverse information and knowledge are widely distributed and owned by different organizations. These resources are not efficiently organized and utilized during HA\/DR operations. We present a knowledge management framework that integrates multiple information technologies to collect, analyze, and manage information and knowledge for supporting decision making in HA\/DR. The framework will help identify the information needs, be aware of a disaster situation, and provide decision-makers with useful relief recommendations based on past experience. 
A comprehensive, consistent and authoritative knowledge base within the framework will facilitate knowledge sharing and reuse. This framework can also be applied to other similar real-time decision-making environments, such as crisis management and emergency medical assistance","tok_text":"a knowledg manag framework for the support of decis make in humanitarian assist \/ disast relief \n the major challeng in current humanitarian assist \/ disast relief ( ha \/ dr ) effort is that divers inform and knowledg are wide distribut and own by differ organ . these resourc are not effici organ and util dure ha \/ dr oper . we present a knowledg manag framework that integr multipl inform technolog to collect , analyz , and manag inform and knowledg for support decis make in ha \/ dr . the framework will help identifi the inform need , be awar of a disast situat , and provid decision-mak with use relief recommend base on past experi . a comprehens , consist and authorit knowledg base within the framework will facilit knowledg share and reus . thi framework can also be appli to other similar real-tim decision-mak environ , such as crisi manag and emerg medic assist","ordered_present_kp":[2,60,82,255,385,527,726,801,841,857],"keyphrases":["knowledge management framework","humanitarian assistance","disaster relief","organizations","information technology","information needs","knowledge sharing","real-time decision-making environments","crisis management","emergency medical assistance","decision support system","knowledge reuse","case-based reasoning"],"prmu":["P","P","P","P","P","P","P","P","P","P","M","R","U"]} {"id":"586","title":"A strategy for a payoff-switching differential game based on fuzzy reasoning","abstract":"In this paper, a new concept of a payoff-switching differential game is introduced. In this new game, any one player at any time may have several choices of payoffs for the future. 
Moreover, the payoff-switching process, including the time of payoff switching and the outcome payoff, of any one player is unknown to the other. Indeed, the overall payoff, which is a sequence of several payoffs, is unknown until the game ends. An algorithm for determining a reasoning strategy based on fuzzy reasoning is proposed. In this algorithm, the fuzzy theory is used to estimate the behavior of one player during a past time interval. By deriving two fuzzy matrices GSM, game similarity matrix, and VGSM, variation of GSM, the behavior of the player can be quantified. Two weighting vectors are selected to weight the relative importance of the player's behavior at each past time instant. Finally a simple fuzzy inference rule is adopted to generate a linear reasoning strategy. The advantage of this algorithm is that it provides a flexible way for differential game specialists to convert their knowledge into a \"reasonable\" strategy. A practical example of guarding three territories is given to illustrate our main ideas","tok_text":"a strategi for a payoff-switch differenti game base on fuzzi reason \n in thi paper , a new concept of a payoff-switch differenti game is introduc . in thi new game , ani one player at ani time may have sever choic of payoff for the futur . moreov , the payoff-switch process , includ the time of payoff switch and the outcom payoff , of ani one player is unknown to the other . inde , the overal payoff , which is a sequenc of sever payoff , is unknown until the game end . an algorithm for determin a reason strategi base on fuzzi reason is propos . in thi algorithm , the fuzzi theori is use to estim the behavior of one player dure a past time interv . by deriv two fuzzi matric gsm , game similar matrix , and vgsm , variat of gsm , the behavior of the player can be quantifi . two weight vector are select to weight the rel import of the player 's behavior at each past time instant . 
final a simpl fuzzi infer rule is adopt to gener a linear reason strategi . the advantag of thi algorithm is that it provid a flexibl way for differenti game specialist to convert their knowledg into a \" reason \" strategi . a practic exampl of guard three territori is given to illustr our main idea","ordered_present_kp":[17,296,318,502,55,669,688,786,904,31],"keyphrases":["payoff-switching differential game","differential game","fuzzy reasoning","payoff switching","outcome payoff","reasoning strategy","fuzzy matrices","game similarity matrix","weighting vectors","fuzzy inference"],"prmu":["P","P","P","P","P","P","P","P","P","P"]} {"id":"1196","title":"Multiple shooting using a dichotomically stable integrator for solving differential-algebraic equations","abstract":"In previous work by the first author, it has been established that a dichotomically stable discretization is needed when solving a stiff boundary-value problem in ordinary differential equations (ODEs), when sharp boundary layers may occur at each end of the interval. A dichotomically stable implicit Runge-Kutta method, using the 3-stage, fourth-order, Lobatto IIIA formulae, has been implemented in a variable step-size initial-value integrator, which could be used in a multiple-shooting approach. In the case of index-one differential-algebraic equations (DAEs) the use of the Lobatto IIIA formulae has an advantage, over a comparable Gaussian method, that the order is the same for both differential and algebraic variables, and there is no need to treat them separately. The ODE integrator has been adapted for the solution of index-one DAEs, and the resulting integrator (SYMDAE) has been inserted into the multiple-shooting code (MSHDAE) previously developed by R. Lamour for differential-algebraic boundary-value problems. 
The standard version of MSHDAE uses a BDF integrator, which is not dichotomically stable, and for some stiff test problems this fails to integrate across the interval of interest, while the dichotomically stable integrator SYMDAE encounters no difficulty. Indeed, for such problems, the modified version of MSHDAE produces an accurate solution, and within limits imposed by computer word length, the efficiency of the solution process improves with increasing stiffness. For some nonstiff problems, the solution is also entirely satisfactory","tok_text":"multipl shoot use a dichotom stabl integr for solv differential-algebra equat \n in previou work by the first author , it ha been establish that a dichotom stabl discret is need when solv a stiff boundary-valu problem in ordinari differenti equat ( ode ) , when sharp boundari layer may occur at each end of the interv . a dichotom stabl implicit runge-kutta method , use the 3-stage , fourth-ord , lobatto iiia formula , ha been implement in a variabl step-siz initial-valu integr , which could be use in a multiple-shoot approach . in the case of index-on differential-algebra equat ( dae ) the use of the lobatto iiia formula ha an advantag , over a compar gaussian method , that the order is the same for both differenti and algebra variabl , and there is no need to treat them separ . the ode integr ha been adapt for the solut of index-on dae , and the result integr ( symda ) ha been insert into the multiple-shoot code ( mshdae ) previous develop by r. lamour for differential-algebra boundary-valu problem . the standard version of mshdae use a bdf integr , which is not dichotom stabl , and for some stiff test problem thi fail to integr across the interv of interest , while the dichotom stabl integr symda encount no difficulti . inde , for such problem , the modifi version of mshdae produc an accur solut , and within limit impos by comput word length , the effici of the solut process improv with increas stiff . 
for some nonstiff problem , the solut is also entir satisfactori","ordered_present_kp":[0,189,220,337,398,461,20,51],"keyphrases":["multiple shooting","dichotomically stable integrator","differential-algebraic equations","stiff boundary-value problem","ordinary differential equations","implicit Runge-Kutta method","Lobatto IIIA formulae","initial-value integrator"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"809","title":"Edison's direct current influenced \"Broadway\" show lighting","abstract":"During the early decades of the 20 th century, midtown Manhattan in New York City developed an extensive underground direct current (DC) power distribution system. This was a result of the original introduction of direct current by Thomas Edison's pioneering Pearl Street Station in 1882. The availability of DC power in the theater district, led to the perpetuation of an archaic form of stage lighting control through nearly three-quarters of the 20 th century. This control device was known as a \"resistance dimmer.\" It was essentially a series-connected rheostat, but it was wound with a special resistance \"taper\" so as to provide a uniform change in the apparent light output of typical incandescent lamps throughout the travel of its manually operated arm. The development and use of DC powered stage lighting is discussed in this article","tok_text":"edison 's direct current influenc \" broadway \" show light \n dure the earli decad of the 20 th centuri , midtown manhattan in new york citi develop an extens underground direct current ( dc ) power distribut system . thi wa a result of the origin introduct of direct current by thoma edison 's pioneer pearl street station in 1882 . the avail of dc power in the theater district , led to the perpetu of an archaic form of stage light control through nearli three-quart of the 20 th centuri . thi control devic wa known as a \" resist dimmer . 
\" it wa essenti a series-connect rheostat , but it wa wound with a special resist \" taper \" so as to provid a uniform chang in the appar light output of typic incandesc lamp throughout the travel of it manual oper arm . the develop and use of dc power stage light is discuss in thi articl","ordered_present_kp":[112,125,361,421,525,559,672,700,784],"keyphrases":["Manhattan","New York City","theater district","stage lighting control","resistance dimmer","series-connected rheostat","apparent light output","incandescent lamps","DC powered stage lighting","Broadway show lighting","underground direct current power distribution system","Thomas Edison's Pearl Street Station","resistance taper"],"prmu":["P","P","P","P","P","P","P","P","P","R","R","R","R"]} {"id":"767","title":"Quantum computation for physical modeling","abstract":"One of the most famous American physicists of the twentieth century, Richard Feynman, in 1982 was the first to propose using a quantum mechanical computing device to efficiently simulate quantum mechanical many-body dynamics, a task that is exponentially complex in the number of particles treated and is completely intractable by any classical computing means for large systems of many particles. In the two decades following his work, remarkable progress has been made both theoretically and experimentally in the new field of quantum computation","tok_text":"quantum comput for physic model \n one of the most famou american physicist of the twentieth centuri , richard feynman , in 1982 wa the first to propos use a quantum mechan comput devic to effici simul quantum mechan many-bodi dynam , a task that is exponenti complex in the number of particl treat and is complet intract by ani classic comput mean for larg system of mani particl . 
in the two decad follow hi work , remark progress ha been made both theoret and experiment in the new field of quantum comput","ordered_present_kp":[0,19,157,201],"keyphrases":["quantum computation","physical modeling","quantum mechanical computing","quantum mechanical many-body dynamics"],"prmu":["P","P","P","P"]} {"id":"722","title":"Updating systems for monitoring and controlling power equipment on the basis of the firmware system SARGON","abstract":"The economic difficulties experienced by the power industry of Russia has considerably retarded the speed of commissioning new capacities and reconstructing equipment in service. The increasing deterioration of the equipment at power stations makes the problem of its updating very acute. The main efforts of organizations working in the power industry are now focused on updating all kinds of equipment installed at power installations. The necessary condition for the efficient operation of power equipment is to carry out serious modernization of systems for monitoring and control (SMC) of technological processes. The specialists at ZAO NVT-Avtomatika have developed efficient technology for updating the SMC on the basis of the firmware system SARGON which ensures the fast introduction of high-quality systems of automation with a minimal payback time of the capital outlay. This paper discusses the updating of equipment using SARGON","tok_text":"updat system for monitor and control power equip on the basi of the firmwar system sargon \n the econom difficulti experienc by the power industri of russia ha consider retard the speed of commiss new capac and reconstruct equip in servic . the increas deterior of the equip at power station make the problem of it updat veri acut . the main effort of organ work in the power industri are now focus on updat all kind of equip instal at power instal . 
the necessari condit for the effici oper of power equip is to carri out seriou modern of system for monitor and control ( smc ) of technolog process . the specialist at zao nvt-avtomatika have develop effici technolog for updat the smc on the basi of the firmwar system sargon which ensur the fast introduct of high-qual system of autom with a minim payback time of the capit outlay . thi paper discuss the updat of equip use sargon","ordered_present_kp":[131,149,619],"keyphrases":["power industry","Russia","ZAO NVT-Avtomatika","SARGON firmware system","monitoring systems","control systems","power equipment monitoring","power equipment control"],"prmu":["P","P","P","R","R","R","R","R"]} {"id":"1377","title":"Open hypermedia for product support","abstract":"As industrial systems become increasingly more complex, the maintenance and operating information increases both in volume and complexity. With the current pressures on manufacturing, the management of information resources has become a critical issue. In particular, ensuring that personnel can access current information quickly and effectively when undertaking a specific task. This paper discusses some of the issues involved in, and the benefits of using, open hypermedia to manage and deliver a diverse range of information. While the paper concentrates on the problems specifically associated with manufacturing organizations, the problems are generic across other business sectors such as healthcare, defence and finance. The open hypermedia approach to information management and delivery allows a multimedia resource base to be used for a range of applications and it permits a user to have controlled access to the required information in an easily accessible and structured manner. Recent advancement in hypermedia also permits just-in-time support in the most appropriate format for all users. 
Our approach is illustrated by the discussion of a case study in which an open hypermedia system delivers maintenance and process information to factory-floor users to support the maintenance and operation of a very large manufacturing cell","tok_text":"open hypermedia for product support \n as industri system becom increasingli more complex , the mainten and oper inform increas both in volum and complex . with the current pressur on manufactur , the manag of inform resourc ha becom a critic issu . in particular , ensur that personnel can access current inform quickli and effect when undertak a specif task . thi paper discuss some of the issu involv in , and the benefit of use , open hypermedia to manag and deliv a divers rang of inform . while the paper concentr on the problem specif associ with manufactur organ , the problem are gener across other busi sector such as healthcar , defenc and financ . the open hypermedia approach to inform manag and deliveri allow a multimedia resourc base to be use for a rang of applic and it permit a user to have control access to the requir inform in an easili access and structur manner . recent advanc in hypermedia also permit just-in-tim support in the most appropri format for all user . our approach is illustr by the discuss of a case studi in which an open hypermedia system deliv mainten and process inform to factory-floor user to support the mainten and oper of a veri larg manufactur cell","ordered_present_kp":[0,95,107,209,927,20],"keyphrases":["open hypermedia","product support","maintenance","operating information","information resources","just-in-time support"],"prmu":["P","P","P","P","P","P"]} {"id":"1332","title":"Personal cards for on-line purchases","abstract":"Buying presents over the Web has advantages for a busy person: lots of choices, 24-hour accessibility, quick delivery, and you don't even have to wrap the gift. 
But many people like to select a card or write a personal note to go with their presents, and the options for doing that have been limited. Two companies have seen this limitation as an opportunity: 4YourSoul.com and CardintheBox.com","tok_text":"person card for on-lin purchas \n buy present over the web ha advantag for a busi person : lot of choic , 24-hour access , quick deliveri , and you do n't even have to wrap the gift . but mani peopl like to select a card or write a person note to go with their present , and the option for do that have been limit . two compani have seen thi limit as an opportun : 4yoursoul.com and cardinthebox.com","ordered_present_kp":[364,382,0],"keyphrases":["personal cards","4YourSoul.com","CardintheBox.com","personalized printing","online purchases"],"prmu":["P","P","P","M","M"]} {"id":"1076","title":"Delayed-choice entanglement swapping with vacuum-one-photon quantum states","abstract":"We report the experimental realization of a recently discovered quantum-information protocol by Peres implying an apparent nonlocal quantum mechanical retrodiction effect. The demonstration is carried out by a quantum optical method by which each singlet entangled state is physically implemented by a two-dimensional subspace of Fock states of a mode of the electromagnetic field, specifically the space spanned by the vacuum and the one-photon state, along lines suggested recently by E. Knill et al. [Nature (London) 409, 46 (2001)] and by M. Duan et al. [ibid. 414, 413 (2001)]","tok_text":"delayed-choic entangl swap with vacuum-one-photon quantum state \n we report the experiment realiz of a recent discov quantum-inform protocol by pere impli an appar nonloc quantum mechan retrodict effect . 
the demonstr is carri out by a quantum optic method by which each singlet entangl state is physic implement by a two-dimension subspac of fock state of a mode of the electromagnet field , specif the space span by the vacuum and the one-photon state , along line suggest recent by e. knill et al . [ natur ( london ) 409 , 46 ( 2001 ) ] and by m. duan et al . [ ibid . 414 , 413 ( 2001 ) ]","ordered_present_kp":[0,117,164,236,271,318,343,32,437],"keyphrases":["delayed-choice entanglement","vacuum-one-photon quantum states","quantum-information","nonlocal quantum mechanical retrodiction effect","quantum optical method","singlet entangled state","two-dimensional subspace","Fock states","one-photon state","state entanglement","electromagnetic field mode","vacuum state"],"prmu":["P","P","P","P","P","P","P","P","P","R","R","R"]} {"id":"1033","title":"Optical two-step modified signed-digit addition based on binary logic gates","abstract":"A new modified signed-digit (MSD) addition algorithm based on binary logic gates is proposed for parallel computing. It is shown that by encoding each of the input MSD digits and flag digits into a pair of binary bits, the number of addition steps can be reduced to two. The flag digit is introduced to characterize the next low order pair (NLOP) of the input digits in order to suppress carry propagation. The rules for two-step addition of binary coded MSD (BCMSD) numbers are formulated that can be implemented using optical shadow-casting logic system","tok_text":"optic two-step modifi signed-digit addit base on binari logic gate \n a new modifi signed-digit ( msd ) addit algorithm base on binari logic gate is propos for parallel comput . it is shown that by encod each of the input msd digit and flag digit into a pair of binari bit , the number of addit step can be reduc to two . the flag digit is introduc to character the next low order pair ( nlop ) of the input digit in order to suppress carri propag . 
the rule for two-step addit of binari code msd ( bcmsd ) number are formul that can be implement use optic shadow-cast logic system","ordered_present_kp":[0,49,159,215,235,261,288,370,462,480,550],"keyphrases":["optical two-step modified signed-digit addition","binary logic gates","parallel computing","input MSD digits","flag digits","binary bits","addition steps","low order pair","two-step addition","binary coded MSD","optical shadow-casting logic system","modified signed-digit addition algorithm","carry propagation suppression"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R","R"]} {"id":"64","title":"Speech enhancement using a mixture-maximum model","abstract":"We present a spectral domain, speech enhancement algorithm. The new algorithm is based on a mixture model for the short time spectrum of the clean speech signal, and on a maximum assumption in the production of the noisy speech spectrum. In the past this model was used in the context of noise robust speech recognition. In this paper we show that this model is also effective for improving the quality of speech signals corrupted by additive noise. The computational requirements of the algorithm can be significantly reduced, essentially without paying performance penalties, by incorporating a dual codebook scheme with tied variances. Experiments, using recorded speech signals and actual noise sources, show that in spite of its low computational requirements, the algorithm shows improved performance compared to alternative speech enhancement algorithms","tok_text":"speech enhanc use a mixture-maximum model \n we present a spectral domain , speech enhanc algorithm . the new algorithm is base on a mixtur model for the short time spectrum of the clean speech signal , and on a maximum assumpt in the product of the noisi speech spectrum . in the past thi model wa use in the context of nois robust speech recognit . 
in thi paper we show that thi model is also effect for improv the qualiti of speech signal corrupt by addit nois . the comput requir of the algorithm can be significantli reduc , essenti without pay perform penalti , by incorpor a dual codebook scheme with tie varianc . experi , use record speech signal and actual nois sourc , show that in spite of it low comput requir , the algorithm show improv perform compar to altern speech enhanc algorithm","ordered_present_kp":[20,57,75,132,153,180,249,320,452,549,581,607,634,666,704],"keyphrases":["mixture-maximum model","spectral domain","speech enhancement algorithm","mixture model","short time spectrum","clean speech signal","noisy speech spectrum","noise robust speech recognition","additive noise","performance penalties","dual codebook","tied variances","recorded speech signals","noise sources","low computational requirements","speech signal quality","Gaussian mixture model","MIXMAX model","speech intelligibility"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","M","M","M"]} {"id":"136","title":"Design of 1-D and 2-D variable fractional delay allpass filters using weighted least-squares method","abstract":"In this paper, a weighted least-squares method is presented to design one-dimensional and two-dimensional variable fractional delay allpass filters. First, each coefficient of the variable allpass filter is expressed as the polynomial of the fractional delay parameter. Then, the nonlinear phase error is approximated by a weighted equation error such that the cost function can be converted into a quadratic form. Next, by minimizing the weighted equation error, the optimal polynomial coefficients can be obtained iteratively by solving a set of linear simultaneous equations at each iteration. 
Finally, the design examples are demonstrated to illustrate the effectiveness of the proposed approach","tok_text":"design of 1-d and 2-d variabl fraction delay allpass filter use weight least-squar method \n in thi paper , a weight least-squar method is present to design one-dimension and two-dimension variabl fraction delay allpass filter . first , each coeffici of the variabl allpass filter is express as the polynomi of the fraction delay paramet . then , the nonlinear phase error is approxim by a weight equat error such that the cost function can be convert into a quadrat form . next , by minim the weight equat error , the optim polynomi coeffici can be obtain iter by solv a set of linear simultan equat at each iter . final , the design exampl are demonstr to illustr the effect of the propos approach","ordered_present_kp":[64,22,314,389,422,518,578],"keyphrases":["variable fractional delay allpass filters","weighted least-squares method","fractional delay parameter","weighted equation error","cost function","optimal polynomial coefficients","linear simultaneous equations","1D allpass filters","2D allpass filters","nonlinear phase error approximation"],"prmu":["P","P","P","P","P","P","P","M","M","R"]} {"id":"975","title":"Algebraic conditions for high-order convergent deferred correction schemes based on Runge-Kutta-Nystrom methods for second order boundary value problems","abstract":"In [T. Van Hecke, M. Van Daele, J. Comp. Appl. Math., vol. 132, p. 107-125, (2001)] the investigation of high-order convergence of deferred correction schemes for the numerical solution of second order nonlinear two-point boundary value problems not containing the first derivative, is made. The derivation of the algebraic conditions to raise the increase of order by the deferred correction scheme was based on Taylor series expansions. 
In this paper we describe a more elegant way by means of P-series to obtain this necessary conditions and generalize this idea to equations of the form y\" = f (t, y, y')","tok_text":"algebra condit for high-ord converg defer correct scheme base on runge-kutta-nystrom method for second order boundari valu problem \n in [ t. van heck , m. van dael , j. comp . appl . math . , vol . 132 , p. 107 - 125 , ( 2001 ) ] the investig of high-ord converg of defer correct scheme for the numer solut of second order nonlinear two-point boundari valu problem not contain the first deriv , is made . the deriv of the algebra condit to rais the increas of order by the defer correct scheme wa base on taylor seri expans . in thi paper we describ a more eleg way by mean of p-seri to obtain thi necessari condit and gener thi idea to equat of the form y \" = f ( t , y , y ' )","ordered_present_kp":[19,65,96,36,310,0,505],"keyphrases":["algebraic conditions","high-order convergent deferred correction schemes","deferred correction schemes","Runge-Kutta-Nystrom methods","second order boundary value problems","second order nonlinear two-point boundary value problems","Taylor series expansions"],"prmu":["P","P","P","P","P","P","P"]} {"id":"930","title":"NARX-based technique for the modelling of magneto-rheological damping devices","abstract":"This paper presents a methodology for identifying variable-structure nonlinear models of magneto-rheological dampers (MRD) and similar devices. Its peculiarity with respect to the mainstream literature is to be especially conceived for obtaining models that are structurally simple, easy to estimate and well suited for model-based control. This goal is pursued by adopting linear-in-the-parameters NARX models, for which an identification method is developed based on the minimization of the simulation error. This method is capable of selecting the model structure together with the parameters, thus it does not require a priori structural information. 
A set of validation tests is reported, with the aim of demonstrating the technique's efficiency by comparing it to a widely accepted MRD modelling approach","tok_text":"narx-bas techniqu for the model of magneto-rheolog damp devic \n thi paper present a methodolog for identifi variable-structur nonlinear model of magneto-rheolog damper ( mrd ) and similar devic . it peculiar with respect to the mainstream literatur is to be especi conceiv for obtain model that are structur simpl , easi to estim and well suit for model-bas control . thi goal is pursu by adopt linear-in-the-paramet narx model , for which an identif method is develop base on the minim of the simul error . thi method is capabl of select the model structur togeth with the paramet , thu it doe not requir a priori structur inform . a set of valid test is report , with the aim of demonstr the techniqu 's effici by compar it to a wide accept mrd model approach","ordered_present_kp":[26,348,417,99,481,494,642,743],"keyphrases":["modelling","identification","model-based control","NARX models","minimization","simulation error","validation","MRD modelling","magnetorheological damping"],"prmu":["P","P","P","P","P","P","P","P","M"]} {"id":"988","title":"A new merging algorithm for constructing suffix trees for integer alphabets","abstract":"A new approach for constructing a suffix tree T\/sub s\/ for a given string S is to construct recursively a suffix tree T\/sub o\/ for odd positions, construct a suffix, tree T\/sub e\/ for even positions from T\/sub o\/ and then merge T\/sub o\/ and T\/sub e\/ into T\/sub s\/. To construct suffix trees for integer alphabets in linear time had been a major open problem on index data structures. Farach used this approach and gave the first linear-time algorithm for integer alphabets. The hardest part of Farach's algorithm is the merging step. In this paper we present a new and simpler merging algorithm based on a coupled BFS (breadth-first search). 
Our merging algorithm is more intuitive than Farach's coupled DFS (depth-first search) merging, and thus it can be easily extended to other applications","tok_text":"a new merg algorithm for construct suffix tree for integ alphabet \n a new approach for construct a suffix tree t \/ sub s\/ for a given string s is to construct recurs a suffix tree t \/ sub o\/ for odd posit , construct a suffix , tree t \/ sub e\/ for even posit from t \/ sub o\/ and then merg t \/ sub o\/ and t \/ sub e\/ into t \/ sub s\/. to construct suffix tree for integ alphabet in linear time had been a major open problem on index data structur . farach use thi approach and gave the first linear-tim algorithm for integ alphabet . the hardest part of farach 's algorithm is the merg step . in thi paper we present a new and simpler merg algorithm base on a coupl bf ( breadth-first search ) . our merg algorithm is more intuit than farach 's coupl df ( depth-first search ) merg , and thu it can be easili extend to other applic","ordered_present_kp":[6,424,657,668,35,51,379],"keyphrases":["merging algorithm","suffix trees","integer alphabets","linear time","index data structures","coupled BFS","breadth-first search","recursive construction"],"prmu":["P","P","P","P","P","P","P","R"]} {"id":"99","title":"Radianz and Savvis look to expand service in wake of telecom scandals [finance]","abstract":"With confidence in network providers waning, Radianz and Savvis try to prove their stability. Savvis and Radianz, which both specialize in providing the data-extranet components of telecommunication infrastructures, may see more networking doors open at investment banks, brokerage houses, exchanges and alternative-trading systems","tok_text":"radianz and savvi look to expand servic in wake of telecom scandal [ financ ] \n with confid in network provid wane , radianz and savvi tri to prove their stabil . 
savvi and radianz , which both special in provid the data-extranet compon of telecommun infrastructur , may see more network door open at invest bank , brokerag hous , exchang and alternative-trad system","ordered_present_kp":[95,12,0,216,240,301,315,331,343],"keyphrases":["Radianz","Savvis","network providers","data-extranet","telecommunication infrastructures","investment banks","brokerage houses","exchanges","alternative-trading systems"],"prmu":["P","P","P","P","P","P","P","P","P"]} {"id":"895","title":"Algorithms for improving the quality of R-trees","abstract":"A novel approach to operation with a structure for spatial indexing of extended objects shaped as R-trees is considered. It consists of the initial global construction of an efficient R-tree structure and the subsequent operation with it using conventional dynamic algorithms. A global strategy for constructing an R-tree reduced to a problem of dividing a set of rectangular objects into K parts with minimum mutual overlay is suggested. Base, box, and \"Divide and Conquer\" algorithms are suggested. The results of experimental modeling of the execution of various algorithms are discussed","tok_text":"algorithm for improv the qualiti of r-tree \n a novel approach to oper with a structur for spatial index of extend object shape as r-tree is consid . it consist of the initi global construct of an effici r-tree structur and the subsequ oper with it use convent dynam algorithm . a global strategi for construct an r-tree reduc to a problem of divid a set of rectangular object into k part with minimum mutual overlay is suggest . base , box , and \" divid and conquer \" algorithm are suggest . 
the result of experiment model of the execut of variou algorithm are discuss","ordered_present_kp":[36,90,107,260,357,393],"keyphrases":["R-trees","spatial indexing","extended objects","dynamic algorithms","rectangular objects","minimum mutual overlay","graphical search","computational geometry"],"prmu":["P","P","P","P","P","P","U","U"]} {"id":"1052","title":"Developing a high-performance web server in Concurrent Haskell","abstract":"Server applications, and in particular network-based server applications, place a unique combination of demands on a programming language: lightweight concurrency, high I\/O throughput, and fault tolerance are all important. This paper describes a prototype Web server written in Concurrent Haskell (with extensions), and presents two useful results: firstly, a conforming server could be written with minimal effort, leading to an implementation in less than 1500 lines of code, and secondly the naive implementation produced reasonable performance. Furthermore, making minor modifications to a few time-critical components improved performance to a level acceptable for anything but the most heavily loaded Web servers","tok_text":"develop a high-perform web server in concurr haskel \n server applic , and in particular network-bas server applic , place a uniqu combin of demand on a program languag : lightweight concurr , high i \/ o throughput , and fault toler are all import . thi paper describ a prototyp web server written in concurr haskel ( with extens ) , and present two use result : firstli , a conform server could be written with minim effort , lead to an implement in less than 1500 line of code , and secondli the naiv implement produc reason perform . 
furthermor , make minor modif to a few time-crit compon improv perform to a level accept for anyth but the most heavili load web server","ordered_present_kp":[10,37,88,170,192,220,374,575],"keyphrases":["high-performance Web server","Concurrent Haskell","network-based server applications","lightweight concurrency","high I\/O throughput","fault tolerance","conforming server","time-critical components"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"1017","title":"Searching a scalable approach to cerebellar based control","abstract":"Decades of research into the structure and function of the cerebellum have led to a clear understanding of many of its cells, as well as how learning might take place. Furthermore, there are many theories on what signals the cerebellum operates on, and how it works in concert with other parts of the nervous system. Nevertheless, the application of computational cerebellar models to the control of robot dynamics remains in its infant state. To date, few applications have been realized. The currently emerging family of light-weight robots poses a new challenge to robot control: due to their complex dynamics traditional methods, depending on a full analysis of the dynamics of the system, are no longer applicable since the joints influence each other dynamics during movement. Can artificial cerebellar models compete here?","tok_text":"search a scalabl approach to cerebellar base control \n decad of research into the structur and function of the cerebellum have led to a clear understand of mani of it cell , as well as how learn might take place . furthermor , there are mani theori on what signal the cerebellum oper on , and how it work in concert with other part of the nervou system . nevertheless , the applic of comput cerebellar model to the control of robot dynam remain in it infant state . to date , few applic have been realiz . 
the current emerg famili of light-weight robot pose a new challeng to robot control : due to their complex dynam tradit method , depend on a full analysi of the dynam of the system , are no longer applic sinc the joint influenc each other dynam dure movement . can artifici cerebellar model compet here ?","ordered_present_kp":[9,29,339,384,534,576],"keyphrases":["scalable approach","cerebellar based control","nervous system","computational cerebellar models","light-weight robots","robot control"],"prmu":["P","P","P","P","P","P"]} {"id":"743","title":"Local satellite","abstract":"Consumer based mobile satellite phone services went from boom to burn up in twelve months despite original forecasts predicting 10 million to 40 million users by 2005. Julian Bright wonders what prospects the technology has now and if going regional might be one answer","tok_text":"local satellit \n consum base mobil satellit phone servic went from boom to burn up in twelv month despit origin forecast predict 10 million to 40 million user by 2005 . julian bright wonder what prospect the technolog ha now and if go region might be one answer","ordered_present_kp":[29],"keyphrases":["mobile satellite phone services"],"prmu":["P"]} {"id":"706","title":"Enhancing the reliability of modular medium-voltage drives","abstract":"A method to increase the reliability of modular medium-voltage induction motor drives is discussed, by providing means to bypass a failed module. The impact on reliability is shown. A control, which maximizes the output voltage available after bypass, is described, and experimental results are given","tok_text":"enhanc the reliabl of modular medium-voltag drive \n a method to increas the reliabl of modular medium-voltag induct motor drive is discuss , by provid mean to bypass a fail modul . the impact on reliabl is shown . 
a control , which maxim the output voltag avail after bypass , is describ , and experiment result are given","ordered_present_kp":[87],"keyphrases":["modular medium-voltage induction motor drives","reliability enhancement","failed module bypass","available output voltage control"],"prmu":["P","R","R","R"]} {"id":"1353","title":"Generalized spatio-chromatic diffusion","abstract":"A framework for diffusion of color images is presented. The method is based on the theory of thermodynamics of irreversible transformations which provides a suitable basis for designing correlations between the different color channels. More precisely, we derive an equation for color evolution which comprises a purely spatial diffusive term and a nonlinear term that depends on the interactions among color channels over space. We apply the proposed equation to images represented in several color spaces, such as RGB, CIELAB, Opponent colors, and IHS","tok_text":"gener spatio-chromat diffus \n a framework for diffus of color imag is present . the method is base on the theori of thermodynam of irrevers transform which provid a suitabl basi for design correl between the differ color channel . more precis , we deriv an equat for color evolut which compris a pure spatial diffus term and a nonlinear term that depend on the interact among color channel over space . 
we appli the propos equat to imag repres in sever color space , such as rgb , cielab , oppon color , and ih","ordered_present_kp":[0,56,21,116,131,215,267,301,327,475,481,490,508],"keyphrases":["generalized spatio-chromatic diffusion","diffusion","color images","thermodynamics","irreversible transformations","color channels","color evolution","spatial diffusive term","nonlinear term","RGB","CIELAB","Opponent colors","IHS","vector-valued diffusion","scale-space"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","M","U"]} {"id":"1316","title":"Understanding Internet traffic streams: dragonflies and tortoises","abstract":"We present the concept of network traffic streams and the ways they aggregate into flows through Internet links. We describe a method of measuring the size and lifetime of Internet streams, and use this method to characterize traffic distributions at two different sites. We find that although most streams (about 45 percent of them) are dragonflies, lasting less than 2 seconds, a significant number of streams have lifetimes of hours to days, and can carry a high proportion (50-60 percent) of the total bytes on a given link. We define tortoises as streams that last longer than 15 minutes. We point out that streams can be classified not only by lifetime (dragonflies and tortoises) but also by size (mice and elephants), and note that stream size and lifetime are independent dimensions. We submit that ISPs need to be aware of the distribution of Internet stream sizes, and the impact of the difference in behavior between short and long streams. In particular, any forwarding cache mechanisms in Internet routers must be able to cope with a high volume of short streams. 
In addition ISPs should realize that long-running streams can contribute a significant fraction of their packet and byte volumes-something they may not have allowed for when using traditional \"flat rate user bandwidth consumption\" approaches to provisioning and engineering","tok_text":"understand internet traffic stream : dragonfli and tortois \n we present the concept of network traffic stream and the way they aggreg into flow through internet link . we describ a method of measur the size and lifetim of internet stream , and use thi method to character traffic distribut at two differ site . we find that although most stream ( about 45 percent of them ) are dragonfli , last less than 2 second , a signific number of stream have lifetim of hour to day , and can carri a high proport ( 50 - 60 percent ) of the total byte on a given link . we defin tortois as stream that last longer than 15 minut . we point out that stream can be classifi not onli by lifetim ( dragonfli and tortois ) but also by size ( mice and eleph ) , and note that stream size and lifetim are independ dimens . we submit that isp need to be awar of the distribut of internet stream size , and the impact of the differ in behavior between short and long stream . in particular , ani forward cach mechan in internet router must be abl to cope with a high volum of short stream . 
in addit isp should realiz that long-run stream can contribut a signific fraction of their packet and byte volumes-someth they may not have allow for when use tradit \" flat rate user bandwidth consumpt \" approach to provis and engin","ordered_present_kp":[11,37,51,87,272,725,734,819,975,998,1102],"keyphrases":["Internet traffic streams","dragonflies","tortoises","network traffic streams","traffic distributions","mice","elephants","ISP","forwarding cache mechanisms","Internet routers","long-running streams","Internet stream size measurement","Internet stream lifetime measurement","packet volume","byte volume","traffic provisioning","traffic engineering"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R","R","R","R","R","R"]} {"id":"868","title":"Two quantum analogues of Fisher information from a large deviation viewpoint of quantum estimation","abstract":"We discuss two quantum analogues of the Fisher information, the symmetric logarithmic derivative Fisher information and Kubo-Mori-Bogoljubov Fisher information from a large deviation viewpoint of quantum estimation and prove that the former gives the true bound and the latter gives the bound of consistent superefficient estimators. As another comparison, it is shown that the difference between them is characterized by the change of the order of limits","tok_text":"two quantum analogu of fisher inform from a larg deviat viewpoint of quantum estim \n we discuss two quantum analogu of the fisher inform , the symmetr logarithm deriv fisher inform and kubo-mori-bogoljubov fisher inform from a larg deviat viewpoint of quantum estim and prove that the former give the true bound and the latter give the bound of consist supereffici estim . 
as anoth comparison , it is shown that the differ between them is character by the chang of the order of limit","ordered_present_kp":[4,69,185,345,44,143],"keyphrases":["quantum analogues","large deviation viewpoint","quantum estimation","symmetric logarithmic derivative Fisher information","Kubo-Mori-Bogoljubov Fisher information","consistent superefficient estimators","statistical inference"],"prmu":["P","P","P","P","P","P","U"]} {"id":"1092","title":"Ride quality evaluation of an actively-controlled stretcher for an ambulance","abstract":"This study considers the subjective evaluation of ride quality during ambulance transportation using an actively-controlled stretcher (ACS). The ride quality of a conventional stretcher and an assistant driver's seat is also compared. Braking during ambulance transportation generates negative foot-to-head acceleration in patients and causes blood pressure to rise in the patient's head. The ACS absorbs the foot-to-head acceleration by changing the angle of the stretcher, thus reducing the blood pressure variation. However, the ride quality of the ACS should be investigated further because the movement of the ACS may cause motion sickness and nausea. Experiments of ambulance transportation, including rapid acceleration and deceleration, are performed to evaluate the effect of differences in posture of the transported subject on the ride quality; the semantic differential method and factor analysis are used in the investigations. Subjects are transported using a conventional stretcher with head forward, a conventional stretcher with head backward, the ACS, and an assistant driver's seat for comparison with transportation using a stretcher. Experimental results show that the ACS gives the most comfortable transportation when using a stretcher. 
Moreover, the reduction of the negative foot-to-head acceleration at frequencies below 0.2 Hz and the small variation of the foot-to-head acceleration result in more comfortable transportation. Conventional transportation with the head forward causes the worst transportation, although the characteristics of the vibration of the conventional stretcher seem to be superior to that of the ACS","tok_text":"ride qualiti evalu of an actively-control stretcher for an ambul \n thi studi consid the subject evalu of ride qualiti dure ambul transport use an actively-control stretcher ( ac ) . the ride qualiti of a convent stretcher and an assist driver 's seat is also compar . brake dure ambul transport gener neg foot-to-head acceler in patient and caus blood pressur to rise in the patient 's head . the ac absorb the foot-to-head acceler by chang the angl of the stretcher , thu reduc the blood pressur variat . howev , the ride qualiti of the ac should be investig further becaus the movement of the ac may caus motion sick and nausea . experi of ambul transport , includ rapid acceler and deceler , are perform to evalu the effect of differ in postur of the transport subject on the ride qualiti ; the semant differenti method and factor analysi are use in the investig . subject are transport use a convent stretcher with head forward , a convent stretcher with head backward , the ac , and an assist driver 's seat for comparison with transport use a stretcher . experiment result show that the ac give the most comfort transport when use a stretcher . moreov , the reduct of the neg foot-to-head acceler at frequenc below 0.2 hz and the small variat of the foot-to-head acceler result in more comfort transport . 
convent transport with the head forward caus the worst transport , although the characterist of the vibrat of the convent stretcher seem to be superior to that of the ac","ordered_present_kp":[25,59,0,88,123,204,268,301,483,607,623,667,754,798,827,919,959,1110,1412],"keyphrases":["ride quality evaluation","actively-controlled stretcher","ambulance","subjective evaluation","ambulance transportation","conventional stretcher","braking","negative foot-to-head acceleration","blood pressure variation","motion sickness","nausea","rapid acceleration","transported subject","semantic differential method","factor analysis","head forward","head backward","comfortable transportation","vibration","assistant driver seat","patient head","stretcher angle","rapid deceleration","posture differences"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R","R","R"]} {"id":"1443","title":"C and C++: a case for compatibility","abstract":"Modern C and C++ are sibling languages descended from Classic C. In many people's minds, they are (wrongly, but understandably) fused into the mythical C\/C++ programming language. There is no C\/C++ language, but there is a C\/C++ community. Previously the author described some of the incompatibilities that complicate the work of developers within that C\/C++ community. In this article, he discusses some of the underlying myths that help perpetuate these incompatibilities. He also shows why more compatibility (ideally, full compatibility) is in the best interest of the C\/C++ community. In the next paper, he presents some examples of how the incompatibilities in C and C++ might be resolved","tok_text":"c and c++ : a case for compat \n modern c and c++ are sibl languag descend from classic c. in mani peopl 's mind , they are ( wrongli , but understand ) fuse into the mythic c \/ c++ program languag . there is no c \/ c++ languag , but there is a c \/ c++ commun . 
previous the author describ some of the incompat that complic the work of develop within that c \/ c++ commun . in thi articl , he discuss some of the underli myth that help perpetu these incompat . he also show whi more compat ( ideal , full compat ) is in the best interest of the c \/ c++ commun . in the next paper , he present some exampl of how the incompat in c and c++ might be resolv","ordered_present_kp":[215,301],"keyphrases":["C++ language","incompatibilities","C language","object-oriented programming","class hierarchies","low-level programming","C++ libraries"],"prmu":["P","P","R","M","U","M","M"]} {"id":"1406","title":"Bluetooth bites back","abstract":"It is now more than four years since we started to hear about Bluetooth, and from the user's point of view very little seems to have happened since then. Paul Haddlesey looks at the progress, and the role Bluetooth may eventually play in your firm's communications strategy","tok_text":"bluetooth bite back \n it is now more than four year sinc we start to hear about bluetooth , and from the user 's point of view veri littl seem to have happen sinc then . 
paul haddlesey look at the progress , and the role bluetooth may eventu play in your firm 's commun strategi","ordered_present_kp":[0,263],"keyphrases":["Bluetooth","communications strategy","wireless connection","mobile"],"prmu":["P","P","U","U"]} {"id":"810","title":"Oracle's Suite grows up","abstract":"Once a low-cost Web offering, Oracle's Small Business Suite now carries a price tag to justify VAR interest","tok_text":"oracl 's suit grow up \n onc a low-cost web offer , oracl 's small busi suit now carri a price tag to justifi var interest","ordered_present_kp":[],"keyphrases":["Oracle Small Business Suite","NetLedger","accounting","resellers"],"prmu":["R","U","U","U"]} {"id":"855","title":"Support communities for women in computing","abstract":"This article highlights the many activities provided by the support communities available for women in computing. Thousands of women actively participate in these programs and they receive many benefits including networking and professional support. In addition, the organizations and associations help promote the accomplishments of women computer scientists and disseminate valuable information. This article surveys some of these organizations and concludes with a list of suggestions for how faculty members can incorporate the benefits of these organizations in their own institutions","tok_text":"support commun for women in comput \n thi articl highlight the mani activ provid by the support commun avail for women in comput . thousand of women activ particip in these program and they receiv mani benefit includ network and profession support . in addit , the organ and associ help promot the accomplish of women comput scientist and dissemin valuabl inform . 
thi articl survey some of these organ and conclud with a list of suggest for how faculti member can incorpor the benefit of these organ in their own institut","ordered_present_kp":[0,19,28,216,228,445],"keyphrases":["support communities","women","computing","networking","professional support","faculty members","information dissemination"],"prmu":["P","P","P","P","P","P","R"]} {"id":"1393","title":"ERP systems implementation: Best practices in Canadian government organizations","abstract":"ERP (Enterprise resource planning) systems implementation is a complex exercise in organizational innovation and change management. Government organizations are increasing their adoption of these systems for various benefits such as integrated real-time information, better administration, and result-based management. Government organizations, due to their social obligations, higher legislative and public accountability, and unique culture face many specific challenges in the transition to enterprise systems. This motivated the authors to explore the key considerations and typical activities in government organizations adopting ERP systems. The article adopts the innovation process theory framework as well as the (Markus & Tanis, 2000) model as a basis to delineate the ERP adoption process. Although, each adopting organization has a distinct set of objectives for its systems, the study found many similarities in motivations, concerns, and strategies across organizations","tok_text":"erp system implement : best practic in canadian govern organ \n erp ( enterpris resourc plan ) system implement is a complex exercis in organiz innov and chang manag . govern organ are increas their adopt of these system for variou benefit such as integr real-tim inform , better administr , and result-bas manag . govern organ , due to their social oblig , higher legisl and public account , and uniqu cultur face mani specif challeng in the transit to enterpris system . 
thi motiv the author to explor the key consider and typic activ in govern organ adopt erp system . the articl adopt the innov process theori framework as well as the ( marku & tani , 2000 ) model as a basi to delin the erp adopt process . although , each adopt organ ha a distinct set of object for it system , the studi found mani similar in motiv , concern , and strategi across organ","ordered_present_kp":[0,39,23,69,247,279,295,342,375,592],"keyphrases":["ERP systems implementation","best practices","Canadian government organizations","enterprise resource planning","integrated real-time information","administration","result-based management","social obligations","public accountability","innovation process theory framework","higher legislative accountability"],"prmu":["P","P","P","P","P","P","P","P","P","P","R"]} {"id":"783","title":"The network society as seen from Italy","abstract":"Italy was behind the European average in Internet development for many years, but a new trend, which has brought considerable change, emerged at the end of 1998 and showed its effects in 2000 and the following years. Now Italy is one of the top ten countries worldwide in Internet hostcount and the fourth largest in Europe. The density of Internet activity in Italy in proportion to the population is still below the average in the European Union, but is growing faster than Germany, the UK and France, and faster than the worldwide or European average. From the point of view of media control there are several problems. Italy has democratic institutions and freedom of speech, but there is an alarming concentration in the control of mainstream media (especially broadcast). 
There are no officially declared restrictions in the use of the Internet, but several legal and regulatory decisions reveal a desire to limit freedom of opinion and dialogue and\/or gain centralized control of the Net","tok_text":"the network societi as seen from itali \n itali wa behind the european averag in internet develop for mani year , but a new trend , which ha brought consider chang , emerg at the end of 1998 and show it effect in 2000 and the follow year . now itali is one of the top ten countri worldwid in internet hostcount and the fourth largest in europ . the densiti of internet activ in itali in proport to the popul is still below the averag in the european union , but is grow faster than germani , the uk and franc , and faster than the worldwid or european averag . from the point of view of media control there are sever problem . itali ha democrat institut and freedom of speech , but there is an alarm concentr in the control of mainstream media ( especi broadcast ) . there are no offici declar restrict in the use of the internet , but sever legal and regulatori decis reveal a desir to limit freedom of opinion and dialogu and\/or gain central control of the net","ordered_present_kp":[4,33,61,80,291,61,359,440,481,495,502,586,635,657,726,851,935],"keyphrases":["network society","Italy","European average","Europe","Internet development","Internet hostcount","Internet activity","European Union","Germany","UK","France","media control","democratic institutions","freedom of speech","mainstream media","regulatory decisions","centralized control","worldwide average","broadcast media","legal decisions"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R"]} {"id":"1137","title":"On deciding stability of constrained homogeneous random walks and queueing systems","abstract":"We investigate stability of scheduling policies in queueing systems. 
To this day no algorithmic characterization exists for checking stability of a given policy in a given queueing system. In this paper we introduce a certain generalized priority policy and prove that the stability of this policy is algorithmically undecidable. We also prove that stability of a homogeneous random walk in L\/sub +\/\/sup d\/ is undecidable. Finally, we show that the problem of computing a fluid limit of a queueing system or of a constrained homogeneous random walk is undecidable. To the best of our knowledge these are the first undecidability results in the area of stability of queueing systems and random walks in L\/sub +\/\/sup d\/. We conjecture that stability of common policies like First-In-First-Out and priority policy is also an undecidable problem","tok_text":"on decid stabil of constrain homogen random walk and queue system \n we investig stabil of schedul polici in queue system . to thi day no algorithm character exist for check stabil of a given polici in a given queue system . in thi paper we introduc a certain gener prioriti polici and prove that the stabil of thi polici is algorithm undecid . we also prove that stabil of a homogen random walk in l \/ sub + \/\/sup d\/ is undecid . final , we show that the problem of comput a fluid limit of a queue system or of a constrain homogen random walk is undecid . to the best of our knowledg these are the first undecid result in the area of stabil of queue system and random walk in l \/ sub + \/\/sup d\/. 
we conjectur that stabil of common polici like first-in-first-out and prioriti polici is also an undecid problem","ordered_present_kp":[53,19,259,604,265,793],"keyphrases":["constrained homogeneous random walks","queueing systems","generalized priority policy","priority policy","undecidability results","undecidable problem","scheduling policy stability","homogeneous random walk stability","fluid limit computation","first-in-first-out policy"],"prmu":["P","P","P","P","P","P","R","R","R","R"]} {"id":"1172","title":"Marble cutting with single point cutting tool and diamond segments","abstract":"An investigation has been undertaken into the frame sawing with diamond blades. The kinematic behaviour of the frame sawing process is discussed. Under different cutting conditions, cutting and indenting-cutting tests are carried out by single point cutting tools and single diamond segments. The results indicate that the depth of cut per diamond grit increases as the blades move forward. Only a few grits per segment can remove the material in the cutting process. When the direction of the stroke changes, the cutting forces do not decrease to zero because of the residual plastic deformation beneath the diamond grits. The plastic deformation and fracture chipping of material are the dominant removal processes, which can be explained by the fracture theory of brittle material indentation","tok_text":"marbl cut with singl point cut tool and diamond segment \n an investig ha been undertaken into the frame saw with diamond blade . the kinemat behaviour of the frame saw process is discuss . under differ cut condit , cut and indenting-cut test are carri out by singl point cut tool and singl diamond segment . the result indic that the depth of cut per diamond grit increas as the blade move forward . onli a few grit per segment can remov the materi in the cut process . 
when the direct of the stroke chang , the cut forc do not decreas to zero becaus of the residu plastic deform beneath the diamond grit . the plastic deform and fractur chip of materi are the domin remov process , which can be explain by the fractur theori of brittl materi indent","ordered_present_kp":[0,15,40,98,133,223,558,630,667,711,729],"keyphrases":["marble cutting","single point cutting tool","diamond segments","frame sawing","kinematic behaviour","indenting-cutting tests","residual plastic deformation","fracture chipping","removal processes","fracture theory","brittle material indentation","cutting tests"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"562","title":"The Advanced Encryption Standard - implementation and transition to a new cryptographic benchmark","abstract":"Cryptography is the science of coding information to create unintelligible ciphers that conceal or hide messages. The process that achieves this goal is commonly referred to as encryption. Although encryption processes of various forms have been employed for centuries to protect the exchange of messages, the advent of the information age has underscored the importance of strong cryptography as a process to secure data exchanged through electronic means, and has accentuated the demand for products offering these services. This article describes the process that has led to the development of the latest cryptographic benchmark; the Advanced Encryption Standard (AES). The article briefly examines the requirements set forth for its development, defines how the new standard is implemented, and describes how government, business, and industry can transition to AES with minimum impact to operations","tok_text":"the advanc encrypt standard - implement and transit to a new cryptograph benchmark \n cryptographi is the scienc of code inform to creat unintellig cipher that conceal or hide messag . the process that achiev thi goal is commonli refer to as encrypt . 
although encrypt process of variou form have been employ for centuri to protect the exchang of messag , the advent of the inform age ha underscor the import of strong cryptographi as a process to secur data exchang through electron mean , and ha accentu the demand for product offer these servic . thi articl describ the process that ha led to the develop of the latest cryptograph benchmark ; the advanc encrypt standard ( ae ) . the articl briefli examin the requir set forth for it develop , defin how the new standard is implement , and describ how govern , busi , and industri can transit to ae with minimum impact to oper","ordered_present_kp":[4,61,115,136,453,675,804,813,824],"keyphrases":["Advanced Encryption Standard","cryptographic benchmark","coding","unintelligible ciphers","data exchange","AES","government","business","industry"],"prmu":["P","P","P","P","P","P","P","P","P"]} {"id":"1236","title":"Compatibility comparison and performance evaluation for Japanese HPF compilers using scientific applications","abstract":"The lack of compatibility of High-Performance Fortran (HPF) between vender implementations has been disheartening scientific application users so as to hinder the development of portable programs. Thus parallel computing is still unpopular in the computational science community, even though parallel programming is common to the computer science community. As users would like to run the same source code on parallel machines with different architectures as fast as possible, we have investigated the compatibility of source codes for Japanese HPF compilers (NEC, Fujitsu and Hitachi) with two real-world applications: a 3D fluid code and a 2D particle code. We have found that the source-level compatibility between Japanese HPF compilers is almost preserved, but more effort will be needed to sustain complete compatibility. 
We have also evaluated parallel performance and found that HPF can achieve good performance for the 3D fluid code with almost the same source code. For the 2D particle code, good results have also been obtained with a small number of processors, but some changes in the original source code and the addition of interface blocks is required","tok_text":"compat comparison and perform evalu for japanes hpf compil use scientif applic \n the lack of compat of high-perform fortran ( hpf ) between vender implement ha been dishearten scientif applic user so as to hinder the develop of portabl program . thu parallel comput is still unpopular in the comput scienc commun , even though parallel program is common to the comput scienc commun . as user would like to run the same sourc code on parallel machin with differ architectur as fast as possibl , we have investig the compat of sourc code for japanes hpf compil ( nec , fujitsu and hitachi ) with two real-world applic : a 3d fluid code and a 2d particl code . we have found that the source-level compat between japanes hpf compil is almost preserv , but more effort will be need to sustain complet compat . we have also evalu parallel perform and found that hpf can achiev good perform for the 3d fluid code with almost the same sourc code . for the 2d particl code , good result have also been obtain with a small number of processor , but some chang in the origin sourc code and the addit of interfac block is requir","ordered_present_kp":[103,48,228,327,52,824],"keyphrases":["HPF","compilers","High-Performance Fortran","portable programs","parallel programming","parallel performance","source compatability"],"prmu":["P","P","P","P","P","P","R"]} {"id":"1273","title":"Towards an ontology of approximate reason","abstract":"This article introduces structural aspects in an ontology of approximate reason. The basic assumption in this ontology is that approximate reason is a capability of an agent. 
Agents are designed to classify information granules derived from sensors that respond to stimuli in the environment of an agent or received from other agents. Classification of information granules is carried out in the context of parameterized approximation spaces and a calculus of granules. Judgment in agents is a faculty of thinking about (classifying) the particular relative to decision rules derived from data. Judgment in agents is reflective, but not in the classical philosophical sense (e.g., the notion of judgment in Kant). In an agent, a reflective judgment itself is an assertion that a particular decision rule derived from data is applicable to an object (input). That is, a reflective judgment by an agent is an assertion that a particular vector of attribute (sensor) values matches to some degree the conditions for a particular rule. In effect, this form of judgment is an assertion that a vector of sensor values reflects a known property of data expressed by a decision rule. Since the reasoning underlying a reflective judgment is inductive and surjective (not based on a priori conditions or universals), this form of judgment is reflective, but not in the sense of Kant. Unlike Kant, a reflective judgment is surjective in the sense that it maps experimental attribute values onto the most closely matching descriptors (conditions) in a derived rule. Again, unlike Kant's notion of judgment, a reflective judgment is not the result of searching for a universal that pertains to a particular set of values of descriptors. Rather, a reflective judgment by an agent is a form of recognition that a particular vector of sensor values pertains to a particular rule in some degree. This recognition takes the form of an assertion that a particular descriptor vector is associated with a particular decision rule. 
These considerations can be repeated for other forms of classifiers besides those defined by decision rules","tok_text":"toward an ontolog of approxim reason \n thi articl introduc structur aspect in an ontolog of approxim reason . the basic assumpt in thi ontolog is that approxim reason is a capabl of an agent . agent are design to classifi inform granul deriv from sensor that respond to stimuli in the environ of an agent or receiv from other agent . classif of inform granul is carri out in the context of parameter approxim space and a calculu of granul . judgment in agent is a faculti of think about ( classifi ) the particular rel to decis rule deriv from data . judgment in agent is reflect , but not in the classic philosoph sens ( e.g. , the notion of judgment in kant ) . in an agent , a reflect judgment itself is an assert that a particular decis rule deriv from data is applic to an object ( input ) . that is , a reflect judgment by an agent is an assert that a particular vector of attribut ( sensor ) valu match to some degre the condit for a particular rule . in effect , thi form of judgment is an assert that a vector of sensor valu reflect a known properti of data express by a decis rule . sinc the reason underli a reflect judgment is induct and surject ( not base on a priori condit or univers ) , thi form of judgment is reflect , but not in the sens of kant . unlik kant , a reflect judgment is surject in the sens that it map experiment attribut valu onto the most close match descriptor ( condit ) in a deriv rule . again , unlik kant 's notion of judgment , a reflect judgment is not the result of search for a univers that pertain to a particular set of valu of descriptor . rather , a reflect judgment by an agent is a form of recognit that a particular vector of sensor valu pertain to a particular rule in some degre . thi recognit take the form of an assert that a particular descriptor vector is associ with a particular decis rule . 
these consider can be repeat for other form of classifi besid those defin by decis rule","ordered_present_kp":[10,21,222,390,229,522,680],"keyphrases":["ontology","approximate reason","information granules","granules","parameterized approximation spaces","decision rules","reflective judgment","pattern recognition","rough sets"],"prmu":["P","P","P","P","P","P","P","M","M"]} {"id":"626","title":"Approximate confidence intervals for one proportion and difference of two proportions","abstract":"Constructing a confidence interval for a binomial proportion or the difference of two proportions is a routine exercise in daily data analysis. The best-known method is the Wald interval based on the asymptotic normal approximation to the distribution of the observed sample proportion, though it is known to have bad performance for small to medium sample sizes. Agresti et al. (1998, 2000) proposed an Adding-4 method: 4 pseudo-observations are added with 2 successes and 2 failures and then the resulting (pseudo-)sample proportion is used. The method is simple and performs extremely well. Here we propose an approximate method based on a t-approximation that takes account of the uncertainty in estimating the variance of the observed (pseudo-)sample proportion. It follows the same line of using a t-test, rather than z-test, in testing the mean of a normal distribution with an unknown variance. For some circumstances our proposed method has a higher coverage probability than the Adding-4 method","tok_text":"approxim confid interv for one proport and differ of two proport \n construct a confid interv for a binomi proport or the differ of two proport is a routin exercis in daili data analysi . the best-known method is the wald interv base on the asymptot normal approxim to the distribut of the observ sampl proport , though it is known to have bad perform for small to medium sampl size . agresti et al . 
( 1998 , 2000 ) propos an adding-4 method : 4 pseudo-observ are ad with 2 success and 2 failur and then the result ( pseudo-)sampl proport is use . the method is simpl and perform extrem well . here we propos an approxim method base on a t-approxim that take account of the uncertainti in estim the varianc of the observ ( pseudo-)sampl proport . it follow the same line of use a t-test , rather than z-test , in test the mean of a normal distribut with an unknown varianc . for some circumst our propos method ha a higher coverag probabl than the adding-4 method","ordered_present_kp":[0,99,43,172,638,674,780,832,923],"keyphrases":["approximate confidence intervals","difference of two proportions","binomial proportion","data analysis","t-approximation","uncertainty","t-test","normal distribution","coverage probability","variance estimation","pseudo-sample proportion"],"prmu":["P","P","P","P","P","P","P","P","P","R","M"]} {"id":"59","title":"Efficient tracking of the cross-correlation coefficient","abstract":"In many (audio) processing algorithms, involving manipulation of discrete-time signals, the performance can vary strongly over the repertoire that is used. This may be the case when the signals from the various channels are allowed to be strongly positively or negatively correlated. We propose and analyze a general formula for tracking the (time-dependent) correlation between two signals. Some special cases of this formula lead to classical results known from the literature, others are new. This formula is recursive in nature, and uses only the instantaneous values of the two signals, in a low-cost and low-complexity manner; in particular, there is no need to take square roots or to carry out divisions. Furthermore, this formula can be modified with respect to the occurrence of the two signals so as to further decrease the complexity, and increase ease of implementation. 
The latter modification comes at the expense that not the actual correlation is tracked, but, rather, a somewhat deformed version of it. To overcome this problem, we propose, for a number of instances of the tracking formula, a simple warping operation on the deformed correlation. Now we obtain, at least for sinusoidal signals, the correct value of the correlation coefficient. Special attention is paid to the convergence behavior of the algorithm for stationary signals and the dynamic behavior if there is a transition to another stationary state; the latter is considered to be important to study the tracking abilities to nonstationary signals. We illustrate tracking algorithm by using it for stereo music fragments, obtained from a number of digital audio recordings","tok_text":"effici track of the cross-correl coeffici \n in mani ( audio ) process algorithm , involv manipul of discrete-tim signal , the perform can vari strongli over the repertoir that is use . thi may be the case when the signal from the variou channel are allow to be strongli posit or neg correl . we propos and analyz a gener formula for track the ( time-depend ) correl between two signal . some special case of thi formula lead to classic result known from the literatur , other are new . thi formula is recurs in natur , and use onli the instantan valu of the two signal , in a low-cost and low-complex manner ; in particular , there is no need to take squar root or to carri out divis . furthermor , thi formula can be modifi with respect to the occurr of the two signal so as to further decreas the complex , and increas eas of implement . the latter modif come at the expens that not the actual correl is track , but , rather , a somewhat deform version of it . to overcom thi problem , we propos , for a number of instanc of the track formula , a simpl warp oper on the deform correl . now we obtain , at least for sinusoid signal , the correct valu of the correl coeffici . 
special attent is paid to the converg behavior of the algorithm for stationari signal and the dynam behavior if there is a transit to anoth stationari state ; the latter is consid to be import to studi the track abil to nonstationari signal . we illustr track algorithm by use it for stereo music fragment , obtain from a number of digit audio record","ordered_present_kp":[0,20,100,1055,1072,1117,1207,1245,1271,1317,1397,1431,1461,1509],"keyphrases":["efficient tracking","cross-correlation coefficient","discrete-time signals","warping operation","deformed correlation","sinusoidal signals","convergence behavior","stationary signals","dynamic behavior","stationary state","nonstationary signals","tracking algorithm","stereo music fragments","digital audio recording","audio processing algorithms","time-dependent correlation","recursive formula"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R"]} {"id":"663","title":"The road ahead [supply chains]","abstract":"Executive supply chain managers, says David Metcalfe of Forrester Research, need the skills and precision of Mongolian archers on horseback. They must be able to hit their target, in this case customer demand, while moving at great speed. But what is wrong with the supply chains companies have in place already? According to Metcalfe, current manufacturing models are too inflexible. A recent survey conducted by Forrester Research supports this claim. It found that 42% of respondents could not transfer production from one plant to another in the event of a glitch in the supply chain. A further 32% said it would be possible, but extremely costly","tok_text":"the road ahead [ suppli chain ] \n execut suppli chain manag , say david metcalf of forrest research , need the skill and precis of mongolian archer on horseback . they must be abl to hit their target , in thi case custom demand , while move at great speed . but what is wrong with the suppli chain compani have in place alreadi ? 
accord to metcalf , current manufactur model are too inflex . a recent survey conduct by forrest research support thi claim . it found that 42 % of respond could not transfer product from one plant to anoth in the event of a glitch in the suppli chain . a further 32 % said it would be possibl , but extrem costli","ordered_present_kp":[17,83,401,358],"keyphrases":["supply chains","Forrester Research","manufacturing","survey","business networks"],"prmu":["P","P","P","P","U"]} {"id":"948","title":"Pairwise thermal entanglement in the n-qubit (n","abstract":"This paper describes a generalised hybrid ML-EM algorithm for the calculation of maximum likelihood estimates in semiparametric shared frailty models, the Cox proportional hazard model with hazard function multiplied by a (parametric) frailty random variable. This hybrid method is much faster than the standard EM method and faster than the standard direct maximum likelihood method (ML, Newton-Raphson) for large samples. We have previously applied this method to semiparametric shared gamma frailty models, and verified by simulations the asymptotic and small sample statistical properties of the frailty variance estimates. Let theta \/sub 0\/ be the true value of the frailty variance parameter. Then the asymptotic distribution is normal for theta \/sub 0\/>0 while it is a 50-50 mixture between a point mass at zero and a normal random variable on the positive axis for theta \/sub 0\/=0. For small samples, simulations suggest that the frailty variance estimates are approximately distributed as an x-(100-x)% mixture, 0<or=x<or=50, between a point mass at zero and a normal random variable on the positive axis even for theta \/sub 0\/>0. We apply this method and verify by simulations these statistical results for semiparametric shared log-normal frailty models. We also apply the semiparametric shared gamma and log-normal frailty models to Busselton Health Study coronary heart disease data","tok_text":"a hybrid ml-em algorithm for calcul of maximum likelihood estim in semiparametr share frailti model \n thi paper describ a generalis hybrid ml-em algorithm for the calcul of maximum likelihood estim in semiparametr share frailti model , the cox proport hazard model with hazard function multipli by a ( parametr ) frailti random variabl . thi hybrid method is much faster than the standard em method and faster than the standard direct maximum likelihood method ( ml , newton-raphson ) for larg sampl . we have previous appli thi method to semiparametr share gamma frailti model , and verifi by simul the asymptot and small sampl statist properti of the frailti varianc estim . let theta \/sub 0\/ be the true valu of the frailti varianc paramet .
then the asymptot distribut is normal for theta \/sub 0\/>0 while it is a 50 - 50 mixtur between a point mass at zero and a normal random variabl on the posit axi for theta \/sub 0\/=0 . for small sampl , simul suggest that the frailti varianc estim are approxim distribut as an x-(100-x)% mixtur , 0 < or = x < or=50 , between a point mass at zero and a normal random variabl on the posit axi even for theta \/sub 0\/>0 . we appli thi method and verifi by simul these statist result for semiparametr share log-norm frailti model . we also appli the semiparametr share gamma and log-norm frailti model to busselton health studi coronari heart diseas data","ordered_present_kp":[2,39,240,1344,1367,270,594,653,754,867,1227],"keyphrases":["hybrid ML-EM algorithm","maximum likelihood estimates","Cox proportional hazard models","hazard functions","simulations","frailty variance estimates","asymptotic distribution","normal random variable","semiparametric shared log-normal frailty models","Busselton Health Study","coronary heart disease data","data analysis","normal distribution"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","M","R"]} {"id":"661","title":"All change [agile business]","abstract":"What does it take for an organisation to become an agile business? Its employees probably need to adhere to new procurement policies, work more closely with colleagues in other departments, meet more exacting sales targets, and offer higher standards of customer service and support. In short, they need to change the way they work. Implementing technologies to support agile business models and underpin new practices is a complex task in itself. But getting employees to adopt new practices is far harder, and one that requires careful handling, says Barry O'Connell, general manager of business-to-employee (B2E) solutions at systems vendor Hewlett-Packard (HP)","tok_text":"all chang [ agil busi ] \n what doe it take for an organis to becom an agil busi ? 
it employe probabl need to adher to new procur polici , work more close with colleagu in other depart , meet more exact sale target , and offer higher standard of custom servic and support . in short , they need to chang the way they work . implement technolog to support agil busi model and underpin new practic is a complex task in itself . but get employe to adopt new practic is far harder , and one that requir care handl , say barri o'connel , gener manag of business-to-employe ( b2e ) solut at system vendor hewlett-packard ( hp )","ordered_present_kp":[12],"keyphrases":["agile business","corporate transformation","organisational change"],"prmu":["P","U","R"]} {"id":"1135","title":"A combinatorial, graph-based solution method for a class of continuous-time optimal control problems","abstract":"The paper addresses a class of continuous-time, optimal control problems whose solutions are typically characterized by both bang-bang and \"singular\" control regimes. Analytical study and numerical computation of such solutions are very difficult and far from complete when only techniques from control theory are used. This paper solves optimal control problems by reducing them to the combinatorial search for the shortest path in a specially constructed graph. Since the nodes of the graph are weighted in a sequence-dependent manner, we extend the classical, shortest-path algorithm to our case. The proposed solution method is currently limited to single-state problems with multiple control functions. A production planning problem and a train operation problem are optimally solved to illustrate the method","tok_text":"a combinatori , graph-bas solut method for a class of continuous-tim optim control problem \n the paper address a class of continuous-tim , optim control problem whose solut are typic character by both bang-bang and \" singular \" control regim . 
analyt studi and numer comput of such solut are veri difficult and far from complet when onli techniqu from control theori are use . thi paper solv optim control problem by reduc them to the combinatori search for the shortest path in a special construct graph . sinc the node of the graph are weight in a sequence-depend manner , we extend the classic , shortest-path algorithm to our case . the propos solut method is current limit to single-st problem with multipl control function . a product plan problem and a train oper problem are optim solv to illustr the method","ordered_present_kp":[54,261,435,681,704,733,760,550],"keyphrases":["continuous-time optimal control problems","numerical computation","combinatorial search","sequence-dependent manner","single-state problems","multiple control functions","production planning problem","train operation problem","combinatorial graph-based solution","bang-bang control regimes","singular control regimes","shortest path algorithm","weighted graph nodes"],"prmu":["P","P","P","P","P","P","P","P","R","R","R","R","R"]} {"id":"1170","title":"Upper bound analysis of oblique cutting with nose radius tools","abstract":"A generalized upper bound model for calculating the chip flow angle in oblique cutting using flat-faced nose radius tools is described. The projection of the uncut chip area on the rake face is divided into a number of elements parallel to an assumed chip flow direction. The length of each of these elements is used to find the length of the corresponding element on the shear surface using the ratio of the shear velocity to the chip velocity. The area of each element is found as the cross product of the length and its width along the cutting edge. Summing up the area of the elements along the shear surface, the total shear surface area is obtained. 
The friction area is calculated using the similarity between orthogonal and oblique cutting in the 'equivalent' plane that includes both the cutting velocity and chip velocity. The cutting power is obtained by summing the shear power and the friction power. The actual chip flow angle and chip velocity are obtained by minimizing the cutting power with respect to both these variables. The shape of the curved shear surface, the chip cross section and the cutting force obtained from this model are presented","tok_text":"upper bound analysi of obliqu cut with nose radiu tool \n a gener upper bound model for calcul the chip flow angl in obliqu cut use flat-fac nose radiu tool is describ . the project of the uncut chip area on the rake face is divid into a number of element parallel to an assum chip flow direct . the length of each of these element is use to find the length of the correspond element on the shear surfac use the ratio of the shear veloc to the chip veloc . the area of each element is found as the cross product of the length and it width along the cut edg . sum up the area of the element along the shear surfac , the total shear surfac area is obtain . the friction area is calcul use the similar between orthogon and obliqu cut in the ' equival ' plane that includ both the cut veloc and chip veloc . the cut power is obtain by sum the shear power and the friction power . the actual chip flow angl and chip veloc are obtain by minim the cut power with respect to both these variabl . 
the shape of the curv shear surfac , the chip cross section and the cut forc obtain from thi model are present","ordered_present_kp":[0,23,39,98,188,390,424,443,658],"keyphrases":["upper bound analysis","oblique cutting","nose radius tools","chip flow angle","uncut chip area","shear surface","shear velocity","chip velocity","friction area"],"prmu":["P","P","P","P","P","P","P","P","P"]} {"id":"560","title":"Citizen centric identity management: chip tricks?","abstract":"Accelerating and harmonizing the diffusion and acceptance of electronic services in Europe in a secure and practical way has become a priority of several initiatives in the past few years and a critical factor for citizen and business information society services. As identification and authentication is a critical element in accessing public services the combination of public key infrastructure (PKI) and smart cards emerges as the solution of choice for eGovernment in Europe. National governments and private initiatives alike vouch their support for this powerful combination to deliver an essential layer of reliable electronic services and address identity requirements in a broad range of application areas. A recent study suggests that several eGovernment implementations point to the direction of electronic citizen identity management as an up and coming challenge. The paper discusses the eGovernment needs for user identification applicability and the need for standardization","tok_text":"citizen centric ident manag : chip trick ? \n acceler and harmon the diffus and accept of electron servic in europ in a secur and practic way ha becom a prioriti of sever initi in the past few year and a critic factor for citizen and busi inform societi servic . as identif and authent is a critic element in access public servic the combin of public key infrastructur ( pki ) and smart card emerg as the solut of choic for egovern in europ . 
nation govern and privat initi alik vouch their support for thi power combin to deliv an essenti layer of reliabl electron servic and address ident requir in a broad rang of applic area . a recent studi suggest that sever egovern implement point to the direct of electron citizen ident manag as an up and come challeng . the paper discuss the egovern need for user identif applic and the need for standard","ordered_present_kp":[0,89,802,277,343,380,424,839],"keyphrases":["citizen centric identity management","electronic services","authentication","public key infrastructure","smart cards","government","user identification","standardization","business information services","legal framework","public information services"],"prmu":["P","P","P","P","P","P","P","P","R","U","R"]} {"id":"1108","title":"The visible cement data set","abstract":"With advances in x-ray microtomography, it is now possible to obtain three-dimensional representations of a material's microstructure with a voxel size of less than one micrometer. The Visible Cement Data Set represents a collection of 3-D data sets obtained using the European Synchrotron Radiation Facility in Grenoble, France in September 2000. Most of the images obtained are for hydrating portland cement pastes, with a few data sets representing hydrating Plaster of Paris and a common building brick. All of these data sets are being made available on the Visible Cement Data Set website at http:\/\/visiblecement.nist.gov. The website includes the raw 3-D datafiles, a description of the material imaged for each data set, example two-dimensional images and visualizations for each data set, and a collection of C language computer programs that will be of use in processing and analyzing the 3-D microstructural images. 
This paper provides the details of the experiments performed at the ESRF, the analysis procedures utilized in obtaining the data set files, and a few representative example images for each of the three materials investigated","tok_text":"the visibl cement data set \n with advanc in x-ray microtomographi , it is now possibl to obtain three-dimension represent of a materi 's microstructur with a voxel size of less than one micromet . the visibl cement data set repres a collect of 3-d data set obtain use the european synchrotron radiat facil in grenobl , franc in septemb 2000 . most of the imag obtain are for hydrat portland cement past , with a few data set repres hydrat plaster of pari and a common build brick . all of these data set are be made avail on the visibl cement data set websit at http:\/\/visiblecement.nist.gov . the websit includ the raw 3-d datafil , a descript of the materi imag for each data set , exampl two-dimension imag and visual for each data set , and a collect of c languag comput program that will be of use in process and analyz the 3-d microstructur imag . thi paper provid the detail of the experi perform at the esrf , the analysi procedur util in obtain the data set file , and a few repres exampl imag for each of the three materi investig","ordered_present_kp":[44,137,158,272,375,439,468,691,833,911],"keyphrases":["X-ray microtomography","microstructure","voxel size","European Synchrotron Radiation Facility","hydrating portland cement pastes","Plaster of Paris","building brick","two-dimensional images","microstructural images","ESRF","3D representations","cement hydration"],"prmu":["P","P","P","P","P","P","P","P","P","P","M","R"]} {"id":"134","title":"A model of periodic oscillation for genetic regulatory systems","abstract":"In this paper, we focus on modeling and explaining periodic oscillations in gene-protein systems with a simple nonlinear model and on analyzing effects of time delay on the stability of oscillations. 
Our main model of genetic regulation comprises of a two-gene system with an autoregulatory feedback loop. We exploit multiple time scales and hysteretic properties of the model to construct periodic oscillations with jumping dynamics and analyze the possible mechanism according to the singular perturbation theory. As shown in this paper, periodic oscillations are mainly generated by nonlinearly negative and positive feedback loops in gene regulatory systems, whereas the jumping dynamics is generally caused by time scale differences among biochemical reactions. This simple model may actually act as a genetic oscillator or switch in gene-protein networks because the dynamics are robust for parameter perturbations or environment variations. We also explore effects of time delay on the stability of the dynamics, showing that the time delay generally increases the stability region of the oscillations, thereby making the oscillations robust to parameter changes. Two examples are also provided to numerically demonstrate our theoretical results","tok_text":"a model of period oscil for genet regulatori system \n in thi paper , we focu on model and explain period oscil in gene-protein system with a simpl nonlinear model and on analyz effect of time delay on the stabil of oscil . our main model of genet regul compris of a two-gen system with an autoregulatori feedback loop . we exploit multipl time scale and hysteret properti of the model to construct period oscil with jump dynam and analyz the possibl mechan accord to the singular perturb theori . as shown in thi paper , period oscil are mainli gener by nonlinearli neg and posit feedback loop in gene regulatori system , wherea the jump dynam is gener caus by time scale differ among biochem reaction . thi simpl model may actual act as a genet oscil or switch in gene-protein network becaus the dynam are robust for paramet perturb or environ variat . 
we also explor effect of time delay on the stabil of the dynam , show that the time delay gener increas the stabil region of the oscil , therebi make the oscil robust to paramet chang . two exampl are also provid to numer demonstr our theoret result","ordered_present_kp":[2,11,114,147,187,28,266,289,354,416,471,28,685,962],"keyphrases":["modeling","periodic oscillations","genetic regulation","genetic regulatory system","gene-protein systems","nonlinear model","time delay","two-gene system","autoregulatory feedback loop","hysteretic properties","jumping dynamics","singular perturbation theory","biochemical reactions","stability region","oscillations stability","nonlinearly negative feedback loops","nonlinearly positive feedback loops","bifurcation","circadian rhythm","relaxation oscillator"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R","U","U","M"]} {"id":"977","title":"Behavior of Runge-Kutta discretizations near equilibria of index 2 differential algebraic systems","abstract":"We analyze Runge-Kutta discretizations applied to index 2 differential algebraic equations (DAE's) near equilibria. We compare the geometric properties of the numerical and the exact solutions. It is shown that projected and half-explicit Runge-Kutta methods reproduce the qualitative features of the continuous system in the vicinity of an equilibrium correctly. The proof combines cut-off and scaling techniques for index 2 differential algebraic equations with some invariant manifold results of Schropp (Geometric properties of Runge-Kutta discretizations for index 2 differential algebraic equations, Konstanzer Schriften in Mathematik und Informatik 128) and classical results for discretized ordinary differential equations","tok_text":"behavior of runge-kutta discret near equilibria of index 2 differenti algebra system \n we analyz runge-kutta discret appli to index 2 differenti algebra equat ( dae 's ) near equilibria . 
we compar the geometr properti of the numer and the exact solut . it is shown that project and half-explicit runge-kutta method reproduc the qualit featur of the continu system in the vicin of an equilibrium correctli . the proof combin cut-off and scale techniqu for index 2 differenti algebra equat with some invari manifold result of schropp ( geometr properti of runge-kutta discret for index 2 differenti algebra equat , konstanz schriften in mathematik und informatik 128 ) and classic result for discret ordinari differenti equat","ordered_present_kp":[12,51,37,202,283,350,437,499,691],"keyphrases":["Runge-Kutta discretizations","equilibria","index 2 differential algebraic systems","geometric properties","half-explicit Runge-Kutta methods","continuous system","scaling techniques","invariant manifold","discretized ordinary differential equations","cut-off techniques"],"prmu":["P","P","P","P","P","P","P","P","P","R"]} {"id":"932","title":"Modeling of torsional vibration induced by extension-twisting coupling of anisotropic composite laminates with piezoelectric actuators","abstract":"In this paper we present a dynamic analytical model for the torsional vibration of an anisotropic piezoelectric laminate induced by the extension-twisting coupling effect. In the present approach, we use the Hamilton principle and a reduced bending stiffness method for the derivation of equations of motion. As a result, the in-plane displacements are not involved and the out-of-plane displacement of the laminate is the only quantity to be calculated. Therefore, the proposed method turns the twisting of a laminate with structural coupling into a simplified problem without losing its features. We give analytical solutions of the present model with harmonic excitation. 
A parametric study is performed to demonstrate the present approach","tok_text":"model of torsion vibrat induc by extension-twist coupl of anisotrop composit lamin with piezoelectr actuat \n in thi paper we present a dynam analyt model for the torsion vibrat of an anisotrop piezoelectr lamin induc by the extension-twist coupl effect . in the present approach , we use the hamilton principl and a reduc bend stiff method for the deriv of equat of motion . as a result , the in-plan displac are not involv and the out-of-plan displac of the lamin is the onli quantiti to be calcul . therefor , the propos method turn the twist of a lamin with structur coupl into a simplifi problem without lose it featur . we give analyt solut of the present model with harmon excit . a parametr studi is perform to demonstr the present approach","ordered_present_kp":[9,58,88,135,183,224,292,316,357,393,432,43,561,672,689,68],"keyphrases":["torsional vibration","twisting","anisotropic composite laminates","composite laminate","piezoelectric actuators","dynamic analytical model","anisotropic piezoelectric laminate","extension-twisting coupling effect","Hamilton principle","reduced bending stiffness","equations of motion","in-plane displacements","out-of-plane displacement","structural coupling","harmonic excitation","parametric study","extension -twisting coupling","material anisotropy","PZT"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","M","U","U"]} {"id":"1209","title":"High-level language support for user-defined reductions","abstract":"The optimized handling of reductions on parallel supercomputers or clusters of workstations is critical to high performance because reductions are common in scientific codes and a potential source of bottlenecks. Yet in many high-level languages, a mechanism for writing efficient reductions remains surprisingly absent. 
Further, when such mechanisms do exist, they often do not provide the flexibility a programmer needs to achieve a desirable level of performance. In this paper, we present a new language construct for arbitrary reductions that lets a programmer achieve a level of performance equal to that achievable with the highly flexible, but low-level combination of Fortran and MPI. We have implemented this construct in the ZPL language and evaluate it in the context of the initialization of the NAS MG benchmark. We show a 45 times speedup over the same code written in ZPL without this construct. In addition, performance on a large number of processors surpasses that achieved in the NAS implementation showing that our mechanism provides programmers with the needed flexibility","tok_text":"high-level languag support for user-defin reduct \n the optim handl of reduct on parallel supercomput or cluster of workstat is critic to high perform becaus reduct are common in scientif code and a potenti sourc of bottleneck . yet in mani high-level languag , a mechan for write effici reduct remain surprisingli absent . further , when such mechan do exist , they often do not provid the flexibl a programm need to achiev a desir level of perform . in thi paper , we present a new languag construct for arbitrari reduct that let a programm achiev a level of perform equal to that achiev with the highli flexibl , but low-level combin of fortran and mpi . we have implement thi construct in the zpl languag and evalu it in the context of the initi of the na mg benchmark . we show a 45 time speedup over the same code written in zpl without thi construct . 
in addit , perform on a larg number of processor surpass that achiev in the na implement show that our mechan provid programm with the need flexibl","ordered_present_kp":[80,104,42,483],"keyphrases":["reductions","parallel supercomputers","clusters of workstations","language construct","parallel programming","scientific computing"],"prmu":["P","P","P","P","M","M"]} {"id":"66","title":"Regression testing of database applications","abstract":"Database applications features such as Structured Query Language or SQL, exception programming, integrity constraints, and table triggers pose difficulties for maintenance activities; especially for regression testing that follows modifications to database applications. In this work, we address these difficulties and propose a two phase regression testing methodology. In phase 1, we explore control flow and data flow analysis issues of database applications. Then, we propose an impact analysis technique that is based on dependencies that exist among the components of database applications. This analysis leads to selecting test cases from the initial test suite for regression testing the modified application. In phase 2, further reduction in the regression test cases is performed by using reduction algorithms. We present two such algorithms. The Graph Walk algorithm walks through the control flow graph of database modules and selects a safe set of test cases to retest. The Call Graph Firewall algorithm uses a firewall for the inter procedural level. Finally, a maintenance environment for database applications is described. 
Our experience with this regression testing methodology shows that the impact analysis technique is adequate for selecting regression tests and that phase 2 techniques can be used for further reduction in the number of theses tests","tok_text":"regress test of databas applic \n databas applic featur such as structur queri languag or sql , except program , integr constraint , and tabl trigger pose difficulti for mainten activ ; especi for regress test that follow modif to databas applic . in thi work , we address these difficulti and propos a two phase regress test methodolog . in phase 1 , we explor control flow and data flow analysi issu of databas applic . then , we propos an impact analysi techniqu that is base on depend that exist among the compon of databas applic . thi analysi lead to select test case from the initi test suit for regress test the modifi applic . in phase 2 , further reduct in the regress test case is perform by use reduct algorithm . we present two such algorithm . the graph walk algorithm walk through the control flow graph of databas modul and select a safe set of test case to retest . the call graph firewal algorithm use a firewal for the inter procedur level . final , a mainten environ for databas applic is describ . 
our experi with thi regress test methodolog show that the impact analysi techniqu is adequ for select regress test and that phase 2 techniqu can be use for further reduct in the number of these s test","ordered_present_kp":[16,378,706,761,799,886,441,63,89,95,112,136,302],"keyphrases":["database applications","Structured Query Language","SQL","exception programming","integrity constraints","table triggers","two phase regression testing methodology","data flow analysis","impact analysis","reduction algorithms","Graph Walk algorithm","control flow graph","Call Graph Firewall algorithm","control flow analysis"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"619","title":"Wavelet-based image segment representation","abstract":"An efficient representation method for arbitrarily shaped image segments is proposed. This method includes a smart way to select a wavelet basis to approximate the given image segment, with improved image quality and reduced computational load","tok_text":"wavelet-bas imag segment represent \n an effici represent method for arbitrarili shape imag segment is propos . thi method includ a smart way to select a wavelet basi to approxim the given imag segment , with improv imag qualiti and reduc comput load","ordered_present_kp":[12,68,153,208,232],"keyphrases":["image segment representation","arbitrarily shaped image segments","wavelet basis","improved image quality","reduced computational load","discrete wavelet transform","DWT"],"prmu":["P","P","P","P","P","M","U"]} {"id":"1439","title":"On-line robust processing techniques for elimination of measurement drop-out","abstract":"When processing measurement data, it is usually assumed that some amount of normally distributed measurement noise is present. In some situations, outliers are present in the measurements and consequently the noise is far from normally distributed. 
In this case classical least-squares procedures for estimating Fourier spectra (or derived quantities like the frequency response function) can give results which are inaccurate or even useless. In this paper, a novel technique for the on-line processing of measurement outliers will be proposed. Both the computation speed and the accuracy of the technique presented will be compared with different classical approaches for handling outliers in measurement data (i.e. filtering techniques, outlier rejection techniques and robust regression techniques). In particular, all processing techniques will be validated by applying them to the problem of speckle drop-out in optical vibration measurements (performed with a laser Doppler vibrometer), which typically causes outliers in the measurements","tok_text":"on-lin robust process techniqu for elimin of measur drop-out \n when process measur data , it is usual assum that some amount of normal distribut measur nois is present . in some situat , outlier are present in the measur and consequ the nois is far from normal distribut . in thi case classic least-squar procedur for estim fourier spectra ( or deriv quantiti like the frequenc respons function ) can give result which are inaccur or even useless . in thi paper , a novel techniqu for the on-lin process of measur outlier will be propos . both the comput speed and the accuraci of the techniqu present will be compar with differ classic approach for handl outlier in measur data ( i.e. filter techniqu , outlier reject techniqu and robust regress techniqu ) . 
in particular , all process techniqu will be valid by appli them to the problem of speckl drop-out in optic vibrat measur ( perform with a laser doppler vibromet ) , which typic caus outlier in the measur","ordered_present_kp":[0,128,285,324,369,507,548,862,899,732],"keyphrases":["on-line robust processing techniques","normally distributed measurement noise","classical least-squares procedures","Fourier spectra","frequency response function","measurement outliers","computation speed","robust regression","optical vibration measurements","laser Doppler vibrometer","measurement dropout elimination","speckle dropout","laser interferometer","modal analysis","vibration velocity","iterative technique","low-pass filtering","median filtering","signal sampling","order statistics","sinusoidal excitation","broadband excitation","frequency spectra"],"prmu":["P","P","P","P","P","P","P","P","P","P","M","M","M","U","M","M","M","M","U","U","U","U","R"]} {"id":"741","title":"Mothball mania [3G licences]","abstract":"Telefonica Moviles has frozen its 3G operations in Germany, Austria, Italy and Switzerland. With other 3G licence holders questioning the logic of entering already saturated markets with unproven technology, Emma McClune asks if the mothball effect is set to snowball any further","tok_text":"mothbal mania [ 3 g licenc ] \n telefonica movil ha frozen it 3 g oper in germani , austria , itali and switzerland . 
with other 3 g licenc holder question the logic of enter alreadi satur market with unproven technolog , emma mcclune ask if the mothbal effect is set to snowbal ani further","ordered_present_kp":[128,182,0],"keyphrases":["mothball","3G licence holders","saturated markets","mobile telephony"],"prmu":["P","P","P","U"]} {"id":"704","title":"Multicell converters: active control and observation of flying-capacitor voltages","abstract":"The multicell converters introduced more than ten years ago make it possible to distribute the voltage constraints among series-connected switches and to improve the output waveforms (increased number of levels and apparent frequency). The balance of the constraints requires an appropriate distribution of the flying voltages. This paper presents some solutions for the active control of the voltages across the flying capacitors in the presence of rapid variation of the input voltage. The latter part of this paper is dedicated to the observation of these voltages using an original modeling of the converter","tok_text":"multicel convert : activ control and observ of flying-capacitor voltag \n the multicel convert introduc more than ten year ago make it possibl to distribut the voltag constraint among series-connect switch and to improv the output waveform ( increas number of level and appar frequenc ) . the balanc of the constraint requir an appropri distribut of the fli voltag . thi paper present some solut for the activ control of the voltag across the fli capacitor in the presenc of rapid variat of the input voltag . 
the latter part of thi paper is dedic to the observ of these voltag use an origin model of the convert","ordered_present_kp":[0,19,47,183,494],"keyphrases":["multicell converters","active control","flying-capacitor voltages","series-connected switches","input voltage","Kalman filtering","multilevel systems","nonlinear systems","power electronics","power systems harmonics","output waveforms improvement"],"prmu":["P","P","P","P","P","U","U","U","U","U","R"]} {"id":"1351","title":"Analytic PCA construction for theoretical analysis of lighting variability in images of a Lambertian object","abstract":"We analyze theoretically the subspace best approximating images of a convex Lambertian object taken from the same viewpoint, but under different distant illumination conditions. We analytically construct the principal component analysis for images of a convex Lambertian object, explicitly taking attached shadows into account, and find the principal eigenmodes and eigenvalues with respect to lighting variability. Our analysis makes use of an analytic formula for the irradiance in terms of spherical-harmonic coefficients of the illumination and shows, under appropriate assumptions, that the principal components or eigenvectors are identical to the spherical harmonic basis functions evaluated at the surface normal vectors. Our main contribution is in extending these results to the single-viewpoint case, showing how the principal eigenmodes and eigenvalues are affected when only a limited subset (the upper hemisphere) of normals is available and the spherical harmonics are no longer orthonormal over the restricted domain. 
Our results are very close, both qualitatively and quantitatively, to previous empirical observations and represent the first essentially complete theoretical explanation of these observations","tok_text":"analyt pca construct for theoret analysi of light variabl in imag of a lambertian object \n we analyz theoret the subspac best approxim imag of a convex lambertian object taken from the same viewpoint , but under differ distant illumin condit . we analyt construct the princip compon analysi for imag of a convex lambertian object , explicitli take attach shadow into account , and find the princip eigenmod and eigenvalu with respect to light variabl . our analysi make use of an analyt formula for the irradi in term of spherical-harmon coeffici of the illumin and show , under appropri assumpt , that the princip compon or eigenvector are ident to the spheric harmon basi function evalu at the surfac normal vector . our main contribut is in extend these result to the single-viewpoint case , show how the princip eigenmod and eigenvalu are affect when onli a limit subset ( the upper hemispher ) of normal is avail and the spheric harmon are no longer orthonorm over the restrict domain . our result are veri close , both qualit and quantit , to previou empir observ and repres the first essenti complet theoret explan of these observ","ordered_present_kp":[654,44,145,696,390,503],"keyphrases":["lighting variability","convex Lambertian object","principal eigenmodes","irradiance","spherical harmonics","surface normal vectors","analytic principal component analysis","five-dimensional subspace","principal eigenvalues","radiance"],"prmu":["P","P","P","P","P","P","R","M","R","U"]} {"id":"1314","title":"Multi-timescale Internet traffic engineering","abstract":"The Internet is a collection of packet-based hop-by-hop routed networks. Internet traffic engineering is the process of allocating resources to meet the performance requirements of users and operators for their traffic. 
Current mechanisms for doing so, exemplified by TCP's congestion control or the variety of packet marking disciplines, concentrate on allocating resources on a per-packet basis or at data timescales. This article motivates the need for traffic engineering in the Internet at other timescales, namely control and management timescales, and presents three mechanisms for this. It also presents a scenario to show how these mechanisms increase the flexibility of operators' service offerings and potentially also ease problems of Internet management","tok_text":"multi-timescal internet traffic engin \n the internet is a collect of packet-bas hop-by-hop rout network . internet traffic engin is the process of alloc resourc to meet the perform requir of user and oper for their traffic . current mechan for do so , exemplifi by tcp 's congest control or the varieti of packet mark disciplin , concentr on alloc resourc on a per-packet basi or at data timescal . thi articl motiv the need for traffic engin in the internet at other timescal , name control and manag timescal , and present three mechan for thi . 
it also present a scenario to show how these mechan increas the flexibl of oper ' servic offer and potenti also eas problem of internet manag","ordered_present_kp":[0,69,306,675],"keyphrases":["multi-timescale Internet traffic engineering","packet-based hop-by-hop routed networks","packet marking disciplines","Internet management","TCP congestion control","resource allocation","control timescale","operator services","admission control","ECN proxy","BGP routing protocol"],"prmu":["P","P","P","P","R","R","R","R","M","U","M"]} {"id":"897","title":"Optimization of advertising expenses in the functioning of an insurance company","abstract":"With the use of Pontryagin's maximum principle, a problem of optimal time distribution of advertising expenses in the functioning of an insurance company is solved","tok_text":"optim of advertis expens in the function of an insur compani \n with the use of pontryagin 's maximum principl , a problem of optim time distribut of advertis expens in the function of an insur compani is solv","ordered_present_kp":[0,9,47,125],"keyphrases":["optimization","advertising expenses","insurance company","optimal time distribution","Pontryagin maximum principle","differential equations"],"prmu":["P","P","P","P","R","U"]} {"id":"1050","title":"Secrets of the Glasgow Haskell compiler inliner","abstract":"Higher-order languages such as Haskell encourage the programmer to build abstractions by composing functions. A good compiler must inline many of these calls to recover an efficiently executable program. In principle, inlining is dead simple: just replace the call of a function by an instance of its body. But any compiler-writer will tell you that inlining is a black art, full of delicate compromises that work together to give good performance without unnecessary code bloat. The purpose of this paper is, therefore, to articulate the key lessons we learned from a full-scale \"production\" inliner, the one used in the Glasgow Haskell compiler. 
We focus mainly on the algorithmic aspects, but we also provide some indicative measurements to substantiate the importance of various aspects of the inliner","tok_text":"secret of the glasgow haskel compil inlin \n higher-ord languag such as haskel encourag the programm to build abstract by compos function . a good compil must inlin mani of these call to recov an effici execut program . in principl , inlin is dead simpl : just replac the call of a function by an instanc of it bodi . but ani compiler-writ will tell you that inlin is a black art , full of delic compromis that work togeth to give good perform without unnecessari code bloat . the purpos of thi paper is , therefor , to articul the key lesson we learn from a full-scal \" product \" inlin , the one use in the glasgow haskel compil . we focu mainli on the algorithm aspect , but we also provid some indic measur to substanti the import of variou aspect of the inlin","ordered_present_kp":[14,44,109,202,435,653],"keyphrases":["Glasgow Haskell compiler inliner","higher-order languages","abstractions","executable program","performance","algorithmic aspects","functional programming","functional language","optimising compiler"],"prmu":["P","P","P","P","P","P","R","R","M"]} {"id":"1015","title":"Scalable techniques from nonparametric statistics for real time robot learning","abstract":"Locally weighted learning (LWL) is a class of techniques from nonparametric statistics that provides useful representations and training algorithms for learning about complex phenomena during autonomous adaptive control of robotic systems. The paper introduces several LWL algorithms that have been tested successfully in real-time learning of complex robot tasks. We discuss two major classes of LWL, memory-based LWL and purely incremental LWL that does not need to remember any data explicitly. 
In contrast to the traditional belief that LWL methods cannot work well in high-dimensional spaces, we provide new algorithms that have been tested on up to 90 dimensional learning problems. The applicability of our LWL algorithms is demonstrated in various robot learning examples, including the learning of devil-sticking, pole-balancing by a humanoid robot arm, and inverse-dynamics learning for a seven and a 30 degree-of-freedom robot. In all these examples, the application of our statistical neural networks techniques allowed either faster or more accurate acquisition of motor control than classical control engineering","tok_text":"scalabl techniqu from nonparametr statist for real time robot learn \n local weight learn ( lwl ) is a class of techniqu from nonparametr statist that provid use represent and train algorithm for learn about complex phenomena dure autonom adapt control of robot system . the paper introduc sever lwl algorithm that have been test success in real-tim learn of complex robot task . we discuss two major class of lwl , memory-bas lwl and pure increment lwl that doe not need to rememb ani data explicitli . in contrast to the tradit belief that lwl method can not work well in high-dimension space , we provid new algorithm that have been test on up to 90 dimension learn problem . the applic of our lwl algorithm is demonstr in variou robot learn exampl , includ the learn of devil-stick , pole-balanc by a humanoid robot arm , and inverse-dynam learn for a seven and a 30 degree-of-freedom robot . 
in all these exampl , the applic of our statist neural network techniqu allow either faster or more accur acquisit of motor control than classic control engin","ordered_present_kp":[0,22,46,70,175,207,230,773,787,804,829,936],"keyphrases":["scalable techniques","nonparametric statistics","real time robot learning","locally weighted learning","training algorithms","complex phenomena","autonomous adaptive control","devil-sticking","pole-balancing","humanoid robot arm","inverse-dynamics learning","statistical neural networks techniques","memory-based learning","purely incremental learning","nonparametric regression"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R","R","M"]} {"id":"1268","title":"Reachability in contextual nets","abstract":"Contextual nets, or Petri nets with read arcs, are models of concurrent systems with context dependent actions. The problem of reachability in such nets consists in finding a sequence of transitions that leads from the initial marking of a given contextual net to a given goal marking. The solution to this problem that is presented in this paper consists in constructing a finite complete prefix of the unfolding of the given contextual net, that is a finite prefix in which all the markings that are reachable from the initial marking are present, and in searching in each branch of this prefix for the goal marking by solving an appropriate linear programming problem","tok_text":"reachabl in contextu net \n contextu net , or petri net with read arc , are model of concurr system with context depend action . the problem of reachabl in such net consist in find a sequenc of transit that lead from the initi mark of a given contextu net to a given goal mark . 
the solut to thi problem that is present in thi paper consist in construct a finit complet prefix of the unfold of the given contextu net , that is a finit prefix in which all the mark that are reachabl from the initi mark are present , and in search in each branch of thi prefix for the goal mark by solv an appropri linear program problem","ordered_present_kp":[45,84,104,428,266,596],"keyphrases":["Petri nets","concurrent systems","context dependent actions","goal marking","finite prefix","linear programming","contextual nets reachability"],"prmu":["P","P","P","P","P","P","R"]} {"id":"678","title":"Marketing in CSIR libraries and information centres: a study on promotional efforts","abstract":"This paper examines the attitudes of librarians towards the promotional aspects in several CSIR libraries and information centres of India. The issues related to promotional activities of these libraries have been evaluated to determine the extent to which they are being practised. Librarians hold positive attitudes about promotional aspects of libraries and often practise them without knowing they are practising marketing concepts. Suggestions and strategies for improving the promotional activities in libraries and information services are put forth so as to meet the information needs and demands of clientele","tok_text":"market in csir librari and inform centr : a studi on promot effort \n thi paper examin the attitud of librarian toward the promot aspect in sever csir librari and inform centr of india . the issu relat to promot activ of these librari have been evalu to determin the extent to which they are be practis . librarian hold posit attitud about promot aspect of librari and often practis them without know they are practis market concept . 
suggest and strategi for improv the promot activ in librari and inform servic are put forth so as to meet the inform need and demand of clientel","ordered_present_kp":[10,27,178,204,0,544],"keyphrases":["marketing","CSIR libraries","information centres","India","promotional activities","information needs"],"prmu":["P","P","P","P","P","P"]} {"id":"110","title":"A switching synchronization scheme for a class of chaotic systems","abstract":"In this Letter, we propose an observer-based synchronization scheme for a class of chaotic systems. This class of systems are given by piecewise-linear dynamics. By using some properties of such systems, we give a procedure to construct the gain of the observer. We prove various stability results and comment on the robustness of the proposed scheme. We also present some simulation results","tok_text":"a switch synchron scheme for a class of chaotic system \n in thi letter , we propos an observer-bas synchron scheme for a class of chaotic system . thi class of system are given by piecewise-linear dynam . by use some properti of such system , we give a procedur to construct the gain of the observ . we prove variou stabil result and comment on the robust of the propos scheme . we also present some simul result","ordered_present_kp":[2,40,180,349],"keyphrases":["switching synchronization scheme","chaotic systems","piecewise-linear dynamics","robustness","state observers"],"prmu":["P","P","P","P","M"]} {"id":"953","title":"Take it to the next level [law firm innovation]","abstract":"It's called innovating. Our clients do it. Our culture worships it. Our future hinges on it. Why is it so difficult in law firms? How can we make it easier? Viva la difference!","tok_text":"take it to the next level [ law firm innov ] \n it 's call innov . our client do it . our cultur worship it . our futur hing on it . whi is it so difficult in law firm ? how can we make it easier ? 
viva la differ !","ordered_present_kp":[37,28],"keyphrases":["law firms","innovation"],"prmu":["P","P"]} {"id":"916","title":"Attribute generation based on association rules","abstract":"A decision tree is considered to be appropriate (1) if the tree can classify the unseen data accurately, and (2) if the size of the tree is small. One of the approaches to induce such a good decision tree is to add new attributes and their values to enhance the expressiveness of the training data at the data pre-processing stage. There are many existing methods for attribute extraction and construction, but constructing new attributes is still an art. These methods are very time consuming, and some of them need a priori knowledge of the data domain. They are not suitable for data mining dealing with large volumes of data. We propose a novel approach that the knowledge on attributes relevant to the class is extracted as association rules from the training data. The new attributes and the values are generated from the association rules among the originally given attributes. We elaborate on the method and investigate its feature. The effectiveness of our approach is demonstrated through some experiments","tok_text":"attribut gener base on associ rule \n a decis tree is consid to be appropri ( 1 ) if the tree can classifi the unseen data accur , and ( 2 ) if the size of the tree is small . one of the approach to induc such a good decis tree is to add new attribut and their valu to enhanc the express of the train data at the data pre-process stage . there are mani exist method for attribut extract and construct , but construct new attribut is still an art . these method are veri time consum , and some of them need a priori knowledg of the data domain . they are not suitabl for data mine deal with larg volum of data . we propos a novel approach that the knowledg on attribut relev to the class is extract as associ rule from the train data . 
the new attribut and the valu are gener from the associ rule among the origin given attribut . we elabor on the method and investig it featur . the effect of our approach is demonstr through some experi","ordered_present_kp":[0,23,39,294,369,569,930],"keyphrases":["attribute generation","association rules","decision tree","training data","attribute extraction","data mining","experiments","large database"],"prmu":["P","P","P","P","P","P","P","M"]} {"id":"584","title":"Hybrid fuzzy modeling of chemical processes","abstract":"Fuzzy models have been proved to have the ability of modeling all plants without any priori information. However, the performance of conventional fuzzy models can be very poor in the case of insufficient training data due to their poor extrapolation capacity. In order to overcome this problem, a hybrid grey-box fuzzy modeling approach is proposed in this paper to combine expert experience, local linear models and historical data into a uniform framework. It consists of two layers. The expert fuzzy model constructed from linguistic information, the local linear model and the T-S type fuzzy model constructed from data are all put in the first layer. Layer 2 is a fuzzy decision module that is used to decide which model in the first layer should be employed to make the final prediction. The output of the second layer is the output of the hybrid fuzzy model. With the help of the linguistic information, the poor extrapolation capacity problem caused by sparse training data for conventional fuzzy models can be overcome. Simulation result for pH neutralization process demonstrates its modeling ability over the linear models, the expert fuzzy model and the conventional fuzzy model","tok_text":"hybrid fuzzi model of chemic process \n fuzzi model have been prove to have the abil of model all plant without ani priori inform . howev , the perform of convent fuzzi model can be veri poor in the case of insuffici train data due to their poor extrapol capac . 
in order to overcom thi problem , a hybrid grey-box fuzzi model approach is propos in thi paper to combin expert experi , local linear model and histor data into a uniform framework . it consist of two layer . the expert fuzzi model construct from linguist inform , the local linear model and the t- type fuzzi model construct from data are all put in the first layer . layer 2 is a fuzzi decis modul that is use to decid which model in the first layer should be employ to make the final predict . the output of the second layer is the output of the hybrid fuzzi model . with the help of the linguist inform , the poor extrapol capac problem caus by spars train data for convent fuzzi model can be overcom . simul result for ph neutral process demonstr it model abil over the linear model , the expert fuzzi model and the convent fuzzi model","ordered_present_kp":[7,22,476,645],"keyphrases":["fuzzy modeling","chemical processes","expert fuzzy model","fuzzy decision module","process modeling"],"prmu":["P","P","P","P","R"]} {"id":"1194","title":"New methods for oscillatory problems based on classical codes","abstract":"The numerical integration of differential equations with oscillatory solutions is a very common problem in many fields of the applied sciences. Some methods have been specially devised for this kind of problem. In most of them, the calculation of the coefficients needs more computational effort than the classical codes because such coefficients depend on the step-size in a not simple manner. On the contrary, in this work we present new algorithms specially designed for perturbed oscillators whose coefficients have a simple dependence on the step-size. The methods obtained are competitive when comparing with classical and special codes","tok_text":"new method for oscillatori problem base on classic code \n the numer integr of differenti equat with oscillatori solut is a veri common problem in mani field of the appli scienc . 
some method have been special devis for thi kind of problem . in most of them , the calcul of the coeffici need more comput effort than the classic code becaus such coeffici depend on the step-siz in a not simpl manner . on the contrari , in thi work we present new algorithm special design for perturb oscil whose coeffici have a simpl depend on the step-siz . the method obtain are competit when compar with classic and special code","ordered_present_kp":[15,43,62,78,100,474],"keyphrases":["oscillatory problems","classical codes","numerical integration","differential equations","oscillatory solutions","perturbed oscillators"],"prmu":["P","P","P","P","P","P"]} {"id":"1169","title":"An efficient algorithm for sequential generation of failure states in a network with multi-mode components","abstract":"In this work, a new algorithm for the sequential generation of failure states in a network with multi-mode components is proposed. The algorithm presented in the paper transforms the state enumeration problem into a K-shortest paths problem. Taking advantage of the inherent efficiency of an algorithm for shortest paths enumeration and also of the characteristics of the reliability problem in which it will be used, an algorithm with lower complexity than the best algorithm in the literature for solving this problem, was obtained. Computational results will be presented for comparing the efficiency of both algorithms in terms of CPU time and for problems of different size","tok_text":"an effici algorithm for sequenti gener of failur state in a network with multi-mod compon \n in thi work , a new algorithm for the sequenti gener of failur state in a network with multi-mod compon is propos . the algorithm present in the paper transform the state enumer problem into a k-shortest path problem . 
take advantag of the inher effici of an algorithm for shortest path enumer and also of the characterist of the reliabl problem in which it will be use , an algorithm with lower complex than the best algorithm in the literatur for solv thi problem , wa obtain . comput result will be present for compar the effici of both algorithm in term of cpu time and for problem of differ size","ordered_present_kp":[257,285,653],"keyphrases":["state enumeration problem","K-shortest paths problem","CPU time","multi-mode components reliability","sequential failure states generation algorithm","network failure states"],"prmu":["P","P","P","R","R","R"]} {"id":"579","title":"Steinmetz system design under unbalanced conditions","abstract":"This paper studies and develops general analytical expressions to obtain three-phase current symmetrization under unbalanced voltage conditions. It proposes two procedures for this symmetrization: the application of the traditional expressions assuming symmetry conditions and the use of optimization methods based on the general analytical equations. Specifically, the paper applies and evaluates these methods to analyze the Steinmetz system design. Several graphics evaluating the error introduced by assumption of balanced voltage in the design are plotted and an example is studied to compare both procedures. In the example the necessity to apply the optimization techniques in highly unbalanced conditions is demonstrated","tok_text":"steinmetz system design under unbalanc condit \n thi paper studi and develop gener analyt express to obtain three-phas current symmetr under unbalanc voltag condit . it propos two procedur for thi symmetr : the applic of the tradit express assum symmetri condit and the use of optim method base on the gener analyt equat . specif , the paper appli and evalu these method to analyz the steinmetz system design . 
sever graphic evalu the error introduc by assumpt of balanc voltag in the design are plot and an exampl is studi to compar both procedur . in the exampl the necess to appli the optim techniqu in highli unbalanc condit is demonstr","ordered_present_kp":[107,140,0,276,301],"keyphrases":["Steinmetz system design","three-phase current symmetrization","unbalanced voltage conditions","optimization methods","general analytical equations","power system control design","balanced voltage assumption"],"prmu":["P","P","P","P","P","M","R"]} {"id":"685","title":"Robotically enhanced placement of left ventricular epicardial electrodes during implantation of a biventricular implantable cardioverter defibrillator system","abstract":"Biventricular pacing has gained increasing acceptance in advanced heart failure patients. One major limitation of this therapy is positioning the left ventricular stimulation lead via the coronary sinus. This report demonstrates the feasibility of totally endoscopic direct placement of an epicardial stimulation lead on the left ventricle using the daVinci surgical system","tok_text":"robot enhanc placement of left ventricular epicardi electrod dure implant of a biventricular implant cardiovert defibril system \n biventricular pace ha gain increas accept in advanc heart failur patient . one major limit of thi therapi is posit the left ventricular stimul lead via the coronari sinu . 
thi report demonstr the feasibl of total endoscop direct placement of an epicardi stimul lead on the left ventricl use the davinci surgic system","ordered_present_kp":[26,286,425,337,175],"keyphrases":["left ventricular epicardial electrodes","advanced heart failure patients","coronary sinus","totally endoscopic direct placement","daVinci surgical system","epicardial leads","left ventricular pacing","biventricular implantable cardioverter defibrillator system implantation","left ventricular stimulation lead positioning"],"prmu":["P","P","P","P","P","R","R","R","R"]} {"id":"1295","title":"Development of visual design steering as an aid in large-scale multidisciplinary design optimization. II. Method validation","abstract":"For pt. I see ibid., pp. 412-24. Graph morphing, the first concept developed under the newly proposed paradigm of visual design steering (VDS), is applied to optimal design problems. Graph morphing, described in Part I of this paper, can be used to provide insights to a designer to improve efficiency, reliability, and accuracy of an optimal design in less cycle time. It is demonstrated in this part of the paper that graph morphing can be used to provide insights into design variable impact, constraint redundancy, reasonable values for constraint allowable limits, and function smoothness, that otherwise might not be attainable","tok_text":"develop of visual design steer as an aid in large-scal multidisciplinari design optim . ii . method valid \n for pt . i see ibid . , pp . 412 - 24 . graph morph , the first concept develop under the newli propos paradigm of visual design steer ( vd ) , is appli to optim design problem . graph morph , describ in part i of thi paper , can be use to provid insight to a design to improv effici , reliabl , and accuraci of an optim design in less cycl time . 
it is demonstr in thi part of the paper that graph morph can be use to provid insight into design variabl impact , constraint redund , reason valu for constraint allow limit , and function smooth , that otherwis might not be attain","ordered_present_kp":[11,44,93,148,264,394,408,444,547,571,607,636],"keyphrases":["visual design steering","large-scale multidisciplinary design optimization","method validation","graph morphing","optimal design problems","reliability","accuracy","cycle time","design variable impact","constraint redundancy","constraint allowable limits","function smoothness"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1074","title":"Inhibiting decoherence via ancilla processes","abstract":"General conditions are derived for preventing the decoherence of a single two-state quantum system (qubit) in a thermal bath. The employed auxiliary systems required for this purpose are merely assumed to be weak for the general condition while various examples such as extra qubits and extra classical fields are studied for applications in quantum information processing. The general condition is confirmed by well known approaches toward inhibiting decoherence. An approach to decoherence-free quantum memories and quantum operations is presented by placing the qubit into the center of a sphere with extra qubits on its surface","tok_text":"inhibit decoher via ancilla process \n gener condit are deriv for prevent the decoher of a singl two-stat quantum system ( qubit ) in a thermal bath . the employ auxiliari system requir for thi purpos are mere assum to be weak for the gener condit while variou exampl such as extra qubit and extra classic field are studi for applic in quantum inform process . the gener condit is confirm by well known approach toward inhibit decoher . 
an approach to decoherence-fre quantum memori and quantum oper is present by place the qubit into the center of a sphere with extra qubit on it surfac","ordered_present_kp":[20,8,90,122,135,161,275,291,335,38,451,486],"keyphrases":["decoherence","ancilla processes","general condition","single two-state quantum system","qubit","thermal bath","auxiliary systems","extra qubits","extra classical fields","quantum information processing","decoherence-free quantum memories","quantum operations","decoherence inhibition","sphere surface"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R","R"]} {"id":"1031","title":"Noise-constrained hyperspectral data compression","abstract":"Storage and transmission requirements for hyperspectral data sets are significant. To reduce hardware costs, well-designed compression techniques are needed to preserve information content while maximizing compression ratios. Lossless compression techniques maintain data integrity, but yield small compression ratios. We present a slightly lossy compression algorithm that uses the noise statistics of the data to preserve information content while maximizing compression ratios. The adaptive principal components analysis (APCA) algorithm uses noise statistics to determine the number of significant principal components and selects only those that are required to represent each pixel to within the noise level. We demonstrate the effectiveness of these methods with airborne visible\/infrared spectrometer (AVIRIS), hyperspectral digital imagery collection experiment (HYDICE), hyperspectral mapper (HYMAP), and Hyperion datasets","tok_text":"noise-constrain hyperspectr data compress \n storag and transmiss requir for hyperspectr data set are signific . to reduc hardwar cost , well-design compress techniqu are need to preserv inform content while maxim compress ratio . lossless compress techniqu maintain data integr , but yield small compress ratio . 
we present a slightli lossi compress algorithm that use the nois statist of the data to preserv inform content while maxim compress ratio . the adapt princip compon analysi ( apca ) algorithm use nois statist to determin the number of signific princip compon and select onli those that are requir to repres each pixel to within the nois level . we demonstr the effect of these method with airborn visibl \/ infrar spectromet ( aviri ) , hyperspectr digit imageri collect experi ( hydic ) , hyperspectr mapper ( hymap ) , and hyperion dataset","ordered_present_kp":[0,55,76,121,186,213,230,266,326,373,645,802,823,837],"keyphrases":["noise-constrained hyperspectral data compression","transmission requirements","hyperspectral data sets","hardware costs","information content","compression ratios","lossless compression techniques","data integrity","slightly lossy compression algorithm","noise statistics","noise level","hyperspectral mapper","HYMAP","Hyperion datasets","storage requirements","adaptive principal components analysis algorithm","airborne visible\/infrared spectrometer hyperspectral digital imagery collection experiment","AVIRIS HYDICE","Gaussian statistics"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R","R","M"]} {"id":"798","title":"ClioWeb, ClioRequest, and Clio database: enhancing patron and staff satisfaction","abstract":"Faced with increased demand from students and faculty for a speedier and more user-friendly method of obtaining materials from other institutions, the interlibrary loan (ILL) department sought to implement a management system which would accomplish the task. Students wanted remote interconnectivity to the system and staff wanted increased workflow efficiency, reduced paper work, and better data management. 
This paper focuses on Washington College's experience in selecting and implementing an interlibrary loan system, which would enhance student satisfaction as well as that of the library staff","tok_text":"clioweb , cliorequest , and clio databas : enhanc patron and staff satisfact \n face with increas demand from student and faculti for a speedier and more user-friendli method of obtain materi from other institut , the interlibrari loan ( ill ) depart sought to implement a manag system which would accomplish the task . student want remot interconnect to the system and staff want increas workflow effici , reduc paper work , and better data manag . thi paper focus on washington colleg 's experi in select and implement an interlibrari loan system , which would enhanc student satisfact as well as that of the librari staff","ordered_present_kp":[28,10,0,61,121,109,153,272,332,388,436,468],"keyphrases":["ClioWeb","ClioRequest","Clio database","staff satisfaction","students","faculty","user-friendly method","management system","remote interconnectivity","workflow efficiency","data management","Washington College","patron satisfaction","interlibrary loan department"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R","R"]}
{"id":"765","title":"Simulating fermions on a quantum computer","abstract":"The real-time probabilistic simulation of quantum systems in classical computers is known to be limited by the so-called dynamical sign problem, a problem leading to exponential complexity. In 1981 Richard Feynman raised some provocative questions in connection to the \"exact imitation\" of such systems using a special device named a \"quantum computer\". Feynman hesitated about the possibility of imitating fermion systems using such a device. Here we address some of his concerns and, in particular, investigate the simulation of fermionic systems. We show how quantum computers avoid the sign problem in some cases by reducing the complexity from exponential to polynomial. Our demonstration is based upon the use of isomorphisms of algebras. We present specific quantum algorithms that illustrate the main points of our algebraic approach","tok_text":"simul fermion on a quantum comput \n the real-tim probabilist simul of quantum system in classic comput is known to be limit by the so-cal dynam sign problem , a problem lead to exponenti complex . in 1981 richard feynman rais some provoc question in connect to the \" exact imit \" of such system use a special devic name a \" quantum comput \" . feynman hesit about the possibl of imit fermion system use such a devic . here we address some of hi concern and , in particular , investig the simul of fermion system . we show how quantum comput avoid the sign problem in some case by reduc the complex from exponenti to polynomi . our demonstr is base upon the use of isomorph of algebra . we present specif quantum algorithm that illustr the main point of our algebra approach","ordered_present_kp":[19,40,88,138,177,383,144,663,675],"keyphrases":["quantum computer","real-time probabilistic simulation","classical computers","dynamical sign problem","sign problem","exponential complexity","fermion systems","isomorphisms","algebras","fermions simulation"],"prmu":["P","P","P","P","P","P","P","P","P","R"]}
{"id":"720","title":"19in monitors [CRT survey]","abstract":"Upgrade your monitor from as little as Pounds 135. With displays on test and ranging up to Pounds 400, whether you're after the last word in quality or simply looking for again, this Labs holds the answer. Looks at ADI MicroScan M900, CTX PR960F, Eizo FlexScan T766, Hansol 920D, Hansol920P, Hitachi CM715ET, Hitachi CM721FET, liyama Vision Master Pro 454, LG Flatron 915FT Plus, Mitsubishi Diamond Pro 920, NEC MultiSync FE950+, Philips 109S40, Samsung SyncMaster 959NF, Sony Multiscan CPD-G420, and ViewSonic G90f","tok_text":"19 in monitor [ crt survey ] \n upgrad your monitor from as littl as pound 135 . \n with display on test and rang up to pound 400 , whether you 're after the last word in qualiti or simpli look for again , thi lab hold the answer . look at adi microscan m900 , ctx pr960f , eizo flexscan t766 , hansol 920d , hansol920p , hitachi cm715et , hitachi cm721fet , liyama vision master pro 454 , lg flatron 915ft plu , mitsubishi diamond pro 920 , nec multisync fe950 + , philip 109s40 , samsung syncmast 959nf , soni multiscan cpd-g420 , and viewson g90f","ordered_present_kp":[0,16,236,257,270,291,305,318,336,355,386,409,438,462,478,503,533,0,0],"keyphrases":["19in monitors","19 in","CRT survey","ADI MicroScan M900","CTX PR960F","Eizo FlexScan T766","Hansol 920D","Hansol920P","Hitachi CM715ET","Hitachi CM721FET","liyama Vision Master Pro 454","LG Flatron 915FT Plus","Mitsubishi Diamond Pro 920","NEC MultiSync FE950","Philips 109S40","Samsung SyncMaster 959NF","Sony Multiscan CPD-G420","ViewSonic G90f"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P"]}
{"id":"1375","title":"Evaluation of the usability of digital maintenance manuals developed without either user input or a task analysis","abstract":"The primary objective was to investigate the value that can be added to a low-cost digital maintenance manual by the addition of a navigational aid. Two versions of a digital maintenance manual were developed, the difference between them being the number of design heuristics observed when designing navigational aids. Neither version was based on an analysis of the tasks carried out by users, nor were users involved in the design process. Instead, the manuals were developed directly from the digital information used to produce the existing paper manual. Usability trials were carried out to test both versions according to the time taken and errors committed by users during typical information retrieval tasks. Users were questioned to determine their ease of use (EOU) perceptions for each manual. The main outcomes were that the navigation aid used in the second version reduced the time taken to use the manual but increased the number of errors made by users. The navigational aid also seemed to reduce the perceived EOU compared with the first version. In both cases, the perceived EOU was lower than for a previous digital manual that had been developed using a task analysis and user input. The paper concludes by recommending the development of a generic task model of user interaction with digital maintenance manuals","tok_text":"evalu of the usabl of digit mainten manual develop without either user input or a task analysi \n the primari object wa to investig the valu that can be ad to a low-cost digit mainten manual by the addit of a navig aid . two version of a digit mainten manual were develop , the differ between them be the number of design heurist observ when design navig aid . neither version wa base on an analysi of the task carri out by user , nor were user involv in the design process . instead , the manual were develop directli from the digit inform use to produc the exist paper manual . usabl trial were carri out to test both version accord to the time taken and error commit by user dure typic inform retriev task . user were question to determin their eas of use ( eou ) percept for each manual . the main outcom were that the navig aid use in the second version reduc the time taken to use the manual but increas the number of error made by user . the navig aid also seem to reduc the perceiv eou compar with the first version . in both case , the perceiv eou wa lower than for a previou digit manual that had been develop use a task analysi and user input . \n the paper conclud by recommend the develop of a gener task model of user interact with digit mainten manual","ordered_present_kp":[208,579,688,1203,1223,82],"keyphrases":["task analysis","navigational aid","usability trials","information retrieval","generic task model","user interaction","digital maintenance manuals usability"],"prmu":["P","P","P","P","P","P","R"]}
{"id":"1330","title":"Strobbe Graphics' next frontier: CTP for commercial printers","abstract":"Strobbe is one of the more successful makers of newspaper platesetters, which are sold by Agfa under the Polaris name. But the company also has a growing presence in commercial printing markets, where it sells under its own name","tok_text":"strobb graphic ' next frontier : ctp for commerci printer \n strobb is one of the more success maker of newspap platesett , which are sold by agfa under the polari name . but the compani also ha a grow presenc in commerci print market , where it sell under it own name","ordered_present_kp":[0,141,41,156,111],"keyphrases":["Strobbe Graphics","commercial printing","platesetters","Agfa","Polaris","Punch International","workflow"],"prmu":["P","P","P","P","P","U","U"]}
{"id":"1458","title":"Direct gear tooth contact analysis for hypoid bevel gears","abstract":"A new methodology for tooth contact analysis based on a very general mathematical model of the generating process is proposed. Considering the line of action as a first order singularity of a certain operator equation we develop first and second order conditions for a pair of generated gear tooth flanks to be in contact. The constructive approach allows the direct computation of the paths of contact as the solution of a nonlinear equation system including the exact determination of the bounds of the paths of contact. The transmission error as well as curvature properties in the contact points are obtained in a convenient way. The resulting contact ellipses approximate the bearing area. Through the use of automatic differentiation all the geometric quantities are calculable within the machine accuracy of the computer","tok_text":"direct gear tooth contact analysi for hypoid bevel gear \n a new methodolog for tooth contact analysi base on a veri gener mathemat model of the gener process is propos . consid the line of action as a first order singular of a certain oper equat we develop first and second order condit for a pair of gener gear tooth flank to be in contact . the construct approach allow the direct comput of the path of contact as the solut of a nonlinear equat system includ the exact determin of the bound of the path of contact . the transmiss error as well as curvatur properti in the contact point are obtain in a conveni way . the result contact ellips approxim the bear area . through the use of automat differenti all the geometr quantiti are calcul within the machin accuraci of the comput","ordered_present_kp":[0,38,122,144,201,235,267,301,431,522,549,629,657,688,715,754,383],"keyphrases":["direct gear tooth contact analysis","hypoid bevel gears","mathematical model","generating process","first order singularity","operator equation","second order conditions","generated gear tooth flanks","computer","nonlinear equation system","transmission error","curvature properties","contact ellipses","bearing area","automatic differentiation","geometric quantities","machine accuracy","first order conditions","contact paths","exact bound determination"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R"]}
{"id":"1089","title":"Accuracy and stability of splitting with Stabilizing Corrections","abstract":"This paper contains a convergence analysis for the method of stabilizing corrections, which is an internally consistent splitting scheme for initial-boundary value problems. To obtain more accuracy and a better treatment of explicit terms several extensions are regarded and analyzed. The relevance of the theoretical results is tested for convection-diffusion-reaction equations","tok_text":"accuraci and stabil of split with stabil correct \n thi paper contain a converg analysi for the method of stabil correct , which is an intern consist split scheme for initial-boundari valu problem . to obtain more accuraci and a better treatment of explicit term sever extens are regard and analyz . the relev of the theoret result is test for convection-diffusion-react equat","ordered_present_kp":[13,71,34,149,166,343],"keyphrases":["stability","stabilizing corrections","convergence analysis","splitting scheme","initial-boundary value problems","convection-diffusion-reaction equations"],"prmu":["P","P","P","P","P","P"]}
{"id":"758","title":"Four-terminal quantum resistor network for electron-wave computing","abstract":"Interconnected ultrathin conducting wires or, equivalently, interconnected quasi-one-dimensional electron waveguides, which form a quantum resistor network, are presented here in four-terminal configurations. The transmission behaviors through such four-terminal networks are evaluated and classified. In addition, we show that such networks can be used as the basic building blocks for a possible massive wave computing machine in the future. In a network, each interconnection, a node point, is an elastic scatterer that routes the electron wave. Routing and rerouting of electron waves in a network is described in the framework of quantum transport from Landauer-Buttiker theory in the presence of multiple elastic scatterers. Transmissions through various types of four-terminal generalized clean Aharonov-Bohm rings are investigated at zero temperature. Useful logic functions are gathered based on the transmission probability to each terminal with the use of the Buttiker symmetry rule. In the generalized rings, even and odd numbers of terminals can possess some distinctly different transmission characteristics as we have shown here and earlier. Just as an even or odd number of atoms in a ring is an important quantity for classifying the transmission behavior, we show here that whether the number of terminals is an even or an odd number is just as important in understanding the physics of transmission through such a ring. Furthermore, we show that there are three basic classes of four-terminal rings and the scaling relation for each class is provided. In particular, the existence of equitransmission among all four terminals is shown here. This particular physical phenomena cannot exist in any three-terminal ring. Comparisons and discussions of transmission characteristics between three-terminal and four-terminal rings are also presented. The node-equation approach by considering the Kirchhoff current conservation law at each node point is used for this analysis. Many useful logic functions for electron-wave computing are shown here. In particular, we show that a full adder can be constructed very simply using the equitransmission property of the four-terminal ring. This is in sharp contrast with circuits based on transistor logic","tok_text":"four-termin quantum resistor network for electron-wav comput \n interconnect ultrathin conduct wire or , equival , interconnect quasi-one-dimension electron waveguid , which form a quantum resistor network , are present here in four-termin configur . the transmiss behavior through such four-termin network are evalu and classifi . in addit , we show that such network can be use as the basic build block for a possibl massiv wave comput machin in the futur . in a network , each interconnect , a node point , is an elast scatter that rout the electron wave . rout and rerout of electron wave in a network is describ in the framework of quantum transport from landauer-buttik theori in the presenc of multipl elast scatter . transmiss through variou type of four-termin gener clean aharonov-bohm ring are investig at zero temperatur . \n use logic function are gather base on the transmiss probabl to each termin with the use of the buttik symmetri rule . in the gener ring , even and odd number of termin can possess some distinctli differ transmiss characterist as we have shown here and earlier . just as an even or odd number of atom in a ring is an import quantiti for classifi the transmiss behavior , we show here that whether the number of termin is an even or an odd number is just as import in understand the physic of transmiss through such a ring . furthermor , we show that there are three basic class of four-termin ring and the scale relat for each class is provid . in particular , the exist of equitransmiss among all four termin is shown here . thi particular physic phenomena can not exist in ani three-termin ring . comparison and discuss of transmiss characterist between three-termin and four-termin ring are also present . the node-equ approach by consid the kirchhoff current conserv law at each node point is use for thi analysi . mani use logic function for electron-wav comput are shown here . in particular , we show that a full adder can be construct veri simpli use the equitransmiss properti of the four-termin ring . \n thi is in sharp contrast with circuit base on transistor logic","ordered_present_kp":[0,41,63,568,659,700,781,838,876,929,254,1778,1996],"keyphrases":["four-terminal quantum resistor network","electron-wave computing","interconnected ultrathin conducting wires","transmission behavior","rerouting","Landauer-Buttiker theory","multiple elastic scatterers","Aharonov-Bohm rings","logic functions","transmission probability","Buttiker symmetry rule","Kirchhoff current conservation law","equitransmission property","quasi1D electron waveguides"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","M"]}
{"id":"1348","title":"Reconstructing surfaces by volumetric regularization using radial basis functions","abstract":"We present a new method of surface reconstruction that generates smooth and seamless models from sparse, noisy, nonuniform, and low resolution range data. Data acquisition techniques from computer vision, such as stereo range images and space carving, produce 3D point sets that are imprecise and nonuniform when compared to laser or optical range scanners. Traditional reconstruction algorithms designed for dense and precise data do not produce smooth reconstructions when applied to vision-based data sets. Our method constructs a 3D implicit surface, formulated as a sum of weighted radial basis functions. We achieve three primary advantages over existing algorithms: (1) the implicit functions we construct estimate the surface well in regions where there is little data, (2) the reconstructed surface is insensitive to noise in data acquisition because we can allow the surface to approximate, rather than exactly interpolate, the data, and (3) the reconstructed surface is locally detailed, yet globally smooth, because we use radial basis functions that achieve multiple orders of smoothness","tok_text":"reconstruct surfac by volumetr regular use radial basi function \n we present a new method of surfac reconstruct that gener smooth and seamless model from spars , noisi , nonuniform , and low resolut rang data . data acquisit techniqu from comput vision , such as stereo rang imag and space carv , produc 3d point set that are imprecis and nonuniform when compar to laser or optic rang scanner . tradit reconstruct algorithm design for dens and precis data do not produc smooth reconstruct when appli to vision-bas data set . our method construct a 3d implicit surfac , formul as a sum of weight radial basi function . \n we achiev three primari advantag over exist algorithm : ( 1 ) the implicit function we construct estim the surfac well in region where there is littl data , ( 2 ) the reconstruct surfac is insensit to nois in data acquisit becaus we can allow the surfac to approxim , rather than exactli interpol , the data , and ( 3 ) the reconstruct surfac is local detail , yet global smooth , becaus we use radial basi function that achiev multipl order of smooth","ordered_present_kp":[93,22,43,187,211,239,263,284,304,503,548,588],"keyphrases":["volumetric regularization","radial basis functions","surfaces reconstruction","low resolution range data","data acquisition techniques","computer vision","stereo range images","space carving","3D point sets","vision-based data sets","3D implicit surface","weighted radial basis functions","sparse range data","noisy data","nonuniform data"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R","R","R"]}
{"id":"1420","title":"PDF subscriptions bolster revenue","abstract":"In 1999 SD Times offered prospective subscribers the option of receiving their issues as Adobe Acrobat PDF files. What set the proposal apart from what other publishers were doing electronically on the Web was that readers would get the entire version of the paper-including both advertising and editorial just as it looked when it was laid out and went to press. SD Times is only one of a small, but growing, number of publications that are taking on the electronic world and finding success. In the past six months alone, the New York Times, Popular Mechanics, trade magazine Electronic Buyers' News, and the Harvard Business Review have launched digital versions of their newspapers and magazines to augment their online and print versions. The reasons are as varied as the publishers themselves. Some companies are finding that readers don't like their Web-based versions either due to poor navigation or missing graphics and images. Others want to expand their publications nationally and internationally, but don't want the added cost of postage and printing. Still others are looking for ways to give advertisers additional visibility and boost advertising and subscription revenues. No matter what the reason, it's a trend worth watching","tok_text":"pdf subscript bolster revenu \n in 1999 sd time offer prospect subscrib the option of receiv their issu as adob acrobat pdf file . what set the propos apart from what other publish were do electron on the web wa that reader would get the entir version of the paper-includ both advertis and editori just as it look when it wa laid out and went to press . sd time is onli one of a small , but grow , number of public that are take on the electron world and find success . in the past six month alon , the new york time , popular mechan , trade magazin electron buyer ' news , and the harvard busi review have launch digit version of their newspap and magazin to augment their onlin and print version . the reason are as vari as the publish themselv . some compani are find that reader do n't like their web-bas version either due to poor navig or miss graphic and imag . other want to expand their public nation and intern , but do n't want the ad cost of postag and print . still other are look for way to give advertis addit visibl and boost advertis and subscript revenu . \n no matter what the reason , it 's a trend worth watch","ordered_present_kp":[0,39,106,636,613,541],"keyphrases":["PDF subscriptions","SD Times","Adobe Acrobat PDF files","magazines","digital versions","newspaper","electronic issue"],"prmu":["P","P","P","P","P","P","R"]}
{"id":"836","title":"Recruitment and retention of women graduate students in computer science and engineering: results of a workshop organized by the Computing Research Association","abstract":"This document is the report of a workshop that convened a group of experts to discuss the recruitment and retention of women in computer science and engineering (CSE) graduate programs. Participants included long-time members of the CSE academic and research communities, social scientists engaged in relevant research, and directors of successful retention efforts. The report is a compendium of the experience and expertise of workshop participants, rather than the result of a full-scale, scholarly study into the range of issues. Its goal is to provide departments with practical advice on recruitment and retention in the form of a set of specific recommendations","tok_text":"recruit and retent of women graduat student in comput scienc and engin : result of a workshop organ by the comput research associ \n thi document is the report of a workshop that conven a group of expert to discuss the recruit and retent of women in comput scienc and engin ( cse ) graduat program . particip includ long-tim member of the cse academ and research commun , social scientist engag in relev research , and director of success retent effort . the report is a compendium of the experi and expertis of workshop particip , rather than the result of a full-scal , scholarli studi into the rang of issu . \n it goal is to provid depart with practic advic on recruit and retent in the form of a set of specif recommend","ordered_present_kp":[0,12,22,47,65,107,371,353,418,511],"keyphrases":["recruitment","retention","women graduate students","computer science","engineering","Computing Research Association","research communities","social scientists","directors","workshop participants","academic communities"],"prmu":["P","P","P","P","P","P","P","P","P","P","R"]}
{"id":"873","title":"Programmatic efforts encouraging women to enter the information technology workforce","abstract":"For over a decade the National Science Foundation (NSF) has been supporting projects designed to improve opportunities for women in computing. From an initial emphasis on increasing the number of women in graduate school studying computer science and engineering, NSF's current emphasis has broadened to include research studies examining the underlying reasons why women are underrepresented in the information technology (IT) workforce. This paper describes the recent history of NSF's activities in this area and the subsequent emergence of a research portfolio addressing the underrepresentation issue","tok_text":"programmat effort encourag women to enter the inform technolog workforc \n for over a decad the nation scienc foundat ( nsf ) ha been support project design to improv opportun for women in comput . from an initi emphasi on increas the number of women in graduat school studi comput scienc and engin , nsf 's current emphasi ha broaden to includ research studi examin the underli reason whi women are underrepres in the inform technolog ( it ) workforc . \n thi paper describ the recent histori of nsf 's activ in thi area and the subsequ emerg of a research portfolio address the underrepresent issu","ordered_present_kp":[95,27,188,253,292,482],"keyphrases":["women","National Science Foundation","computing","graduate school","engineering","history","IT workforce"],"prmu":["P","P","P","P","P","P","R"]}
{"id":"1049","title":"A typed representation for HTML and XML documents in Haskell","abstract":"We define a family of embedded domain specific languages for generating HTML and XML documents. Each language is implemented as a combinator library in Haskell. The generated HTML\/XML documents are guaranteed to be well-formed. In addition, each library can guarantee that the generated documents are valid XML documents to a certain extent (for HTML only a weaker guarantee is possible). On top of the libraries, Haskell serves as a meta language to define parameterized documents, to map structured documents to HTML\/XML, to define conditional content, or to define entire Web sites. The combinator libraries support element-transforming style, a programming style that allows programs to have a visual appearance similar to HTML\/XML documents, without modifying the syntax of Haskell","tok_text":"a type represent for html and xml document in haskel \n we defin a famili of embed domain specif languag for gener html and xml document . each languag is implement as a combin librari in haskel . the gener html \/ xml document are guarante to be well-form . in addit , each librari can guarante that the gener document are valid xml document to a certain extent ( for html onli a weaker guarante is possibl ) . on top of the librari , haskel serv as a meta languag to defin parameter document , to map structur document to html \/ xml , to defin condit content , or to defin entir web site . \n the combin librari support element-transform style , a program style that allow program to have a visual appear similar to html \/ xml document , without modifi the syntax of haskel","ordered_present_kp":[2,169,451,473,544,579,617,754,30,46,76],"keyphrases":["typed representation","XML documents","Haskell","embedded domain specific languages","combinator library","meta language","parameterized documents","conditional content","Web sites","element-transforming style","syntax","HTML documents","software libraries","functional programming"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R","M","M"]}
{"id":"1111","title":"The contiguity in R\/M","abstract":"An r.e. degree c is contiguous if deg\/sub wtt\/(A)=deg\/sub wtt\/(B) for any r.e. sets A,B in c. In this paper, we generalize the notation of contiguity to the structure R\/M, the upper semilattice of the r.e. degree set R modulo the cappable r.e. degree set M. An element [c] in R\/M is contiguous if [deg\/sub wtt\/(A)]=[deg\/sub wtt\/(B)] for any r.e. sets A, B such that deg\/sub T\/(A),deg\/sub T\/(B) in [c]. It is proved in this paper that every nonzero element in R\/M is not contiguous, i.e., for every element [c] in R\/M, if [c] not=[o] then there exist at least two r.e. sets A, B such that deg\/sub T\/(A), deg\/sub T\/(B) in [c] and [deg\/sub wtt\/(A)] not=[deg\/sub wtt\/(B)]","tok_text":"the contigu in r \/ m \n an r.e . degre c is contigu if deg \/ sub wtt\/(a)=deg \/ sub wtt\/(b ) for ani r.e . set a , b in c. in thi paper , we gener the notat of contigu to the structur r \/ m , the upper semilattic of the r.e . degre set r modulo the cappabl r.e . degre set m. an element [ c ] in r \/ m is contigu if [ deg \/ sub wtt\/(a)]=[deg \/ sub wtt\/(b ) ] for ani r.e . set a , b such that deg \/ sub t\/(a),deg \/ sub t\/(b ) in [ c ] . it is prove in thi paper that everi nonzero element in r \/ m is not contigu , i.e. , for everi element [ c ] in r \/ m , if [ c ] not=[o ] then there exist at least two r.e . \n set a , b such that deg \/ sub t\/(a ) , deg \/ sub t\/(b ) in [ c ] and [ deg \/ sub wtt\/(a ) ] not=[deg \/ sub wtt\/(b ) ]","ordered_present_kp":[4,194,471],"keyphrases":["contiguity","upper semilattice","nonzero element","Turing degree","recursively enumerable set","recursion theory"],"prmu":["P","P","P","M","M","U"]}
{"id":"1154","title":"The effect of voxel size on the accuracy of dose-volume histograms of prostate \/sup 125\/I seed implants","abstract":"Cumulative dose-volume histograms (DVH) are crucial in evaluating the quality of radioactive seed prostate implants. When calculating DVHs, the choice of voxel size is a compromise between computational speed (larger voxels) and accuracy (smaller voxels). We quantified the effect of voxel size on the accuracy of DVHs using an in-house computer program. The program was validated by comparison with a hand-calculated DVH for a single 0.4-U iodine-125 model 6711 seed. We used the program to find the voxel size required to obtain accurate DVHs of five iodine-125 prostate implant patients at our institution. One-millimeter cubes were sufficient to obtain DVHs that are accurate within 5% up to 200% of the prescription dose. For the five patient plans, we obtained good agreement with the VariSeed (version 6.7, Varian, USA) treatment planning software's DVH algorithm by using voxels with a sup-inf dimension equal to the spacing between successive transverse seed implant planes (5 mm). The volume that receives at least 200% of the target dose, V\/sub 200\/, calculated by VariSeed was 30% to 43% larger than that calculated by our program with small voxels. The single-seed DVH calculated by VariSeed fell below the hand calculation by up to 50% at low doses (30 Gy), and above it by over 50% at high doses (>250 Gy)","tok_text":"the effect of voxel size on the accuraci of dose-volum histogram of prostat \/sup 125 \/ i seed implant \n cumul dose-volum histogram ( dvh ) are crucial in evalu the qualiti of radioact seed prostat implant . when calcul dvh , the choic of voxel size is a compromis between comput speed ( larger voxel ) and accuraci ( smaller voxel ) . we quantifi the effect of voxel size on the accuraci of dvh use an in-hous comput program . the program wa valid by comparison with a hand-calcul dvh for a singl 0.4-u iodine-125 model 6711 seed . we use the program to find the voxel size requir to obtain accur dvh of five iodine-125 prostat implant patient at our institut . one-millimet cube were suffici to obtain dvh that are accur within 5 % up to 200 % of the prescript dose . for the five patient plan , we obtain good agreement with the varise ( version 6.7 , varian , usa ) treatment plan softwar 's dvh algorithm by use voxel with a sup-inf dimens equal to the space between success transvers seed implant plane ( 5 mm ) . the volum that receiv at least 200 % of the target dose , v \/ sub 200\/ , calcul by varise wa 30 % to 43 % larger than that calcul by our program with small voxel . \n the single-se dvh calcul by varise fell below the hand calcul by up to 50 % at low dose ( 30 gy ) , and abov it by over 50 % at high dose ( > 250 gy )","ordered_present_kp":[104,68,175,14,272,402,21],"keyphrases":["voxel size","I","prostate \/sup 125\/I seed implants","cumulative dose-volume histograms","radioactive seed prostate implants","computational speed","in-house computer program","hand-calculated dose-volume histograms","single-seed dose-volume histograms","\/sup 125\/I model","\/sup 125\/I prostate implant patients","VariSeed treatment planning software's dose-volume histogram algorithm"],"prmu":["P","P","P","P","P","P","P","R","R","R","R","R"]}
{"id":"993","title":"A large deviations analysis of the transient of a queue with many Markov fluid inputs: approximations and fast simulation","abstract":"This article analyzes the transient buffer content distribution of a queue fed by a large number of Markov fluid sources. We characterize the probability of overflow at time t, given the current buffer level and the number of sources in the on-state. After scaling buffer and bandwidth resources by the number of sources n, we can apply large deviations techniques. The transient overflow probability decays exponentially in n. In the case of exponential on\/off sources, we derive an expression for the decay rate of the rare event probability under consideration. For general Markov fluid sources, we present a plausible conjecture. We also provide the \"most likely path\" from the initial state to overflow (at time t). Knowledge of the decay rate and the most likely path to overflow leads to (i) approximations of the transient overflow probability and (ii) efficient simulation methods of the rare event of buffer overflow. The simulation methods, based on importance sampling, give a huge speed-up compared to straightforward simulations. The approximations are of low computational complexity and are accurate, as verified by means of simulation experiments","tok_text":"a larg deviat analysi of the transient of a queue with mani markov fluid input : approxim and fast simul \n thi articl analyz the transient buffer content distribut of a queue fed by a larg number of markov fluid sourc . we character the probabl of overflow at time t , given the current buffer level and the number of sourc in the on-stat . after scale buffer and bandwidth resourc by the number of sourc n , we can appli larg deviat techniqu . the transient overflow probabl decay exponenti in n. in the case of exponenti on \/ off sourc , we deriv an express for the decay rate of the rare event probabl under consider . for gener markov fluid sourc , we present a plausibl conjectur . we also provid the \" most like path \" from the initi state to overflow ( at time t ) . knowledg of the decay rate and the most like path to overflow lead to ( i ) approxim of the transient overflow probabl and ( ii ) effici simul method of the rare event of buffer overflow . the simul method , base on import sampl , give a huge speed-up compar to straightforward simul . the approxim are of low comput complex and are accur , as verifi by mean of simul experi","ordered_present_kp":[2,60,129,364,81,449,911,990,1084],"keyphrases":["large deviations analysis","Markov fluid inputs","approximations","transient buffer content distribution","bandwidth resources","transient overflow probability","simulation methods","importance sampling","computational complexity","buffer resources","ATM multiplexers","IP routers","queuing theory"],"prmu":["P","P","P","P","P","P","P","P","P","R","U","U","U"]}
{"id":"544","title":"Virtual reality treatment of flying phobia","abstract":"Flying phobia (FP) might become a very incapacitating and disturbing problem in a person's social, working, and private areas. Psychological interventions based on exposure therapy have proved to be effective, but given the particular nature of this disorder they bear important limitations. Exposure therapy for FP might be excessively costly in terms of time, money, and efforts. Virtual reality (VR) overcomes these difficulties as different significant environments might be created, where the patient can interact with what he or she fears while in a totally safe and protected environment, the therapist's consulting room. This paper intends, on one hand, to show the different scenarios designed by our team for the VR treatment of FP, and on the other, to present the first results supporting the effectiveness of this new tool for the treatment of FP in a multiple baseline study","tok_text":"virtual realiti treatment of fli phobia \n fli phobia ( fp ) might becom a veri incapacit and disturb problem in a person 's social , work , and privat area . psycholog intervent base on exposur therapi have prove to be effect , but given the particular natur of thi disord they bear import limit . exposur therapi for fp might be excess costli in term of time , money , and effort . virtual realiti ( vr ) overcom these difficulti as differ signific environ might be creat , where the patient can interact with what he or she fear while in a total safe and protect environ , the therapist 's consult room . 
thi paper intend , on one hand , to show the differ scenario design by our team for the vr treatment of fp , and on the other , to present the first result support the effect of thi new tool for the treatment of fp in a multipl baselin studi","ordered_present_kp":[158,29,158,186],"keyphrases":["flying phobia","psychology","psychological interventions","exposure therapy","medical virtual reality","patient treatment","anxiety disorders","virtual exposure"],"prmu":["P","P","P","P","M","R","M","R"]} {"id":"82","title":"Bit-serial AB\/sup 2\/ multiplier using modified inner product","abstract":"This paper presents a new multiplication algorithm and, based on this algorithm, proposes a hardware architecture, called modified inner-product multiplier (MIPM), which computes AB\/sup 2\/ multiplication based on a linear feedback shift register (LFSR). The algorithm is based on the property of the irreducible all one polynomial (AOP) over the finite field GF(2\/sup m\/). The proposed architecture reduces the time and space complexity for computing AB\/sup 2\/. The proposed architecture has a potential application to implementing exponentiation architecture for a public-key cryptosystem","tok_text":"bit-seri ab \/ sup 2\/ multipli use modifi inner product \n thi paper present a new multipl algorithm and , base on thi algorithm , propos a hardwar architectur , call modifi inner-product multipli ( mipm ) , which comput ab \/ sup 2\/ multipl base on a linear feedback shift regist ( lfsr ) . the algorithm is base on the properti of the irreduc all one polynomi ( aop ) over the finit field gf(2 \/ sup m\/ ) . the propos architectur reduc the time and space complex for comput ab \/ sup 2\/. 
the propos architectur ha a potenti applic to implement exponenti architectur for a public-key cryptosystem","ordered_present_kp":[0,34,81,138,165,249,334,448,570],"keyphrases":["bit-serial AB\/sup 2\/ multiplier","modified inner product","multiplication algorithm","hardware architecture","modified inner-product multiplier","linear feedback shift register","irreducible all one polynomial","space complexity","public-key cryptosystem","time complexity"],"prmu":["P","P","P","P","P","P","P","P","P","R"]} {"id":"1210","title":"Adaptive optimizing compilers for the 21st century","abstract":"Historically, compilers have operated by applying a fixed set of optimizations in a predetermined order. We call such an ordered list of optimizations a compilation sequence. This paper describes a prototype system that uses biased random search to discover a program-specific compilation sequence that minimizes an explicit, external objective function. The result is a compiler framework that adapts its behavior to the application being compiled, to the pool of available transformations, to the objective function, and to the target machine. This paper describes experiments that attempt to characterize the space that the adaptive compiler must search. The preliminary results suggest that optimal solutions are rare and that local minima are frequent. If this holds true, biased random searches, such as a,genetic algorithm, should find good solutions more quickly than simpler strategies, such as hill climbing","tok_text":"adapt optim compil for the 21st centuri \n histor , compil have oper by appli a fix set of optim in a predetermin order . we call such an order list of optim a compil sequenc . thi paper describ a prototyp system that use bias random search to discov a program-specif compil sequenc that minim an explicit , extern object function . 
the result is a compil framework that adapt it behavior to the applic be compil , to the pool of avail transform , to the object function , and to the target machin . thi paper describ experi that attempt to character the space that the adapt compil must search . the preliminari result suggest that optim solut are rare and that local minima are frequent . if thi hold true , bias random search , such as a , genet algorithm , should find good solut more quickli than simpler strategi , such as hill climb","ordered_present_kp":[12,6,159,569,6,221],"keyphrases":["optimizations","optimizing compilers","compilers","compilation sequence","biased random search","adaptive compiler","configurable compilers"],"prmu":["P","P","P","P","P","P","M"]} {"id":"1255","title":"Succession in standardization: grafting XML onto SGML","abstract":"Succession in standardization is often a problem. The advantages of improvements must be weighed against those of compatibility. If compatibility considerations dominate, a grafting process takes place. According to our taxonomy of succession, there are three types of outcomes. A Type I succession, where grafting is successful, entails compatibility between successors, technical paradigm compliance and continuity in the standards trajectory. In this paper, we examine issues of succession and focus on the Extensible Markup Language (XML). It was to be grafted on the Standard Generalized Markup Language (SGML), a stable standard since 1988. However, XML was a profile, a subset and an extension of SGML (1988). Adaptation of SGML was needed (SGML 1999) to forge full (downward) compatibility with XML (1998). We describe the grafting efforts and analyze their outcomes. Our conclusion is that although SGML was a technical exemplar for XML developers, full compatibility was not achieved. The widespread use of HyperText Mark-up Language (HTML) exemplified the desirability of simplicity in XML, standardization. 
This and HTML's user market largely explain the discontinuity in SGML-XML succession","tok_text":"success in standard : graft xml onto sgml \n success in standard is often a problem . the advantag of improv must be weigh against those of compat . if compat consider domin , a graft process take place . accord to our taxonomi of success , there are three type of outcom . a type i success , where graft is success , entail compat between successor , technic paradigm complianc and continu in the standard trajectori . in thi paper , we examin issu of success and focu on the extens markup languag ( xml ) . it wa to be graft on the standard gener markup languag ( sgml ) , a stabl standard sinc 1988 . howev , xml wa a profil , a subset and an extens of sgml ( 1988 ) . adapt of sgml wa need ( sgml 1999 ) to forg full ( downward ) compat with xml ( 1998 ) . we describ the graft effort and analyz their outcom . our conclus is that although sgml wa a technic exemplar for xml develop , full compat wa not achiev . the widespread use of hypertext mark-up languag ( html ) exemplifi the desir of simplic in xml , standard . thi and html 's user market larg explain the discontinu in sgml-xml success","ordered_present_kp":[28,37,11,177,275,476,533],"keyphrases":["standardization","XML","SGML","grafting process","Type I succession","Extensible Markup Language","Standard Generalized Markup Language"],"prmu":["P","P","P","P","P","P","P"]} {"id":"600","title":"Development of railway VR safety simulation system","abstract":"Abnormal conditions occur in railway transportation due to trouble or accidents and it affects a number of passengers. It is very important, therefore, to quickly recover and return to normal train operation. For this purpose we developed a system, \"Computer VR Simulation System for the Safety of Railway Transportation.\" It is a new type simulation system to evaluate measures to be taken under abnormal conditions. 
Users of this simulation system cooperate with one another to correct the abnormal conditions that have occurred in virtual reality. This paper reports the newly developed simulation system","tok_text":"develop of railway vr safeti simul system \n abnorm condit occur in railway transport due to troubl or accid and it affect a number of passeng . it is veri import , therefor , to quickli recov and return to normal train oper . for thi purpos we develop a system , \" comput vr simul system for the safeti of railway transport . \" it is a new type simul system to evalu measur to be taken under abnorm condit . user of thi simul system cooper with one anoth to correct the abnorm condit that have occur in virtual realiti . thi paper report the newli develop simul system","ordered_present_kp":[67,102,206,265],"keyphrases":["railway transportation","accidents","normal train operation","Computer VR Simulation System","virtual reality simulation system","abnormal conditions correction"],"prmu":["P","P","P","P","R","R"]} {"id":"645","title":"Oxygen-enhanced MRI of the brain","abstract":"Blood oxygenation level-dependent (BOLD) contrast MRI is a potential method for a physiological characterization of tissue beyond mere morphological representation. The purpose of this study was to develop evaluation techniques for such examinations using a hyperoxia challenge. Administration of pure oxygen was applied to test these techniques, as pure oxygen can be expected to induce relatively small signal intensity (SI) changes compared to CO\/sub 2\/-containing gases and thus requires very sensitive evaluation methods. Fourteen volunteers were investigated by alternating between breathing 100% O\/sub 2\/ and normal air, using two different paradigms of administration. Changes ranged from >30% in large veins to 1.71%+or-0.14% in basal ganglia and 0.82%+or-0.08% in white matter. 
To account for a slow physiological response function, a reference for correlation analysis was derived from the venous reaction. An objective method is presented that allows the adaptation of the significance threshold to the complexity of the paradigm used. Reference signal characteristics in representative brain tissue regions were established. As the presented evaluation scheme proved its applicability to small SI changes induced by pure oxygen, it can readily be used for similar experiments with other gases","tok_text":"oxygen-enhanc mri of the brain \n blood oxygen level-depend ( bold ) contrast mri is a potenti method for a physiolog character of tissu beyond mere morpholog represent . the purpos of thi studi wa to develop evalu techniqu for such examin use a hyperoxia challeng . administr of pure oxygen wa appli to test these techniqu , as pure oxygen can be expect to induc rel small signal intens ( si ) chang compar to co \/ sub 2\/-contain gase and thu requir veri sensit evalu method . fourteen volunt were investig by altern between breath 100 % o \/ sub 2\/ and normal air , use two differ paradigm of administr . chang rang from > 30 % in larg vein to 1.71%+or-0.14 % in basal ganglia and 0.82%+or-0.08 % in white matter . to account for a slow physiolog respons function , a refer for correl analysi wa deriv from the venou reaction . an object method is present that allow the adapt of the signific threshold to the complex of the paradigm use . refer signal characterist in repres brain tissu region were establish . 
as the present evalu scheme prove it applic to small si chang induc by pure oxygen , it can readili be use for similar experi with other gase","ordered_present_kp":[0,245,25,737,778,811,884],"keyphrases":["oxygen-enhanced MRI","brain","hyperoxia","physiological response function","correlation analysis","venous reaction","significance threshold","BOLD contrast MRI","oxygen breathing","normal air breathing","paradigm complexity","MRI contrast agent","functional imaging","Fourier transform Analysis"],"prmu":["P","P","P","P","P","P","P","R","R","R","R","M","M","M"]} {"id":"1396","title":"Construction of double sampling s-control charts for agile manufacturing","abstract":"Double sampling (DS) X-control charts are designed to allow quick detection of a small shift of process mean and provides a quick response in an agile manufacturing environment. However, the DS X-control charts assume that the process standard deviation remains unchanged throughout the entire course of the statistical process control. Therefore, a complementary DS chart that can be used to monitor the process variation caused by changes in process standard deviation should be developed. In this paper, the development of the DS s-charts for quickly detecting small shift in process standard deviation for agile manufacturing is presented. The construction of the DS s-charts is based on the same concepts in constructing the DS X-charts and is formulated as an optimization problem and solved with a genetic algorithm. The efficiency of the DS s-control chart is compared with that of the traditional s-control chart. The results show that the DS s-control charts can be a more economically preferable alternative in detecting small shifts than traditional s-control charts","tok_text":"construct of doubl sampl s-control chart for agil manufactur \n doubl sampl ( ds ) x-control chart are design to allow quick detect of a small shift of process mean and provid a quick respons in an agil manufactur environ . 
howev , the ds x-control chart assum that the process standard deviat remain unchang throughout the entir cours of the statist process control . therefor , a complementari ds chart that can be use to monitor the process variat caus by chang in process standard deviat should be develop . in thi paper , the develop of the ds s-chart for quickli detect small shift in process standard deviat for agil manufactur is present . the construct of the ds s-chart is base on the same concept in construct the ds x-chart and is formul as an optim problem and solv with a genet algorithm . the effici of the ds s-control chart is compar with that of the tradit s-control chart . the result show that the ds s-control chart can be a more econom prefer altern in detect small shift than tradit s-control chart","ordered_present_kp":[13,45,269,342,785],"keyphrases":["double sampling s-control charts","agile manufacturing","process standard deviation","statistical process control","genetic algorithm","double sampling X-control charts","process mean shift detection"],"prmu":["P","P","P","P","P","R","R"]} {"id":"7","title":"Anti-spam suit attempts to hold carriers accountable","abstract":"A lawsuit alleges that Sprint has violated Utah's new anti-spam act. The action could open the door to new regulations on telecom service providers","tok_text":"anti-spam suit attempt to hold carrier account \n a lawsuit alleg that sprint ha violat utah 's new anti-spam act . 
the action could open the door to new regul on telecom servic provid","ordered_present_kp":[70,162,153,99,51],"keyphrases":["lawsuit","Sprint","anti-spam act","regulations","telecom service providers"],"prmu":["P","P","P","P","P"]} {"id":"1403","title":"IT: Utilities","abstract":"A look at five utilities to make your PCs more, efficient, effective, and efficacious","tok_text":"it : util \n a look at five util to make your pc more , effici , effect , and efficaci","ordered_present_kp":[5,45],"keyphrases":["utilities","PCs","MobileMessenger","Post-it software","EasyNotes","Print Shop Pro","Download Accelerator Plus"],"prmu":["P","P","U","U","U","U","U"]} {"id":"1097","title":"A study on an automatic seam tracking system by using an electromagnetic sensor for sheet metal arc welding of butt joints","abstract":"Many sensors, such as the vision sensor and the laser displacement sensor, have been developed to automate the arc welding process. However, these sensors have some problems due to the effects of arc light, fumes and spatter. An electromagnetic sensor, which utilizes the generation of an eddy current, was developed for detecting the weld line of a butt joint in which the root gap size was zero. An automatic seam tracking system designed for sheet metal arc welding was constructed with a sensor. Through experiments, it was revealed that the system had an excellent seam tracking accuracy of the order of +or-0.2 mm","tok_text":"a studi on an automat seam track system by use an electromagnet sensor for sheet metal arc weld of butt joint \n mani sensor , such as the vision sensor and the laser displac sensor , have been develop to autom the arc weld process . howev , these sensor have some problem due to the effect of arc light , fume and spatter . an electromagnet sensor , which util the gener of an eddi current , wa develop for detect the weld line of a butt joint in which the root gap size wa zero . 
an automat seam track system design for sheet metal arc weld wa construct with a sensor . through experi , it wa reveal that the system had an excel seam track accuraci of the order of + or-0.2 mm","ordered_present_kp":[75,99,14,50,457,630],"keyphrases":["automatic seam tracking system","electromagnetic sensor","sheet metal arc welding","butt joints","root gap size","seam tracking accuracy","eddy current generation","weld line detection"],"prmu":["P","P","P","P","P","P","R","R"]} {"id":"1446","title":"The Tattletale technique","abstract":"Practical experience has taught many Java developers one thing: critical resources (mutexes, database connections, transactions, file handles, etc.) require timely and systematic release. Unfortunately, Java's garbage collector is not up to that job. According to the Java Language Specification, there are no guarantees when a garbage collector will run, when it will collect an object, or when it will finalize an object - if ever. Even more unfortunately, Java's counterpart to the C++ destructor (the finally block) is both tedious and error-prone, requiring developers to constantly remember and duplicate resource-releasing code. Consequently, even good Java developers can forget to release critical resources. There is a light at the end of the tunnel. Java may make it easier to leak critical resources, but it also provides the necessary mechanisms to easily track them down. The Tattletale technique is a simple method for designing new classes and retrofitting existing classes to quickly and easily detect the offending code responsible for leaking resources","tok_text":"the tattletal techniqu \n practic experi ha taught mani java develop one thing : critic resourc ( mutex , databas connect , transact , file handl , etc . ) requir time and systemat releas . unfortun , java 's garbag collector is not up to that job . 
accord to the java languag specif , there are no guarante when a garbag collector will run , when it will collect an object , or when it will final an object - if ever . even more unfortun , java 's counterpart to the c++ destructor ( the final block ) is both tediou and error-pron , requir develop to constantli rememb and duplic resource-releas code . consequ , even good java develop can forget to releas critic resourc . there is a light at the end of the tunnel . java may make it easier to leak critic resourc , but it also provid the necessari mechan to easili track them down . the tattletal techniqu is a simpl method for design new class and retrofit exist class to quickli and easili detect the offend code respons for leak resourc","ordered_present_kp":[55,80,97,105,123,134,208,581,4],"keyphrases":["Tattletale technique","Java","critical resources","mutexes","database connections","transactions","file handles","garbage collector","resource-releasing code","resources leaking"],"prmu":["P","P","P","P","P","P","P","P","P","R"]} {"id":"850","title":"Encouraging women in computer science","abstract":"At a cost to both their own opportunities and society's ability to produce people with much-needed technical skills, women continue to be underrepresented in computer science degree programs at both the undergraduate and graduate level. Although some of the barriers that women face have their foundations in cultural expectations established well before the college level, we believe that departments can take effective steps to increase recruitment and retention of women students. This paper describes several strategies we have adopted at Stanford over the past decade","tok_text":"encourag women in comput scienc \n at a cost to both their own opportun and societi 's abil to produc peopl with much-need technic skill , women continu to be underrepres in comput scienc degre program at both the undergradu and graduat level . 
although some of the barrier that women face have their foundat in cultur expect establish well befor the colleg level , we believ that depart can take effect step to increas recruit and retent of women student . thi paper describ sever strategi we have adopt at stanford over the past decad","ordered_present_kp":[122,173,228,311],"keyphrases":["technical skills","computer science degree programs","graduate level","cultural expectations","undergraduate level","women student recruitment","women student retention"],"prmu":["P","P","P","P","R","R","R"]} {"id":"815","title":"The canonical dual frame of a wavelet frame","abstract":"We show that there exist wavelet frames that have nice dual wavelet frames, but for which the canonical dual frame does not consist of wavelets, i.e., cannot be generated by the translates and dilates of a single function","tok_text":"the canon dual frame of a wavelet frame \n we show that there exist wavelet frame that have nice dual wavelet frame , but for which the canon dual frame doe not consist of wavelet , i.e. , can not be gener by the translat and dilat of a singl function","ordered_present_kp":[4,26],"keyphrases":["canonical dual frame","wavelet frame","Gabor frames","multiresolution hierarchy","compact support"],"prmu":["P","P","M","U","U"]} {"id":"1276","title":"A comparative study of some generalized rough approximations","abstract":"In this paper we focus upon a comparison of some generalized rough approximations of sets, where the classical indiscernibility relation is generalized to any binary reflexive relation. 
We aim at finding the best of several candidates for generalized rough approximation mappings, where both definability of sets by elementary granules of information as well as the issue of distinction among positive, negative, and border regions of a set are taken into account","tok_text":"a compar studi of some gener rough approxim \n in thi paper we focu upon a comparison of some gener rough approxim of set , where the classic indiscern relat is gener to ani binari reflex relat . we aim at find the best of sever candid for gener rough approxim map , where both defin of set by elementari granul of inform as well as the issu of distinct among posit , neg , and border region of a set are taken into account","ordered_present_kp":[23,133,173,239,293],"keyphrases":["generalized rough approximations","classical indiscernibility relation","binary reflexive relation","generalized rough approximation mappings","elementary granules"],"prmu":["P","P","P","P","P"]} {"id":"1233","title":"Advanced optimization strategies in the Rice dHPF compiler","abstract":"High-Performance Fortran (HPF) was envisioned as a vehicle for modernizing legacy Fortran codes to achieve scalable parallel performance. To a large extent, today's commercially available HPF compilers have failed to deliver scalable parallel performance for a broad spectrum of applications because of insufficiently powerful compiler analysis and optimization. Substantial restructuring and hand-optimization can be required to achieve acceptable performance with an HPF port of an existing Fortran application, even for regular data-parallel applications. A key goal of the Rice dHPF compiler project has been to develop optimization techniques that enable a wide range of existing scientific applications to be ported easily to efficient HPF with minimal restructuring. 
This paper describes the challenges to effective parallelization presented by complex (but regular) data-parallel applications, and then describes how the novel analysis and optimization technologies in the dHPF compiler address these challenges effectively, without major rewriting of the applications. We illustrate the techniques by describing their use for parallelizing the NAS SP and BT benchmarks. The dHPF compiler generates multipartitioned parallelizations of these codes that are approaching the scalability and efficiency of sophisticated hand-coded parallelizations","tok_text":"advanc optim strategi in the rice dhpf compil \n high-perform fortran ( hpf ) wa envis as a vehicl for modern legaci fortran code to achiev scalabl parallel perform . to a larg extent , today 's commerci avail hpf compil have fail to deliv scalabl parallel perform for a broad spectrum of applic becaus of insuffici power compil analysi and optim . substanti restructur and hand-optim can be requir to achiev accept perform with an hpf port of an exist fortran applic , even for regular data-parallel applic . a key goal of the rice dhpf compil project ha been to develop optim techniqu that enabl a wide rang of exist scientif applic to be port easili to effici hpf with minim restructur . thi paper describ the challeng to effect parallel present by complex ( but regular ) data-parallel applic , and then describ how the novel analysi and optim technolog in the dhpf compil address these challeng effect , without major rewrit of the applic . we illustr the techniqu by describ their use for parallel the na sp and bt benchmark . 
the dhpf compil gener multipartit parallel of these code that are approach the scalabl and effici of sophist hand-cod parallel","ordered_present_kp":[109,147,35,321,29,1054],"keyphrases":["Rice dHPF compiler","HPF compilers","legacy Fortran codes","parallel performance","compiler analysis","multipartitioning","Mgh-Performance Fortran","compiler optimization","automatic parallelization"],"prmu":["P","P","P","P","P","P","M","R","M"]} {"id":"623","title":"Stochastic recurrences of Jackpot Keno","abstract":"We describe a mathematical model and simulation study for Jackpot Keno, as implemented by Jupiters Network Gaming (JNG) in the Australian state of Queensland, and as controlled by the Queensland Office of Gaming Regulation (QOGR) (http:\/\/www.qogr.qld.gov.au\/keno.shtml). The recurrences for the house net hold are derived and it is seen that these are piecewise linear with a ternary domain split, and further, the split points are stochastic in nature. Since this structure is intractable (Brockett and Levine, Statistics & Probability & their Applications, CBS College Publishing, 1984), estimation of house net hold obtained through an appropriately designed simulator using a random number generator with desirable properties is described. Since the model and simulation naturally derives hold given payscale, but JNG and QOGR require payscale given hold, an inverse problem was required to be solved. This required development of a special algorithm, which may be described as a stochastic binary search. Experimental results are presented, in which the simulator is used to determine jackpot pay-scales so as to satisfy legal requirements of approximately 75% of net revenue returned to the players, i.e., 25% net hold for the house (JNG). 
Details of the algorithm used to solve this problem are presented, and notwithstanding the stochastic nature of the simulation, convergence to a specified hold for the inverse problem has been achieved to within 0.1% in all cases of interest to date","tok_text":"stochast recurr of jackpot keno \n we describ a mathemat model and simul studi for jackpot keno , as implement by jupit network game ( jng ) in the australian state of queensland , and as control by the queensland offic of game regul ( qogr ) ( http:\/\/www.qogr.qld.gov.au\/keno.shtml ) . the recurr for the hous net hold are deriv and it is seen that these are piecewis linear with a ternari domain split , and further , the split point are stochast in natur . sinc thi structur is intract ( brockett and levin , statist & probabl & their applic , cb colleg publish , 1984 ) , estim of hous net hold obtain through an appropri design simul use a random number gener with desir properti is describ . sinc the model and simul natur deriv hold given payscal , but jng and qogr requir payscal given hold , an invers problem wa requir to be solv . thi requir develop of a special algorithm , which may be describ as a stochast binari search . experiment result are present , in which the simul is use to determin jackpot pay-scal so as to satisfi legal requir of approxim 75 % of net revenu return to the player , i.e. , 25 % net hold for the hous ( jng ) . 
detail of the algorithm use to solv thi problem are present , and notwithstand the stochast natur of the simul , converg to a specifi hold for the invers problem ha been achiev to within 0.1 % in all case of interest to date","ordered_present_kp":[0,19,47,66,113,305,359,382,644,803,911,521,936,1040],"keyphrases":["stochastic recurrences","Jackpot Keno","mathematical model","simulation","Jupiters Network Gaming","house net hold","piecewise linear","ternary domain split","probability","random number generator","inverse problem","stochastic binary search","experimental results","legal requirement","Chinese lottery game"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","M"]} {"id":"908","title":"Multivariable H\/sub infinity \/\/ mu feedback control design for high-precision wafer stage motion","abstract":"Conventional PID-like SISO controllers are still the most common in industry, but with performance requirements becoming tighter there is a growing need for advanced controllers. For the positioning devices in IC-manufacturing, plant interaction is a major performance-limiting factor. MIMO control can be invoked to tackle this problem. A practically feasible procedure is presented to design MIMO feedback controllers for electromechanical positioning devices, using H\/sub infinity \/\/ mu techniques. Weighting filters are proposed to straightforwardly and effectively impose performance and uncertainty specifications. Experiments show that MIMO control can considerably improve upon the performance with multiloop SISO control. Some problems are highlighted that are important for industrial practice, but lacking a workable solution","tok_text":"multivari h \/ sub infin \/\/ mu feedback control design for high-precis wafer stage motion \n convent pid-lik siso control are still the most common in industri , but with perform requir becom tighter there is a grow need for advanc control . 
for the posit devic in ic-manufactur , plant interact is a major performance-limit factor . mimo control can be invok to tackl thi problem . a practic feasibl procedur is present to design mimo feedback control for electromechan posit devic , use h \/ sub infin \/\/ mu techniqu . weight filter are propos to straightforwardli and effect impos perform and uncertainti specif . experi show that mimo control can consider improv upon the perform with multiloop siso control . some problem are highlight that are import for industri practic , but lack a workabl solut","ordered_present_kp":[518,30],"keyphrases":["feedback","weighting filters","IC manufacture","multivariable control systems","MIMO systems","H\/sub infinity \/ control","servo systems","model uncertainty","motion control","mechatronics","mu synthesis"],"prmu":["P","P","U","M","M","R","U","M","R","U","M"]} {"id":"1177","title":"Comparative statistical analysis of hole taper and circularity in laser percussion drilling","abstract":"Investigates the relationships and parameter interactions between six controllable variables on the hole taper and circularity in laser percussion drilling. Experiments have been conducted on stainless steel workpieces and a comparison was made between stainless steel and mild steel. The central composite design was employed to plan the experiments in order to achieve required information with reduced number of experiments. The process performance was evaluated. The ratio of minimum to maximum Feret's diameter was considered as circularity characteristic of the hole. The models of these three process characteristics were developed by linear multiple regression technique. The significant coefficients were obtained by performing analysis of variance (ANOVA) at 1, 5 and 7% levels of significance. The final models were checked by complete residual analysis and finally were experimentally verified. 
It was found that the pulse frequency had a significant effect on the hole entrance diameter and hole circularity in drilling stainless steel unlike the drilling of mild steel where the pulse frequency had no significant effect on the hole characteristics","tok_text":"compar statist analysi of hole taper and circular in laser percuss drill \n investig the relationship and paramet interact between six control variabl on the hole taper and circular in laser percuss drill . experi have been conduct on stainless steel workpiec and a comparison wa made between stainless steel and mild steel . the central composit design wa employ to plan the experi in order to achiev requir inform with reduc number of experi . the process perform wa evalu . the ratio of minimum to maximum feret 's diamet wa consid as circular characterist of the hole . the model of these three process characterist were develop by linear multipl regress techniqu . the signific coeffici were obtain by perform analysi of varianc ( anova ) at 1 , 5 and 7 % level of signific . the final model were check by complet residu analysi and final were experiment verifi . 
it wa found that the puls frequenc had a signific effect on the hole entranc diamet and hole circular in drill stainless steel unlik the drill of mild steel where the puls frequenc had no signific effect on the hole characterist","ordered_present_kp":[0,26,41,53,234,312,889,329,449,635,714,735,810],"keyphrases":["comparative statistical analysis","hole taper","circularity","laser percussion drilling","stainless steel workpieces","mild steel","central composite design","process performance","linear multiple regression technique","analysis of variance","ANOVA","complete residual analysis","pulse frequency","laser peak power","laser pulse width","assist gas pressure","focal plane position","equivalent entrance diameter","Ferets diameter","least squares procedure","stepwise regression method"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","M","M","U","U","M","R","U","M"]} {"id":"1132","title":"Semidefinite programming vs. LP relaxations for polynomial programming","abstract":"We consider the global minimization of a multivariate polynomial on a semi-algebraic set Omega defined with polynomial inequalities. We then compare two hierarchies of relaxations, namely, LP relaxations based on products of the original constraints, in the spirit of the RLT procedure of Sherali and Adams (1990), and recent semidefinite programming (SDP) relaxations introduced by the author. The comparison is analyzed in light of recent results in real algebraic geometry on various representations of polynomials, positive on a compact semi-algebraic set","tok_text":"semidefinit program vs. lp relax for polynomi program \n we consid the global minim of a multivari polynomi on a semi-algebra set omega defin with polynomi inequ . we then compar two hierarchi of relax , name , lp relax base on product of the origin constraint , in the spirit of the rlt procedur of sherali and adam ( 1990 ) , and recent semidefinit program ( sdp ) relax introduc by the author . 
the comparison is analyz in light of recent result in real algebra geometri on variou represent of polynomi , posit on a compact semi-algebra set","ordered_present_kp":[37,24,70,88,146,451,112,283],"keyphrases":["LP relaxations","polynomial programming","global minimization","multivariate polynomial","semi-algebraic set","polynomial inequalities","RLT procedure","real algebraic geometry","semidefinite programming relaxations","reformulation linearization technique","constraint products"],"prmu":["P","P","P","P","P","P","P","P","R","U","R"]} {"id":"567","title":"Hidden Markov model-based tool wear monitoring in turning","abstract":"This paper presents a new modeling framework for tool wear monitoring in machining processes using hidden Markov models (HMMs). Feature vectors are extracted from vibration signals measured during turning. A codebook is designed and used for vector quantization to convert the feature vectors into a symbol sequence for the hidden Markov model. A series of experiments are conducted to evaluate the effectiveness of the approach for different lengths of training data and observation sequence. Experimental results show that successful tool state detection rates as high as 97% can be achieved by using this approach","tok_text":"hidden markov model-bas tool wear monitor in turn \n thi paper present a new model framework for tool wear monitor in machin process use hidden markov model ( hmm ) . featur vector are extract from vibrat signal measur dure turn . a codebook is design and use for vector quantiz to convert the featur vector into a symbol sequenc for the hidden markov model . a seri of experi are conduct to evalu the effect of the approach for differ length of train data and observ sequenc . 
experiment result show that success tool state detect rate as high as 97 % can be achiev by use thi approach","ordered_present_kp":[24,117,0,197,232,263,513],"keyphrases":["hidden Markov models","tool wear monitoring","machining processes","vibration signals","codebook","vector quantization","tool state detection","feature extraction","turning process","HMM training","discrete wavelet transform"],"prmu":["P","P","P","P","P","P","P","R","R","R","U"]} {"id":"61","title":"Application of time-frequency principal component analysis to text-independent speaker identification","abstract":"We propose a formalism, called vector filtering of spectral trajectories, that allows the integration of a number of speech parameterization approaches (cepstral analysis, Delta and Delta Delta parameterizations, auto-regressive vector modeling, ...) under a common formalism. We then propose a new filtering, called contextual principal components (CPC) or time-frequency principal components (TFPC). This filtering consists in extracting the principal components of the contextual covariance matrix, which is the covariance matrix of a sequence of vectors expanded by their context. We apply this new filtering in the framework of closed-set speaker identification, using a subset of the POLYCOST database. When using speaker-dependent TFPC filters, our results show a relative improvement of approximately 20% compared to the use of the classical cepstral coefficients augmented by their Delta -coefficients, which is significantly better with a 90% confidence level","tok_text":"applic of time-frequ princip compon analysi to text-independ speaker identif \n we propos a formal , call vector filter of spectral trajectori , that allow the integr of a number of speech parameter approach ( cepstral analysi , delta and delta delta parameter , auto-regress vector model , ... ) under a common formal . we then propos a new filter , call contextu princip compon ( cpc ) or time-frequ princip compon ( tfpc ) . 
thi filter consist in extract the princip compon of the contextu covari matrix , which is the covari matrix of a sequenc of vector expand by their context . we appli thi new filter in the framework of closed-set speaker identif , use a subset of the polycost databas . when use speaker-depend tfpc filter , our result show a rel improv of approxim 20 % compar to the use of the classic cepstral coeffici augment by their delta -coeffici , which is significantli better with a 90 % confid level","ordered_present_kp":[10,47,105,122,181,209,238,244,262,355,483,628,677,813,908,848],"keyphrases":["time-frequency principal component analysis","text-independent speaker identification","vector filtering","spectral trajectories","speech parameterization","cepstral analysis","Delta Delta parameterization","Delta parameterization","auto-regressive vector modeling","contextual principal components","contextual covariance matrix","closed-set speaker identification","POLYCOST database","cepstral coefficients","Delta -coefficients","confidence level"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"935","title":"Experimental feedforward and feedback control of a one-dimensional SMA composite","abstract":"The control of embedded shape memory alloy (SMA) actuators has recently become a topic of interest in the field of smart structures. The inherent difficulties associated with SMA actuators has resulted in a variety of approaches. Homogenization provides a simplified, yet mathematically rigorous, method of determining average stress and strain fields in a composite. A modified constitutive model is presented based on experimental results demonstrating the inability of most simple phenomenological models to capture the effective behavior of SMAs during thermal activation. 
A feedforward controller is presented for a SMA composite based on the homogenization of a modified phenomenological model for SMAs in a linear matrix","tok_text":"experiment feedforward and feedback control of a one-dimension sma composit \n the control of embed shape memori alloy ( sma ) actuat ha recent becom a topic of interest in the field of smart structur . the inher difficulti associ with sma actuat ha result in a varieti of approach . homogen provid a simplifi , yet mathemat rigor , method of determin averag stress and strain field in a composit . a modifi constitut model is present base on experiment result demonstr the inabl of most simpl phenomenolog model to captur the effect behavior of sma dure thermal activ . a feedforward control is present for a sma composit base on the homogen of a modifi phenomenolog model for sma in a linear matrix","ordered_present_kp":[554,93,63,185,235,283,686,417],"keyphrases":["SMA","embedded shape memory alloy","smart structures","SMA actuators","homogenization","models","thermal activation","linear matrix"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"970","title":"Complex dynamics in nearly symmetric three-cell cellular neural networks","abstract":"The paper introduces a class of third-order nonsymmetric Cellular Neural Networks (CNNs), and shows through computer simulations that they undergo a cascade of period doubling bifurcations which leads to the birth of a large-size complex attractor. A major point is that these bifurcations and complex dynamics happen in a small neighborhood of a particular CNN with a symmetric interconnection matrix","tok_text":"complex dynam in nearli symmetr three-cel cellular neural network \n the paper introduc a class of third-ord nonsymmetr cellular neural network ( cnn ) , and show through comput simul that they undergo a cascad of period doubl bifurc which lead to the birth of a large-s complex attractor . 
a major point is that these bifurc and complex dynam happen in a small neighborhood of a particular cnn with a symmetr interconnect matrix","ordered_present_kp":[0,17,145,213,262,401],"keyphrases":["complex dynamics","nearly symmetric three-cell cellular neural networks","CNN","period doubling bifurcations","large-size complex attractor","symmetric interconnection matrix","robustness","complete stability","perturbations","stable limit cycles","differential equations","neuron interconnection matrix"],"prmu":["P","P","P","P","P","P","U","U","U","U","U","M"]} {"id":"133","title":"L\/sub p\/ stability and linearization","abstract":"A theorem by Hadamard gives a two-part condition under which a map from one Banach space to another is a homeomorphism. The theorem, while often very useful, is incomplete in the sense that it does not explicitly specify the family of maps for which the condition is met. Recently, under a typically weak additional assumption on the map, it was shown that Hadamard's condition is met if and only if the map is a homeomorphism with a Lipschitz continuous inverse. Here, an application is given concerning the relation between the L\/sub p\/ stability (with 1 . You can use this code as is, or as a starting point for your own more complete implementation","tok_text":"adapt dialog box for cross-platform program \n the author present a framework for build dialog box that adapt to the look and feel of their platform . thi method also help with a few relat problem : specifi cross-platform resourc and handl dialog size chang due to local . he use a combin of xml , automat layout , and run-tim dialog creation to give you most of the benefit of platform-specif resourc , without the associ pain . sourc code with an implement of the layout engin for mac os 9.1 ( \" carbon \" ) , mac os x , and microsoft window can be download from the cuj websit at < www.cuj.com\/cod > . 
you can use thi code as is , or as a start point for your own more complet implement","ordered_present_kp":[6,206,239,264,291,297,318,377,482,510,525,21,0],"keyphrases":["adaptable dialog boxes","dialog boxes","cross-platform programming","cross-platform resources","dialog size changes","localization","XML","automatic layout","run-time dialog creation","platform-specific resources","Mac OS 9.1","Mac OS X","Microsoft Windows"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"852","title":"Building an effective computer science student organization: the Carnegie Mellon Women@SCS action plan","abstract":"This paper aims to provide a practical guide for building a student organization and designing activities and events that can encourage and support a community of women in computer science. This guide is based on our experience in building Women@SCS, a community of women in the School of Computer Science (SCS) at Carnegie Mellon University. Rather than provide an abstract \"to-do\" or \"must-do\" list, we present a sampling of concrete activities and events in the hope that these might suggest possibilities for a likeminded student organization. However, since we have found it essential to have a core group of activist students at the helm, we provide a \"to-do\" list of features that we feel are essential for forming, supporting and sustaining creative and effective student leadership","tok_text":"build an effect comput scienc student organ : the carnegi mellon women@sc action plan \n thi paper aim to provid a practic guid for build a student organ and design activ and event that can encourag and support a commun of women in comput scienc . thi guid is base on our experi in build women@sc , a commun of women in the school of comput scienc ( sc ) at carnegi mellon univers . 
rather than provid an abstract \" to-do \" or \" must-do \" list , we present a sampl of concret activ and event in the hope that these might suggest possibl for a likemind student organ . howev , sinc we have found it essenti to have a core group of activist student at the helm , we provid a \" to-do \" list of featur that we feel are essenti for form , support and sustain creativ and effect student leadership","ordered_present_kp":[16,65,65,357,772],"keyphrases":["computer science student organization","Women@SCS action plan","women","Carnegie Mellon University","student leadership","gender issues","computer science education"],"prmu":["P","P","P","P","P","U","M"]} {"id":"817","title":"Summarization beyond sentence extraction: A probabilistic approach to sentence compression","abstract":"When humans produce summaries of documents, they do not simply extract sentences and concatenate them. Rather, they create new sentences that are grammatical, that cohere with one another, and that capture the most salient pieces of information in the original document. Given that large collections of text\/abstract pairs are available online, it is now possible to envision algorithms that are trained to mimic this process. In this paper, we focus on sentence compression, a simpler version of this larger challenge. We aim to achieve two goals simultaneously: our compressions should be grammatical, and they should retain the most important pieces of information. These two goals can conflict. We devise both a noisy-channel and a decision-tree approach to the problem, and we evaluate results against manual compressions and a simple baseline","tok_text":"summar beyond sentenc extract : a probabilist approach to sentenc compress \n when human produc summari of document , they do not simpli extract sentenc and concaten them . rather , they creat new sentenc that are grammat , that coher with one anoth , and that captur the most salient piec of inform in the origin document . 
given that larg collect of text \/ abstract pair are avail onlin , it is now possibl to envis algorithm that are train to mimic thi process . in thi paper , we focu on sentenc compress , a simpler version of thi larger challeng . we aim to achiev two goal simultan : our compress should be grammat , and they should retain the most import piec of inform . these two goal can conflict . we devis both a noisy-channel and a decision-tre approach to the problem , and we evalu result against manual compress and a simpl baselin","ordered_present_kp":[58,213,725,745],"keyphrases":["sentence compression","grammatical","noisy-channel","decision-tree","document summarization"],"prmu":["P","P","P","P","R"]} {"id":"779","title":"Domesticating computers and the Internet","abstract":"The people who use computers and the ways they use them have changed substantially over the past 25 years. In the beginning highly educated people, mostly men, in technical professions used computers for work, but over time a much broader range of people are using computers for personal and domestic purposes. This trend is still continuing, and over a shorter time scale has been replicated with the use of the Internet. The paper uses data from four national surveys to document how personal computers and the Internet have become increasingly domesticated since 1995 and to explore the mechanisms for this shift. Now people log on more often from home than from places of employment and do so for pleasure and for personal purposes rather than for their jobs. Analyses comparing veteran Internet users to novices in 1998 and 2000 and analyses comparing the change in use within a single sample between 1995 and 1996 support two complementary explanations for how these technologies have become domesticated. Women, children, and less well-educated individuals are increasingly using computers and the Internet and have a more personal set of motives than well-educated men. 
In addition, the widespread diffusion of the PC and the Internet and the response of the computing industry to the diversity in consumers has led to a rich set of personal and domestic services","tok_text":"domest comput and the internet \n the peopl who use comput and the way they use them have chang substanti over the past 25 year . in the begin highli educ peopl , mostli men , in technic profess use comput for work , but over time a much broader rang of peopl are use comput for person and domest purpos . thi trend is still continu , and over a shorter time scale ha been replic with the use of the internet . the paper use data from four nation survey to document how person comput and the internet have becom increasingli domest sinc 1995 and to explor the mechan for thi shift . now peopl log on more often from home than from place of employ and do so for pleasur and for person purpos rather than for their job . analys compar veteran internet user to novic in 1998 and 2000 and analys compar the chang in use within a singl sampl between 1995 and 1996 support two complementari explan for how these technolog have becom domest . women , children , and less well-educ individu are increasingli use comput and the internet and have a more person set of motiv than well-educ men . 
in addit , the widespread diffus of the pc and the internet and the respons of the comput industri to the divers in consum ha led to a rich set of person and domest servic","ordered_present_kp":[22,142,178,289,439,469,732,757,935,943,1167,1242],"keyphrases":["Internet","highly educated people","technical professions","domestic purposes","national surveys","personal computers","veteran Internet users","novices","women","children","computing industry","domestic services","computer domestication","personal usage","personal motives","PC diffusion","demographics","online behavior"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R","M","R","R","U","U"]} {"id":"1369","title":"Use of Bayesian Belief Networks when combining disparate sources of information in the safety assessment of software-based systems","abstract":"The paper discusses how disparate sources of information can be combined in the safety assessment of software-based systems. The emphasis is put on an emerging methodology, relevant for intelligent product-support systems, to combine information about disparate evidences systematically based on Bayesian Belief Networks. The objective is to show the link between basic information and the confidence one can have in a system. How one combines the Bayesian Belief Net (BBN) method with a software safety standard (RTCA\/DO-178B,) for safety assessment of software-based systems is also discussed. Finally, the applicability of the BBN methodology and experiences from cooperative research work together with Kongsberg Defence & Aerospace and Det Norske Veritas, and ongoing research with VTT Automation are presented","tok_text":"use of bayesian belief network when combin dispar sourc of inform in the safeti assess of software-bas system \n the paper discuss how dispar sourc of inform can be combin in the safeti assess of software-bas system . 
the emphasi is put on an emerg methodolog , relev for intellig product-support system , to combin inform about dispar evid systemat base on bayesian belief network . the object is to show the link between basic inform and the confid one can have in a system . how one combin the bayesian belief net ( bbn ) method with a softwar safeti standard ( rtca \/ do-178b , ) for safeti assess of software-bas system is also discuss . final , the applic of the bbn methodolog and experi from cooper research work togeth with kongsberg defenc & aerospac and det norsk verita , and ongo research with vtt autom are present","ordered_present_kp":[7,271,538,73,90],"keyphrases":["Bayesian belief networks","safety assessment","software-based systems","intelligent product-support systems","software safety standard"],"prmu":["P","P","P","P","P"]} {"id":"1394","title":"Subject access to government documents in an era of globalization: intellectual bundling of entities affected by the decisions of supranational organizations","abstract":"As a result of the growing influence of supranational organizations, there is a need for a new model for subject access to government information in academic libraries. Rulings made by supranational bodies such as the World Trade Organization (WTO) and rulings determined under the auspices of transnational economic agreements such as the North American Free Trade Agreement (NAFTA) often supersede existing law, resulting in obligatory changes to national, provincial, state, and municipal legislation. Just as important is the relationship among private sector companies, third party actors such as nongovernmental organizations (NGOs), and governments. 
The interaction among the various entities affected by supranational rulings could potentially form the basis of a new model for subject access to government information","tok_text":"subject access to govern document in an era of global : intellectu bundl of entiti affect by the decis of supran organ \n as a result of the grow influenc of supran organ , there is a need for a new model for subject access to govern inform in academ librari . rule made by supran bodi such as the world trade organ ( wto ) and rule determin under the auspic of transnat econom agreement such as the north american free trade agreement ( nafta ) often supersed exist law , result in obligatori chang to nation , provinci , state , and municip legisl . just as import is the relationship among privat sector compani , third parti actor such as nongovernment organ ( ngo ) , and govern . the interact among the variou entiti affect by supran rule could potenti form the basi of a new model for subject access to govern inform","ordered_present_kp":[18,47,56,106,243,297,361,399,534],"keyphrases":["government documents","globalization","intellectual bundling","supranational organizations","academic libraries","World Trade Organization","transnational economic agreements","North American Free Trade Agreement","municipal legislation","national legislation","provincial legislation","state legislation"],"prmu":["P","P","P","P","P","P","P","P","P","R","R","R"]} {"id":"784","title":"Where tech is cheap [servers]","abstract":"Talk, consultancy, support, not tech is the expensive part of network installations. It's a good job that small-scale servers can either be remotely managed, or require little actual management","tok_text":"where tech is cheap [ server ] \n talk , consult , support , not tech is the expens part of network instal . 
it 's a good job that small-scal server can either be remot manag , or requir littl actual manag","ordered_present_kp":[130,91,168],"keyphrases":["network installations","small-scale servers","management"],"prmu":["P","P","P"]} {"id":"1068","title":"Quantum phase gate for photonic qubits using only beam splitters and postselection","abstract":"We show that a beam splitter of reflectivity one-third can be used to realize a quantum phase gate operation if only the outputs conserving the number of photons on each side are postselected","tok_text":"quantum phase gate for photon qubit use onli beam splitter and postselect \n we show that a beam splitter of reflect one-third can be use to realiz a quantum phase gate oper if onli the output conserv the number of photon on each side are postselect","ordered_present_kp":[0,23,63,108,149,185],"keyphrases":["quantum phase gate","photonic qubits","postselection","reflectivity","quantum phase gate operation","outputs","multiqubit networks","postselected quantum gate","optical quantum gate operations","photon number conservation","postselected photon number conserving outputs","quantum computation","quantum information processing","postselected quantum phase gate","polarization beam splitters"],"prmu":["P","P","P","P","P","P","U","R","M","R","R","M","M","R","M"]} {"id":"699","title":"Novel line conditioner with voltage up\/down capability","abstract":"In this paper, a novel pulsewidth-modulated line conditioner with fast output voltage control is proposed. The line conditioner is made up of an AC chopper with reversible voltage control and a transformer for series voltage compensation. In the AC chopper, a proper switching operation is achieved without the commutation problem. To absorb energy stored in line stray inductance, a regenerative DC snubber can be utilized which has only one capacitor without discharging resistors or complicated regenerative circuit for snubber energy. 
Therefore, the proposed AC chopper gives high efficiency and reliability. The output voltage of the line conditioner is controlled using a fast sensing technique of the output voltage. It is also shown via some experimental results that the presented line conditioner gives good dynamic and steady-state performance for high quality of the output voltage","tok_text":"novel line condition with voltag up \/ down capabl \n in thi paper , a novel pulsewidth-modul line condition with fast output voltag control is propos . the line condition is made up of an ac chopper with revers voltag control and a transform for seri voltag compens . in the ac chopper , a proper switch oper is achiev without the commut problem . to absorb energi store in line stray induct , a regen dc snubber can be util which ha onli one capacitor without discharg resistor or complic regen circuit for snubber energi . therefor , the propos ac chopper give high effici and reliabl . the output voltag of the line condition is control use a fast sens techniqu of the output voltag . it is also shown via some experiment result that the present line condition give good dynam and steady-st perform for high qualiti of the output voltag","ordered_present_kp":[75,117,187,203,296,330,373,395,783],"keyphrases":["pulsewidth-modulated line conditioner","output voltage control","AC chopper","reversible voltage control","switching operation","commutation","line stray inductance","regenerative DC snubber","steady-state performance","series voltage compensation transformer","dynamic performance"],"prmu":["P","P","P","P","P","P","P","P","P","R","R"]} {"id":"1289","title":"Combining PC control and HMI","abstract":"Integrating PC-based control with human machine interface (HMI) technology can benefit a plant floor system. However, before one decides on PC-based control, there are many things one should consider, especially when using a soft programmable logic controller (PLC) to command the input\/output. 
There are three strategies to integrate a PC-based control system with an HMI: treat the PC running the control application as if it were a PLC, integrate the system using standard PC interfaces; or using application programming interfaces","tok_text":"combin pc control and hmi \n integr pc-base control with human machin interfac ( hmi ) technolog can benefit a plant floor system . howev , befor one decid on pc-base control , there are mani thing one should consid , especi when use a soft programm logic control ( plc ) to command the input \/ output . there are three strategi to integr a pc-base control system with an hmi : treat the pc run the control applic as if it were a plc , integr the system use standard pc interfac ; or use applic program interfac","ordered_present_kp":[340,487,466,56,240],"keyphrases":["human machine interface","programmable logic controller","PC-based control system","PC interfaces","application programming interfaces","shop floor system"],"prmu":["P","P","P","P","P","M"]} {"id":"1175","title":"Prediction of tool and chip temperature in continuous and interrupted machining","abstract":"A numerical model based on the finite difference method is presented to predict tool and chip temperature fields in continuous machining and time varying milling processes. Continuous or steady state machining operations like orthogonal cutting are studied by modeling the heat transfer between the tool and chip at the tool-rake face contact zone. The shear energy created in the primary zone, the friction energy produced at the rake face-chip contact zone and the heat balance between the moving chip and stationary tool are considered. The temperature distribution is solved using the finite difference method. Later, the model is extended to milling where the cutting is interrupted and the chip thickness varies with time. 
The proposed model combines the steady-state temperature prediction in continuous machining with transient temperature evaluation in interrupted cutting operations where the chip and the process change in a discontinuous manner. The mathematical models and simulation results are in satisfactory agreement with experimental temperature measurements reported in the literature","tok_text":"predict of tool and chip temperatur in continu and interrupt machin \n a numer model base on the finit differ method is present to predict tool and chip temperatur field in continu machin and time vari mill process . continu or steadi state machin oper like orthogon cut are studi by model the heat transfer between the tool and chip at the tool-rak face contact zone . the shear energi creat in the primari zone , the friction energi produc at the rake face-chip contact zone and the heat balanc between the move chip and stationari tool are consid . the temperatur distribut is solv use the finit differ method . later , the model is extend to mill where the cut is interrupt and the chip thick vari with time . the propos model combin the steady-st temperatur predict in continu machin with transient temperatur evalu in interrupt cut oper where the chip and the process chang in a discontinu manner . 
the mathemat model and simul result are in satisfactori agreement with experiment temperatur measur report in the literatur","ordered_present_kp":[172,51,72,96,191,257,293,340,373,399,418,555],"keyphrases":["interrupted machining","numerical model","finite difference method","continuous machining","time varying milling processes","orthogonal cutting","heat transfer","tool-rake face contact zone","shear energy","primary zone","friction energy","temperature distribution","tool temperature prediction","chip temperature prediction","first-order dynamic system","thermal properties"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R","R","U","U"]} {"id":"1130","title":"Node-capacitated ring routing","abstract":"We consider the node-capacitated routing problem in an undirected ring network along with its fractional relaxation, the node-capacitated multicommodity flow problem. For the feasibility problem, Farkas' lemma provides a characterization for general undirected graphs, asserting roughly that there exists such a flow if and only if the so-called distance inequality holds for every choice of distance functions arising from nonnegative node weights. For rings, this (straightforward) result will be improved in two ways. We prove that, independent of the integrality of node capacities, it suffices to require the distance inequality only for distances arising from (0-1-2)-valued node weights, a requirement that will be called the double-cut condition. Moreover, for integer-valued node capacities, the double-cut condition implies the existence of a half-integral multicommodity flow. In this case there is even an integer-valued multicommodity flow that violates each node capacity by at most one. Our approach gives rise to a combinatorial, strongly polynomial algorithm to compute either a violating double-cut or a node-capacitated multicommodity flow. 
A relation of the problem to its edge-capacitated counterpart will also be explained","tok_text":"node-capacit ring rout \n we consid the node-capacit rout problem in an undirect ring network along with it fraction relax , the node-capacit multicommod flow problem . for the feasibl problem , farka ' lemma provid a character for gener undirect graph , assert roughli that there exist such a flow if and onli if the so-cal distanc inequ hold for everi choic of distanc function aris from nonneg node weight . for ring , thi ( straightforward ) result will be improv in two way . we prove that , independ of the integr of node capac , it suffic to requir the distanc inequ onli for distanc aris from ( 0 - 1 - 2)-valu node weight , a requir that will be call the double-cut condit . moreov , for integer-valu node capac , the double-cut condit impli the exist of a half-integr multicommod flow . in thi case there is even an integer-valu multicommod flow that violat each node capac by at most one . our approach give rise to a combinatori , strongli polynomi algorithm to comput either a violat double-cut or a node-capacit multicommod flow . 
a relat of the problem to it edge-capacit counterpart will also be explain","ordered_present_kp":[39,0,71,107,128,176,237,324,362,389,663,696,765,825,989],"keyphrases":["node-capacitated ring routing","node-capacitated routing problem","undirected ring network","fractional relaxation","node-capacitated multicommodity flow problem","feasibility problem","undirected graphs","distance inequality","distance functions","nonnegative node weights","double-cut condition","integer-valued node capacities","half-integral multicommodity flow","integer-valued multicommodity flow","violating double-cut","Farkas lemma","node capacity integrality","combinatorial strongly polynomial algorithm","edge-cut criterion"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R","U"]} {"id":"565","title":"Control of thin film growth in chemical vapor deposition manufacturing systems: a feasibility study","abstract":"A study is carried out to design and optimize chemical vapor deposition (CVD) systems for material fabrication. Design and optimization of the CVD process is necessary to satisfying strong global demand and ever increasing quality requirements for thin film production. Advantages of computer aided optimization include high design turnaround time, flexibility to explore a larger design space and the development and adaptation of automation techniques for design and optimization. A CVD reactor consisting of a vertical impinging jet at atmospheric pressure, for growing titanium nitride films, is studied for thin film deposition. Numerical modeling and simulation are used to determine the rate of deposition and film uniformity over a wide range of design variables and operating conditions. These results are used for system design and optimization. The optimization procedure employs an objective function characterizing film quality, productivity and operational costs based on reactor gas flow rate, susceptor temperature and precursor concentration. 
Parameter space mappings are used to determine the design space, while a minimization algorithm, such as the steepest descent method, is used to determine optimal operating conditions for the system. The main features of computer aided design and optimization using these techniques are discussed in detail","tok_text":"control of thin film growth in chemic vapor deposit manufactur system : a feasibl studi \n a studi is carri out to design and optim chemic vapor deposit ( cvd ) system for materi fabric . design and optim of the cvd process is necessari to satisfi strong global demand and ever increas qualiti requir for thin film product . advantag of comput aid optim includ high design turnaround time , flexibl to explor a larger design space and the develop and adapt of autom techniqu for design and optim . a cvd reactor consist of a vertic imping jet at atmospher pressur , for grow titanium nitrid film , is studi for thin film deposit . numer model and simul are use to determin the rate of deposit and film uniform over a wide rang of design variabl and oper condit . these result are use for system design and optim . the optim procedur employ an object function character film qualiti , product and oper cost base on reactor ga flow rate , susceptor temperatur and precursor concentr . paramet space map are use to determin the design space , while a minim algorithm , such as the steepest descent method , is use to determin optim oper condit for the system . 
the main featur of comput aid design and optim use these techniqu are discuss in detail","ordered_present_kp":[31,171,125,895,868,574,11,913,936,961,982],"keyphrases":["thin film growth","chemical vapor deposition","optimization","material fabrication","titanium nitride films","film quality","operational costs","reactor gas flow rate","susceptor temperature","precursor concentration","parameter space mappings","TiN"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","U"]} {"id":"598","title":"From FREE to FEE [online advertising market]","abstract":"As the online advertising market continues to struggle, many online content marketers are wrestling with the issue of how to add at least some level of paid subscription income to their revenue mix in order to reach or improve profitability. Since the business of selling content online is still in its infancy, and many consumers clearly still think of Web content as simply and rightfully free, few roadmaps are available to show the way to effective marketing strategies, but some guiding principles have emerged","tok_text":"from free to fee [ onlin advertis market ] \n as the onlin advertis market continu to struggl , mani onlin content market are wrestl with the issu of how to add at least some level of paid subscript incom to their revenu mix in order to reach or improv profit . sinc the busi of sell content onlin is still in it infanc , and mani consum clearli still think of web content as simpli and right free , few roadmap are avail to show the way to effect market strategi , but some guid principl have emerg","ordered_present_kp":[19,183,278,447],"keyphrases":["online advertising market","paid subscription income","selling content online","marketing strategies"],"prmu":["P","P","P","P"]} {"id":"1188","title":"It's time to buy","abstract":"There is an upside to a down economy: over-zealous suppliers are willing to make deals that were unthinkable a few years ago. 
That's because vendors are experiencing the same money squeeze as manufacturers, which makes the year 2002 the perfect time to invest in new technology. The author states that when negotiating the deal, provisions for unexpected costs, an exit strategy, and even shared risk with the vendor should be on the table","tok_text":"it 's time to buy \n there is an upsid to a down economi : over-zeal supplier are will to make deal that were unthink a few year ago . that 's becaus vendor are experienc the same money squeez as manufactur , which make the year 2002 the perfect time to invest in new technolog . the author state that when negoti the deal , provis for unexpect cost , an exit strategi , and even share risk with the vendor should be on the tabl","ordered_present_kp":[306,335,354,379,149,68,179],"keyphrases":["suppliers","vendor","money squeeze","negotiation","unexpected costs","exit strategy","shared risk","buyers market","bargaining power"],"prmu":["P","P","P","P","P","P","P","U","U"]} {"id":"1274","title":"Bounded model checking for the universal fragment of CTL","abstract":"Bounded Model Checking (BMC) has been recently introduced as an efficient verification method for reactive systems. BMC based on SAT methods consists in searching for a counterexample of a particular length and generating a propositional formula that is satisfiable iff such a counterexample-exists. This new technique has been introduced by E. Clarke et al. for model checking of linear time temporal logic (LTL). Our paper shows how the concept of bounded model checking can be extended to ACTL (the universal fragment of CTL). The implementation of the algorithm for Elementary Net Systems is described together with the experimental results","tok_text":"bound model check for the univers fragment of ctl \n bound model check ( bmc ) ha been recent introduc as an effici verif method for reactiv system . 
bmc base on sat method consist in search for a counterexampl of a particular length and gener a proposit formula that is satisfi iff such a counterexample-exist . thi new techniqu ha been introduc by e. clark et al . for model check of linear time tempor logic ( ltl ) . our paper show how the concept of bound model check can be extend to actl ( the univers fragment of ctl ) . the implement of the algorithm for elementari net system is describ togeth with the experiment result","ordered_present_kp":[0,26,115,132,161,245,6,385,563],"keyphrases":["bounded model checking","model checking","universal fragment","verification method","reactive systems","SAT methods","propositional formula","linear time temporal logic","elementary net systems","bounded semantics"],"prmu":["P","P","P","P","P","P","P","P","P","M"]} {"id":"1231","title":"Efficient parallel programming on scalable shared memory systems with High Performance Fortran","abstract":"OpenMP offers a high-level interface for parallel programming on scalable shared memory (SMP) architectures. It provides the user with simple work-sharing directives while it relies on the compiler to generate parallel programs based on thread parallelism. However, the lack of language features for exploiting data locality often results in poor performance since the non-uniform memory access times on scalable SMP machines cannot be neglected. High Performance Fortran (HPF), the de-facto standard for data parallel programming, offers a rich set of data distribution directives in order to exploit data locality, but it has been mainly targeted towards distributed memory machines. In this paper we describe an optimized execution model for HPF programs on SMP machines that avails itself with mechanisms provided by OpenMP for work sharing and thread parallelism, while exploiting data locality based on user-specified distribution directives. 
Data locality does not only ensure that most memory accesses are close to the executing threads and are therefore faster, but it also minimizes synchronization overheads, especially in the case of unstructured reductions. The proposed shared memory execution model for HPF relies on a small set of language extensions, which resemble the OpenMP work-sharing features. These extensions, together with an optimized shared memory parallelization and execution model, have been implemented in the ADAPTOR HPF compilation system and experimental results verify the efficiency of the chosen approach","tok_text":"effici parallel program on scalabl share memori system with high perform fortran \n openmp offer a high-level interfac for parallel program on scalabl share memori ( smp ) architectur . it provid the user with simpl work-shar direct while it reli on the compil to gener parallel program base on thread parallel . howev , the lack of languag featur for exploit data local often result in poor perform sinc the non-uniform memori access time on scalabl smp machin can not be neglect . high perform fortran ( hpf ) , the de-facto standard for data parallel program , offer a rich set of data distribut direct in order to exploit data local , but it ha been mainli target toward distribut memori machin . in thi paper we describ an optim execut model for hpf program on smp machin that avail itself with mechan provid by openmp for work share and thread parallel , while exploit data local base on user-specifi distribut direct . data local doe not onli ensur that most memori access are close to the execut thread and are therefor faster , but it also minim synchron overhead , especi in the case of unstructur reduct . the propos share memori execut model for hpf reli on a small set of languag extens , which resembl the openmp work-shar featur . 
these extens , togeth with an optim share memori parallel and execut model , have been implement in the adaptor hpf compil system and experiment result verifi the effici of the chosen approach","ordered_present_kp":[7,27,60],"keyphrases":["parallel programming","scalable shared memory","High Performance Fortran","multiprocessor architectures","scalable hardware","shared memory multiprocessor"],"prmu":["P","P","P","M","M","M"]} {"id":"664","title":"The agile revolution [business agility]","abstract":"There is a new business revolution in the air. The theory is there, the technology is evolving, fast. It is all about agility","tok_text":"the agil revolut [ busi agil ] \n there is a new busi revolut in the air . the theori is there , the technolog is evolv , fast . it is all about agil","ordered_present_kp":[19],"keyphrases":["business agility","software design","software deployment","organisational structures","supply chains"],"prmu":["P","U","U","U","U"]} {"id":"621","title":"MPEG-4 video object-based rate allocation with variable temporal rates","abstract":"In object-based coding, bit allocation is performed at the object level and temporal rates of different objects may vary. The proposed algorithm deals with these two issues when coding multiple video objects (MVOs). The proposed algorithm is able to successfully achieve the target bit rate, effectively code arbitrarily shaped MVOs with different temporal rates, and maintain a stable buffer level","tok_text":"mpeg-4 video object-bas rate alloc with variabl tempor rate \n in object-bas code , bit alloc is perform at the object level and tempor rate of differ object may vari . the propos algorithm deal with these two issu when code multipl video object ( mvo ) . 
the propos algorithm is abl to success achiev the target bit rate , effect code arbitrarili shape mvo with differ tempor rate , and maintain a stabl buffer level","ordered_present_kp":[83,224,13,40],"keyphrases":["object-based rate allocation","variable temporal rates","bit allocation","multiple video objects","MPEG-4 video coding","rate-distortion encoding"],"prmu":["P","P","P","P","R","U"]} {"id":"1438","title":"Three-dimensional particle image tracking for dilute particle-liquid flows in a pipe","abstract":"A three-dimensional (3D) particle image tracking technique was used to study the coarse spherical particle-liquid flows in a pipe. The flow images from both the front view and the normal side view, which was reflected into the front view by a mirror, were recorded with a CCD camera and digitized by a PC with an image grabber card. An image processing program was developed to enhance and segment the flow image, and then to identify the particles. Over 90% of all the particles can be identified and located from the partially overlapped particle images using the circular Hough transform. Then the 3D position of each detected particle was determined by matching its front view image to its side view image. The particle velocity was then obtained by pairing its images in successive video fields. The measurements for the spherical expanded polystyrene particle-oil flows show that the particles, like the spherical bubbles in laminar bubbly flows, tend to conglomerate near the pipe wall and to line up to form the particle clusters. As liquid velocity decreases, the particle clusters disperse and more particles are distributed in the pipe centre region","tok_text":"three-dimension particl imag track for dilut particle-liquid flow in a pipe \n a three-dimension ( 3d ) particl imag track techniqu wa use to studi the coars spheric particle-liquid flow in a pipe . 
the flow imag from both the front view and the normal side view , which wa reflect into the front view by a mirror , were record with a ccd camera and digit by a pc with an imag grabber card . an imag process program wa develop to enhanc and segment the flow imag , and then to identifi the particl . over 90 % of all the particl can be identifi and locat from the partial overlap particl imag use the circular hough transform . then the 3d posit of each detect particl wa determin by match it front view imag to it side view imag . the particl veloc wa then obtain by pair it imag in success video field . the measur for the spheric expand polystyren particle-oil flow show that the particl , like the spheric bubbl in laminar bubbl flow , tend to conglomer near the pipe wall and to line up to form the particl cluster . as liquid veloc decreas , the particl cluster dispers and more particl are distribut in the pipe centr region","ordered_present_kp":[0,39,334,636,1003,901,609],"keyphrases":["three-dimensional particle image tracking","dilute particle-liquid flows","CCD camera","Hough transform","3D position","spherical bubble","particle clusters","two-phase flow","pipe flow","stereo-imaging technique","phase distribution","spherical expanded polystyrene particle","Wiener filter","image segmentation","region growing technique","image recognition","image matching"],"prmu":["P","P","P","P","P","P","P","M","R","M","M","R","U","R","M","M","R"]} {"id":"705","title":"Use of extra degrees of freedom in multilevel drives","abstract":"Multilevel converters with series connection of semiconductors allow power electronics to reach medium voltages (1-10 kV) with relatively standard components. The increase of the number of semiconductors provides extra degrees of freedom, which can be used to improve different characteristics. 
This paper is focused on variable-speed drives and it is shown that with the proposed multilevel direct torque control strategy (DiCoIF) the tradeoff between the performances of the drive (harmonic distortions, torque dynamics, voltage step gradients, etc.) and the switching frequency of the semiconductors is improved. Then, a slightly modified strategy reducing common-mode voltage and bearing currents is presented","tok_text":"use of extra degre of freedom in multilevel drive \n multilevel convert with seri connect of semiconductor allow power electron to reach medium voltag ( 1 - 10 kv ) with rel standard compon . the increas of the number of semiconductor provid extra degre of freedom , which can be use to improv differ characterist . thi paper is focus on variable-spe drive and it is shown that with the propos multilevel direct torqu control strategi ( dicoif ) the tradeoff between the perform of the drive ( harmon distort , torqu dynam , voltag step gradient , etc . ) and the switch frequenc of the semiconductor is improv . then , a slightli modifi strategi reduc common-mod voltag and bear current is present","ordered_present_kp":[13,76,92,112,136,337,393,493,510,524,563,674,33],"keyphrases":["degrees of freedom","multilevel drives","series connection","semiconductors","power electronics","medium voltages","variable-speed drives","multilevel direct torque control strategy","harmonic distortions","torque dynamics","voltage step gradients","switching frequency","bearing currents","common-mode voltage reduction","delay estimation","industrial power systems","insulated gate bipolar transistors","state estimation","fixed-frequency dynamic control","1 to 10 kV"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","M","U","M","U","U","M","R"]} {"id":"740","title":"The Malaysian model","abstract":"Japan's first third generation service, Foma, is unlikely to be truly attractive to consumers until 2005. 
That still falls well within the financial planning of its operator Docomo. But where does that leave European 3G operators looking for reassurance? Malaysia, says Simon Marshall","tok_text":"the malaysian model \n japan 's first third gener servic , foma , is unlik to be truli attract to consum until 2005 . that still fall well within the financi plan of it oper docomo . but where doe that leav european 3 g oper look for reassur ? malaysia , say simon marshal","ordered_present_kp":[215,4],"keyphrases":["Malaysia","3G operators","Maxis Communications","Telekom Malaysia"],"prmu":["P","P","U","M"]} {"id":"1315","title":"Traffic engineering with traditional IP routing protocols","abstract":"Traffic engineering involves adapting the routing of traffic to network conditions, with the joint goals of good user performance and efficient use of network resources. We describe an approach to intradomain traffic engineering that works within the existing deployed base of interior gateway protocols, such as Open Shortest Path First and Intermediate System-Intermediate System. We explain how to adapt the configuration of link weights, based on a networkwide view of the traffic and topology within a domain. In addition, we summarize the results of several studies of techniques for optimizing OSPF\/IS-IS weights to the prevailing traffic. The article argues that traditional shortest path routing protocols are surprisingly effective for engineering the flow of traffic in large IP networks","tok_text":"traffic engin with tradit ip rout protocol \n traffic engin involv adapt the rout of traffic to network condit , with the joint goal of good user perform and effici use of network resourc . we describ an approach to intradomain traffic engin that work within the exist deploy base of interior gateway protocol , such as open shortest path first and intermedi system-intermedi system . 
we explain how to adapt the configur of link weight , base on a networkwid view of the traffic and topolog within a domain . in addit , we summar the result of sever studi of techniqu for optim ospf \/ is-i weight to the prevail traffic . the articl argu that tradit shortest path rout protocol are surprisingli effect for engin the flow of traffic in larg ip network","ordered_present_kp":[26,283,95,140,171,215,578,650,740],"keyphrases":["IP routing protocols","network conditions","user performance","network resources","intradomain traffic engineering","interior gateway protocols","OSPF\/IS-IS weights","shortest path routing protocols","IP networks","link weights configuration","traffic routing","network topology","TCP","transmission control protocol","Open Shortest Path First protocol","Intermediate System-Intermediate System protocol"],"prmu":["P","P","P","P","P","P","P","P","P","R","R","R","U","M","R","R"]} {"id":"1350","title":"Generalized mosaicing: wide field of view multispectral imaging","abstract":"We present an approach to significantly enhance the spectral resolution of imaging systems by generalizing image mosaicing. A filter transmitting spatially varying spectral bands is rigidly attached to a camera. As the system moves, it senses each scene point multiple times, each time in a different spectral band. This is an additional dimension of the generalized mosaic paradigm, which has demonstrated yielding high radiometric dynamic range images in a wide field of view, using a spatially varying density filter. The resulting mosaic represents the spectrum at each scene point. The image acquisition is as easy as in traditional image mosaics. We derive an efficient scene sampling rate, and use a registration method that accommodates the spatially varying properties of the filter. Using the data acquired by this method, we demonstrate scene rendering under different simulated illumination spectra. 
We are also able to infer information about the scene illumination. The approach was tested using a standard 8-bit black\/white video camera and a fixed spatially varying spectral (interference) filter","tok_text":"gener mosaic : wide field of view multispectr imag \n we present an approach to significantli enhanc the spectral resolut of imag system by gener imag mosaic . a filter transmit spatial vari spectral band is rigidli attach to a camera . as the system move , it sens each scene point multipl time , each time in a differ spectral band . thi is an addit dimens of the gener mosaic paradigm , which ha demonstr yield high radiometr dynam rang imag in a wide field of view , use a spatial vari densiti filter . the result mosaic repres the spectrum at each scene point . the imag acquisit is as easi as in tradit imag mosaic . we deriv an effici scene sampl rate , and use a registr method that accommod the spatial vari properti of the filter . use the data acquir by thi method , we demonstr scene render under differ simul illumin spectra . we are also abl to infer inform about the scene illumin . 
the approach wa test use a standard 8-bit black \/ white video camera and a fix spatial vari spectral ( interfer ) filter","ordered_present_kp":[0,15,177,476,570,641,670,789,815,881],"keyphrases":["generalized mosaicing","wide field of view multispectral imaging","spatially varying spectral bands","spatially varying density filter","image acquisition","scene sampling rate","registration method","scene rendering","simulated illumination spectra","scene illumination","hyperspectral imaging","color balance","image fusion","physics-based vision","image-based rendering"],"prmu":["P","P","P","P","P","P","P","P","P","P","M","U","M","U","M"]} {"id":"896","title":"Calculation of the probability of survival of an insurance company with allowance for the rate of return for a Poisson stream of premiums","abstract":"The probability of survival of an insurance company with the working capital is calculated for a Poisson stream of premiums","tok_text":"calcul of the probabl of surviv of an insur compani with allow for the rate of return for a poisson stream of premium \n the probabl of surviv of an insur compani with the work capit is calcul for a poisson stream of premium","ordered_present_kp":[38],"keyphrases":["insurance company","survival probability","return rate","Poisson premium stream","probability density function"],"prmu":["P","R","R","R","M"]} {"id":"1014","title":"Modelling of complete robot dynamics based on a multi-dimensional, RBF-like neural architecture","abstract":"A neural network based identification approach of manipulator dynamics is presented. For a structured modelling, RBF-like static neural networks are used in order to represent and adapt all model parameters with their non-linear dependences on the joint positions. The neural architecture is hierarchically organised to reach optimal adjustment to structural a priori-knowledge about the identification problem. 
The model structure is substantially simplified by general system analysis independent of robot type. However, many specific features of the utilised experimental robot are also taken into account. A fixed, grid based neuron placement together with application of B-spline polynomial basis functions is utilised favourably for a very effective recursive implementation of the neural architecture. Thus, an online identification of a dynamic model is presented for a complete 6 joint industrial robot","tok_text":"model of complet robot dynam base on a multi-dimension , rbf-like neural architectur \n a neural network base identif approach of manipul dynam is present . for a structur model , rbf-like static neural network are use in order to repres and adapt all model paramet with their non-linear depend on the joint posit . the neural architectur is hierarch organis to reach optim adjust to structur a priori-knowledg about the identif problem . the model structur is substanti simplifi by gener system analysi independ of robot type . but also a lot of specif featur of the utilis experiment robot are taken into account . a fix , grid base neuron placement togeth with applic of b-spline polynomi basi function is utilis favour for a veri effect recurs implement of the neural architectur . 
thu , an onlin identif of a dynam model is submit for a complet 6 joint industri robot","ordered_present_kp":[9,129,188,66,482,673,740,794,813,841],"keyphrases":["complete robot dynamics","neural architecture","manipulator dynamics","static neural networks","general system analysis","B-spline polynomial basis functions","recursive implementation","online identification","dynamic model","complete 6 joint industrial robot","multi-dimensional RBF-like neural architecture","fixed grid based neuron placement","online learning"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","R","M"]} {"id":"1051","title":"Faking it: simulating dependent types in Haskell","abstract":"Dependent types reflect the fact that validity of data is often a relative notion by allowing prior data to affect the types of subsequent data. Not only does this make for a precise type system, but also a highly generic one: both the type and the program for each instance of a family of operations can be computed from the data which codes for that instance. Recent experimental extensions to the Haskell type class mechanism give us strong tools to relativize types to other types. We may simulate some aspects of dependent typing by making counterfeit type-level copies of data, with type constructors simulating data constructors and type classes simulating datatypes. This paper gives examples of the technique and discusses its potential","tok_text":"fake it : simul depend type in haskel \n depend type reflect the fact that valid of data is often a rel notion by allow prior data to affect the type of subsequ data . not onli doe thi make for a precis type system , but also a highli gener one : both the type and the program for each instanc of a famili of oper can be comput from the data which code for that instanc . recent experiment extens to the haskel type class mechan give us strong tool to relativ type to other type . 
we may simul some aspect of depend type by make counterfeit type-level copi of data , with type constructor simul data constructor and type class simul datatyp . thi paper give exampl of the techniqu and discuss it potenti","ordered_present_kp":[16,31,195,410,16,528,571,594,632],"keyphrases":["dependent types","dependent types","Haskell","precise type system","type class mechanism","counterfeit type-level copies","type constructors","data constructors","datatypes","data validity","dependent typing","functional programming"],"prmu":["P","P","P","P","P","P","P","P","P","R","P","M"]} {"id":"1109","title":"The existence condition of gamma -acyclic database schemes with MVDs constraints","abstract":"It is very important to use database technology for a large-scale system such as ERP and MIS. A good database design may improve the performance of the system. Some research shows that a gamma -acyclic database scheme has many good properties, e.g., each connected join expression is monotonous, which helps to improve query performance of the database system. Thus what conditions are needed to generate a gamma -acyclic database scheme for a given relational scheme? In this paper, the sufficient and necessary condition of the existence of gamma -acyclic, join-lossless and dependencies-preserved database schemes meeting 4NF is given","tok_text":"the exist condit of gamma -acycl databas scheme with mvd constraint \n it is veri import to use databas technolog for a large-scal system such as erp and mi . a good databas design may improv the perform of the system . some research show that a gamma -acycl databas scheme ha mani good properti , e.g. , each connect join express is monoton , which help to improv queri perform of the databas system . thu what condit are need to gener a gamma -acycl databas scheme for a given relat scheme ? 
in thi paper , the suffici and necessari condit of the exist of gamma -acycl , join-lossless and dependencies-preserv databas scheme meet 4nf is given","ordered_present_kp":[4,95,119,309,364,512,20,53],"keyphrases":["existence condition","gamma -acyclic database schemes","MVDs constraints","database technology","large-scale system","connected join expression","query performance","sufficient and necessary condition"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"933","title":"Real-time estimations of multi-modal frequencies for smart structures","abstract":"In this paper, various methods for the real-time estimation of multi-modal frequencies are realized in real time and compared through numerical and experimental tests. These parameter-based frequency estimation methods can be applied to various engineering fields such as communications, radar and adaptive vibration and noise control. Well-known frequency estimation methods are introduced and explained. The Bairstow method is introduced to find the roots of a characteristic equation for estimations of multi-modal frequencies, and the computational efficiency of the Bairstow method is shown quantitatively. For a simple numerical test, we consider two sinusoids of the same amplitudes mixed with various amounts of white noise. The test results show that the auto regressive (AR) and auto regressive and moving average (ARMA) methods are unsuitable in noisy environments. The other methods apart from the AR method have fast tracking capability. From the point of view of computational efficiency, the results reveal that the ARMA method is inefficient, while the cascade notch filter method is very effective. The linearized adaptive notch filter and recursive maximum likelihood methods have average performances. Experimental tests are devised to confirm the feasibility of real-time computations and to impose the severe conditions of drastically different amplitudes and of considerable changes of natural frequencies. 
We have performed experiments to extract the natural frequencies from the vibration signal of wing-like composite plates in real time. The natural frequencies of the specimen are changed by added masses. Especially, the AR method exhibits a remarkable performance in spite of the severe conditions. This study will be helpful to anyone who needs a frequency estimation algorithm for real-time applications","tok_text":"real-tim estim of multi-mod frequenc for smart structur \n in thi paper , variou method for the real-tim estim of multi-mod frequenc are realiz in real time and compar through numer and experiment test . these parameter-bas frequenc estim method can be appli to variou engin field such as commun , radar and adapt vibrat and nois control . well-known frequenc estim method are introduc and explain . the bairstow method is introduc to find the root of a characterist equat for estim of multi-mod frequenc , and the comput effici of the bairstow method is shown quantit . for a simpl numer test , we consid two sinusoid of the same amplitud mix with variou amount of white nois . the test result show that the auto regress ( ar ) and auto regress and move averag ( arma ) method are unsuit in noisi environ . the other method apart from the ar method have fast track capabl . from the point of view of comput effici , the result reveal that the arma method is ineffici , while the cascad notch filter method is veri effect . the linear adapt notch filter and recurs maximum likelihood method have averag perform . experiment test are devis to confirm the feasibl of real-tim comput and to impos the sever condit of drastic differ amplitud and of consider chang of natur frequenc . we have perform experi to extract the natur frequenc from the vibrat signal of wing-lik composit plate in real time . the natur frequenc of the specimen are chang by ad mass . especi , the ar method exhibit a remark perform in spite of the sever condit . 
thi studi will be help to anyon who need a frequenc estim algorithm for real-tim applic","ordered_present_kp":[18,41,0,223,324,403,453,763,979,1027,1057,1164,1341,1358,1577,1606],"keyphrases":["real-time estimation","multi-modal frequencies","smart structures","frequency estimation","noise control","Bairstow method","characteristic equation","ARMA","cascade notch filter","linearized adaptive notch filter","recursive maximum likelihood methods","real-time computations","vibration signal","wing-like composite plates","frequency estimation algorithm","real-time applications","adaptive vibration control","auto regressive and moving average methods"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R"]} {"id":"976","title":"Completion to involution and semidiscretisations","abstract":"We discuss the relation between the completion to involution of linear over-determined systems of partial differential equations with constant coefficients and the properties of differential algebraic equations obtained by their semidiscretisation. For a certain class of \"weakly over-determined\" systems, we show that the differential algebraic equations do not contain hidden constraints, if and only if the original partial differential system is involutive. We also demonstrate how the formal theory can be used to obtain an existence and uniqueness theorem for smooth solutions of strongly hyperbolic systems and to estimate the drift off the constraints, if an underlying equation is numerically solved. 
Finally, we show for general linear systems how the index of differential algebraic equations obtained by semidiscretisations can be predicted from the result of a completion of the partial differential system","tok_text":"complet to involut and semidiscretis \n we discuss the relat between the complet to involut of linear over-determin system of partial differenti equat with constant coeffici and the properti of differenti algebra equat obtain by their semidiscretis . for a certain class of \" weakli over-determin \" system , we show that the differenti algebra equat do not contain hidden constraint , if and onli if the origin partial differenti system is involut . we also demonstr how the formal theori can be use to obtain an exist and uniqu theorem for smooth solut of strongli hyperbol system and to estim the drift off the constraint , if an underli equat is numer solv . final , we show for gener linear system how the index of differenti algebra equat obtain by semidiscretis can be predict from the result of a complet of the partial differenti system","ordered_present_kp":[0,11,94,125,23,155,709,193,522,556],"keyphrases":["completion","involution","semidiscretisations","linear over-determined systems","partial differential equations","constant coefficients","differential algebraic equations","uniqueness theorem","strongly hyperbolic systems","index","matrices"],"prmu":["P","P","P","P","P","P","P","P","P","P","U"]} {"id":"135","title":"Hysteretic threshold logic and quasi-delay insensitive asynchronous design","abstract":"We introduce the class of hysteretic linear-threshold (HLT) logic functions as a novel extension of linear threshold logic, and prove their general applicability for constructing state-holding Boolean functions. We then demonstrate a fusion of HLT logic with the quasi-delay insensitive style of asynchronous circuit design, complete with logical design examples. 
Future research directions are also identified","tok_text":"hysteret threshold logic and quasi-delay insensit asynchron design \n we introduc the class of hysteret linear-threshold ( hlt ) logic function as a novel extens of linear threshold logic , and prove their gener applic for construct state-hold boolean function . we then demonstr a fusion of hlt logic with the quasi-delay insensit style of asynchron circuit design , complet with logic design exampl . futur research direct are also identifi","ordered_present_kp":[232,291,310,340,380],"keyphrases":["state-holding Boolean functions","HLT logic","quasi-delay insensitive style","asynchronous circuit design","logic design","hysteretic linear-threshold logic functions","digital logic","CMOS implementation"],"prmu":["P","P","P","P","P","R","M","U"]} {"id":"1208","title":"A Virtual Test Facility for the simulation of dynamic response in materials","abstract":"The Center for Simulating Dynamic Response of Materials at the California Institute of Technology is constructing a virtual shock physics facility for studying the response of various target materials to very strong shocks. The Virtual Test Facility (VTF) is an end-to-end, fully three-dimensional simulation of the detonation of high explosives (HE), shock wave propagation, solid material response to pressure loading, and compressible turbulence. The VTF largely consists of a parallel fluid solver and a parallel solid mechanics package that are coupled together by the exchange of boundary data. The Eulerian fluid code and Lagrangian solid mechanics model interact via a novel approach based on level sets. The two main computational packages are integrated through the use of Pyre, a problem solving environment written in the Python scripting language. 
Pyre allows application developers to interchange various computational models and solver packages without recompiling code, and it provides standardized access to several data visualization engines and data input mechanisms. In this paper, we outline the main components of the VTF, discuss their integration via Pyre, and describe some recent accomplishments in large-scale simulation using the VTF","tok_text":"a virtual test facil for the simul of dynam respons in materi \n the center for simul dynam respons of materi at the california institut of technolog is construct a virtual shock physic facil for studi the respons of variou target materi to veri strong shock . the virtual test facil ( vtf ) is an end-to-end , fulli three-dimension simul of the deton of high explos ( he ) , shock wave propag , solid materi respons to pressur load , and compress turbul . the vtf larg consist of a parallel fluid solver and a parallel solid mechan packag that are coupl togeth by the exchang of boundari data . the eulerian fluid code and lagrangian solid mechan model interact via a novel approach base on level set . the two main comput packag are integr through the use of pyre , a problem solv environ written in the python script languag . pyre allow applic develop to interchang variou comput model and solver packag without recompil code , and it provid standard access to sever data visual engin and data input mechan . 
in thi paper , we outlin the main compon of the vtf , discuss their integr via pyre , and describ some recent accomplish in large-scal simul use the vtf","ordered_present_kp":[164,2,354,375,395,419,438,482,510,970,760,769,805],"keyphrases":["Virtual Test Facility","virtual shock physics facility","high explosives","shock wave propagation","solid material response","pressure loading","compressible turbulence","parallel fluid solver","parallel solid mechanics","Pyre","problem solving environment","Python scripting language","data visualization","shock physics simulation"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"67","title":"Metaschemas for ER, ORM and UML data models: a comparison","abstract":"This paper provides metaschemas for some of the main database modeling notations used in industry. Two Entity Relationship (ER) notations (Information Engineering and Barker) are examined in detail, as well as Object Role Modeling (ORM) conceptual schema diagrams. The discussion of optionality, cardinality and multiplicity is widened to include Unified Modeling Language (UML) class diagrams. Issues addressed in the metamodel analysis include the normalization impact of non-derived constraints on derived associations, the influence of orthogonality on language transparency, and trade-offs between simplicity and expressibility. To facilitate comparison, the same modeling notation is used to display each metaschema. For this purpose, ORM is used because of its greater expressibility and clarity","tok_text":"metaschema for er , orm and uml data model : a comparison \n thi paper provid metaschema for some of the main databas model notat use in industri . two entiti relationship ( er ) notat ( inform engin and barker ) are examin in detail , as well as object role model ( orm ) conceptu schema diagram . the discuss of option , cardin and multipl is widen to includ unifi model languag ( uml ) class diagram . 
issu address in the metamodel analysi includ the normal impact of non-deriv constraint on deriv associ , the influenc of orthogon on languag transpar , and trade-off between simplic and express . to facilit comparison , the same model notat is use to display each metaschema . for thi purpos , orm is use becaus of it greater express and clariti","ordered_present_kp":[0,32,28,109,186,246,272,313,322,333,360,388,453,525,537,20],"keyphrases":["metaschemas","ORM","UML","data models","database modeling notations","Information Engineering","Object Role Modeling","conceptual schema diagrams","optionality","cardinality","multiplicity","Unified Modeling Language","class diagrams","normalization","orthogonality","language transparency","Entity Relationship modeling","Barker notation"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R"]} {"id":"618","title":"Blind source separation applied to image cryptosystems with dual encryption","abstract":"Blind source separation (BSS) is explored to add another encryption level besides the existing encryption methods for image cryptosystems. The transmitted images are covered with a noise image by specific mixing before encryption and then recovered through BSS after decryption. Simulation results illustrate the validity of the proposed method","tok_text":"blind sourc separ appli to imag cryptosystem with dual encrypt \n blind sourc separ ( bss ) is explor to add anoth encrypt level besid the exist encrypt method for imag cryptosystem . the transmit imag are cover with a nois imag by specif mix befor encrypt and then recov through bss after decrypt . 
simul result illustr the valid of the propos method","ordered_present_kp":[0,187,218,27,50],"keyphrases":["blind source separation","image cryptosystems","dual encryption","transmitted images","noise image"],"prmu":["P","P","P","P","P"]} {"id":"108","title":"Exploiting randomness in quantum information processing","abstract":"We consider how randomness can be made to play a useful role in quantum information processing-in particular, for decoherence control and the implementation of quantum algorithms. For a two-level system in which the decoherence channel is non-dissipative, we show that decoherence suppression is possible if memory is present in the channel. Random switching between two potentially harmful noise sources can then provide a source of stochastic control. Such random switching can also be used in an advantageous way for the implementation of quantum algorithms","tok_text":"exploit random in quantum inform process \n we consid how random can be made to play a use role in quantum inform processing-in particular , for decoher control and the implement of quantum algorithm . for a two-level system in which the decoher channel is non-dissip , we show that decoher suppress is possibl if memori is present in the channel . random switch between two potenti harm nois sourc can then provid a sourc of stochast control . such random switch can also be use in an advantag way for the implement of quantum algorithm","ordered_present_kp":[18,8,144,181,207,348,387,425],"keyphrases":["randomness","quantum information processing","decoherence control","quantum algorithms","two-level system","random switching","noise","stochastic control"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"1270","title":"A comparison of different decision algorithms used in volumetric storm cells classification","abstract":"Decision algorithms useful in classifying meteorological volumetric radar data are discussed. 
Such data come from the radar decision support system (RDSS) database of Environment Canada and concern summer storms created in this country. Some research groups used the data completed by RDSS for verifying the utility of chosen methods in volumetric storm cells classification. The paper consists of a review of experiments that were made on the data from RDSS database of Environment Canada and presents the quality of particular classifiers. The classification accuracy coefficient is used to express the quality. For five research groups that led their experiments in a similar way it was possible to compare received outputs. Experiments showed that the support vector machine (SVM) method and rough set algorithms which use object oriented reducts for rule generation to classify volumetric storm data perform better than other classifiers","tok_text":"a comparison of differ decis algorithm use in volumetr storm cell classif \n decis algorithm use in classifi meteorolog volumetr radar data are discuss . such data come from the radar decis support system ( rdss ) databas of environ canada and concern summer storm creat in thi countri . some research group use the data complet by rdss for verifi the util of chosen method in volumetr storm cell classif . the paper consist of a review of experi that were made on the data from rdss databas of environ canada and present the qualiti of particular classifi . the classif accuraci coeffici is use to express the qualiti . for five research group that led their experi in a similar way it wa possibl to compar receiv output . 
experi show that the support vector machin ( svm ) method and rough set algorithm which use object orient reduct for rule gener to classifi volumetr storm data perform better than other classifi","ordered_present_kp":[23,46,108,177,251,562,744,785,815],"keyphrases":["decision algorithms","volumetric storm cells classification","meteorological volumetric radar data","radar decision support system","summer storms","classification accuracy","support vector machine","rough set algorithms","object oriented reducts"],"prmu":["P","P","P","P","P","P","P","P","P"]} {"id":"1235","title":"Finding performance bugs with the TNO HPF benchmark suite","abstract":"High-Performance Fortran (HPF) has been designed to provide portable performance on distributed memory machines. An important aspect of portable performance is the behavior of the available HPF compilers. Ideally, a programmer may expect comparable performance between different HPF compilers, given the same program and the same machine. To test the performance portability between compilers, we have designed a special benchmark suite, called the TNO HPF benchmark suite. It consists of a set of HPF programs that test various aspects of efficient parallel code generation. The benchmark suite consists of a number of template programs that are used to generate test programs with different array sizes, alignments, distributions, and iteration spaces. It ranges from very simple assignments to more complex assignments such as triangular iteration spaces, convex iteration spaces, coupled subscripts, and indirection arrays. We have run the TNO HPF benchmark suite on three compilers: the PREPARE prototype compiler, the PGI-HPF compiler, and the GMD Adaptor HPF compiler. Results show performance differences that can be quite large (up to two orders of magnitude for the same test program). 
Closer inspection reveals that the origin of most of the differences in performance is due to differences in local enumeration and storage of distributed array elements","tok_text":"find perform bug with the tno hpf benchmark suit \n high-perform fortran ( hpf ) ha been design to provid portabl perform on distribut memori machin . an import aspect of portabl perform is the behavior of the avail hpf compil . ideal , a programm may expect compar perform between differ hpf compil , given the same program and the same machin . to test the perform portabl between compil , we have design a special benchmark suit , call the tno hpf benchmark suit . it consist of a set of hpf program that test variou aspect of effici parallel code gener . the benchmark suit consist of a number of templat program that are use to gener test program with differ array size , align , distribut , and iter space . it rang from veri simpl assign to more complex assign such as triangular iter space , convex iter space , coupl subscript , and indirect array . we have run the tno hpf benchmark suit on three compil : the prepar prototyp compil , the pgi-hpf compil , and the gmd adaptor hpf compil . result show perform differ that can be quit larg ( up to two order of magnitud for the same test program ) . 
closer inspect reveal that the origin of most of the differ in perform is due to differ in local enumer and storag of distribut array element","ordered_present_kp":[51,105,124,215,358,34],"keyphrases":["benchmark suite","High-Performance Fortran","portable performance","distributed memory machines","HPF compilers","performance portability","parallel compilers","compiler optimizations"],"prmu":["P","P","P","P","P","P","R","M"]} {"id":"660","title":"At your service [agile businesses]","abstract":"Senior software executives from three of the world's leading software companies, and one smaller, entrepreneurial software developer, explain the impact that web services, business process management and integrated application architectures are having on their product development plans, and share their vision of the roles these products will play in creating agile businesses","tok_text":"at your servic [ agil busi ] \n senior softwar execut from three of the world 's lead softwar compani , and one smaller , entrepreneuri softwar develop , explain the impact that web servic , busi process manag and integr applic architectur are have on their product develop plan , and share their vision of the role these product will play in creat agil busi","ordered_present_kp":[17,213,190,177,85],"keyphrases":["agile businesses","software companies","web services","business process management","integrated application architectures"],"prmu":["P","P","P","P","P"]} {"id":"625","title":"Identifying multivariate discordant observations: a computer-intensive approach","abstract":"The problem of identifying multiple outliers in a multivariate normal sample is approached via successive testing using P-values rather than tabled critical values. Caroni and Prescott (Appl. Statist. 41, p.355, 1992) proposed a generalization of the EDR-ESD procedure of Rosner (Technometrics, 25, 1983)). Venter and Viljoen (Comput. Statist. Data Anal. 
29, p.261, 1999) introduced a computer intensive method to identify outliers in a univariate outlier situation. We now generalize this method to the multivariate outlier situation and compare this new procedure with that of Caroni and Prescott (Appl. Statist. 4, p.355, 1992)","tok_text":"identifi multivari discord observ : a computer-intens approach \n the problem of identifi multipl outlier in a multivari normal sampl is approach via success test use p-valu rather than tabl critic valu . caroni and prescott ( appl . statist . 41 , p.355 , 1992 ) propos a gener of the edr-esd procedur of rosner ( technometr , 25 , 1983 ) ) . venter and viljoen ( comput . statist . data anal . 29 , p.261 , 1999 ) introduc a comput intens method to identifi outlier in a univari outlier situat . we now gener thi method to the multivari outlier situat and compar thi new procedur with that of caroni and prescott ( appl . statist . 4 , p.355 , 1992 )","ordered_present_kp":[9,38,89,110,166,185,472,528],"keyphrases":["multivariate discordant observations","computer-intensive approach","multiple outliers","multivariate normal sample","P-values","tabled critical values","univariate outlier","multivariate outlier","data analysis","EDR-EHD procedure","stepwise testing approach"],"prmu":["P","P","P","P","P","P","P","P","M","M","M"]} {"id":"1171","title":"Manufacturing data analysis of machine tool errors within a contemporary small manufacturing enterprise","abstract":"The main focus of the paper is directed at the determination of manufacturing errors within the contemporary smaller manufacturing enterprise sector. The manufacturing error diagnosis is achieved through the manufacturing data analysis of the results obtained from the inspection of the component on a co-ordinate measuring machine. 
This manufacturing data analysis activity adopts a feature-based approach and is conducted through the application of a forward chaining expert system, called the product data analysis distributed diagnostic expert system, which forms part of a larger prototype feedback system entitled the production data analysis framework. The paper introduces the manufacturing error categorisations that are associated with milling type operations, knowledge acquisition and representation, conceptual structure and operating procedure of the prototype manufacturing data analysis facility. The paper concludes with a brief evaluation of the logic employed through the simulation of manufacturing error scenarios. This prototype manufacturing data analysis expert system provides a valuable aid for the rapid diagnosis and elimination of manufacturing errors on a 3-axis vertical machining centre in an environment where operator expertise is limited","tok_text":"manufactur data analysi of machin tool error within a contemporari small manufactur enterpris \n the main focu of the paper is direct at the determin of manufactur error within the contemporari smaller manufactur enterpris sector . the manufactur error diagnosi is achiev through the manufactur data analysi of the result obtain from the inspect of the compon on a co-ordin measur machin . thi manufactur data analysi activ adopt a feature-bas approach and is conduct through the applic of a forward chain expert system , call the product data analysi distribut diagnost expert system , which form part of a larger prototyp feedback system entitl the product data analysi framework . the paper introduc the manufactur error categoris that are associ with mill type oper , knowledg acquisit and represent , conceptu structur and oper procedur of the prototyp manufactur data analysi facil . the paper conclud with a brief evalu of the logic employ through the simul of manufactur error scenario . 
thi prototyp manufactur data analysi expert system provid a valuabl aid for the rapid diagnosi and elimin of manufactur error on a 3-axi vertic machin centr in an environ where oper expertis is limit","ordered_present_kp":[0,27,54,1126,337,754,771,805,827,364,431,491,530],"keyphrases":["manufacturing data analysis","machine tool errors","contemporary small manufacturing enterprise","inspection","co-ordinate measuring machine","feature-based approach","forward chaining expert system","product data analysis distributed diagnostic expert system","milling type operations","knowledge acquisition","conceptual structure","operating procedure","3-axis vertical machining centre","fixturing errors","programming errors","2 1\/2D components","knowledge representation"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","M","M","M","R"]} {"id":"1134","title":"Relationship between strong monotonicity property, P\/sub 2\/-property, and the GUS-property in semidefinite linear complementarity problems","abstract":"In a recent paper on semidefinite linear complementarity problems, Gowda and Song (2000) introduced and studied the P-property, P\/sub 2\/-property, GUS-property, and strong monotonicity property for linear transformation L: S\/sup n\/ to S\/sup n\/, where S\/sup n\/ is the space of all symmetric and real n * n matrices. In an attempt to characterize the P\/sub 2\/-property, they raised the following two questions: (i) Does the strong monotonicity imply the P\/sub 2\/-property? (ii) Does the GUS-property imply the P\/sub 2\/-property? In this paper, we show that the strong monotonicity property implies the P\/sub 2\/-property for any linear transformation and describe an equivalence between these two properties for Lyapunov and other transformations. 
We show by means of an example that the GUS-property need not imply the P\/sub 2\/-property, even for Lyapunov transformations","tok_text":"relationship between strong monoton properti , p \/ sub 2\/-properti , and the gus-properti in semidefinit linear complementar problem \n in a recent paper on semidefinit linear complementar problem , gowda and song ( 2000 ) introduc and studi the p-properti , p \/ sub 2\/-properti , gus-properti , and strong monoton properti for linear transform l : s \/ sup n\/ to s \/ sup n\/ , where s \/ sup n\/ is the space of all symmetr and real n * n matric . in an attempt to character the p \/ sub 2\/-properti , they rais the follow two question : ( i ) doe the strong monoton impli the p \/ sub 2\/-properti ? ( ii ) doe the gus-properti impli the p \/ sub 2\/-properti ? in thi paper , we show that the strong monoton properti impli the p \/ sub 2\/-properti for ani linear transform and describ an equival between these two properti for lyapunov and other transform . we show by mean of an exampl that the gus-properti need not impli the p \/ sub 2\/-properti , even for lyapunov transform","ordered_present_kp":[93,21,47,77,327,951],"keyphrases":["strong monotonicity property","P\/sub 2\/-property","GUS-property","semidefinite linear complementarity problems","linear transformation","Lyapunov transformations","symmetric real matrices"],"prmu":["P","P","P","P","P","P","R"]} {"id":"561","title":"SubSeven's Honey Pot program","abstract":"A serious security threat today are malicious executables, especially new, unseen malicious executables often arriving as email attachments. These new malicious executables are created at the rate of thousands every year and pose a serious threat. Current anti-virus systems attempt to detect these new malicious programs with heuristics generated by hand. This approach is costly and often ineffective. We introduce the Trojan Horse SubSeven, its capabilities and influence over intrusion detection systems. 
A Honey Pot program is implemented, simulating the SubSeven Server. The Honey Pot program provides feedback and stores data to and from the SubSeven client","tok_text":"subseven 's honey pot program \n a seriou secur threat today are malici execut , especi new , unseen malici execut often arriv as email attach . these new malici execut are creat at the rate of thousand everi year and pose a seriou threat . current anti-viru system attempt to detect these new malici program with heurist gener by hand . thi approach is costli and often ineffect . we introduc the trojan hors subseven , it capabl and influenc over intrus detect system . a honey pot program is implement , simul the subseven server . the honey pot program provid feedback and store data to and from the subseven 's client","ordered_present_kp":[12,41,64,129,248,397,0,448],"keyphrases":["SubSeven","Honey Pot program","security threat","malicious executables","email attachments","anti-virus systems","Trojan Horse","intrusion detection systems"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"1390","title":"Data quality - unlocking the ROI in CRM","abstract":"While many organisations realise their most valuable asset is their customers, many more fail to realise the importance of auditing, maintaining and updating the information contained in their customer databases. Today's growing awareness in the importance of data quality in relation to CRM and ROI will help change this attitude. In response, CRM vendors will follow suit and begin to differentiate themselves by offering data quality as part of an enterprise-wide data management methodology","tok_text":"data qualiti - unlock the roi in crm \n while mani organis realis their most valuabl asset is their custom , mani more fail to realis the import of audit , maintain and updat the inform contain in their custom databas . today 's grow awar in the import of data qualiti in relat to crm and roi will help chang thi attitud . 
in respons , crm vendor will follow suit and begin to differenti themselv by offer data qualiti as part of an enterprise-wid data manag methodolog","ordered_present_kp":[202,447,33],"keyphrases":["CRM","customer databases","data management","customer relationships","return on investment"],"prmu":["P","P","P","M","U"]} {"id":"780","title":"Failures and successes: notes on the development of electronic cash","abstract":"Between 1997 and 2001, two mid-sized communities in Canada hosted North America's most comprehensive experiment to introduce electronic cash and, in the process, replace physical cash for casual, low-value payments. The technology used was Mondex, and its implementation was supported by all the country's major banks. It was launched with an extensive publicity campaign to promote Mondex not only in the domestic but also in the global market, for which the Canadian implementation was to serve as a \"showcase.\" However, soon after the start of the first field test it became apparent that the new technology did not work smoothly. On the contrary, it created a host of controversies, in areas as varied as computer security, consumer privacy, and monetary policy. In the following years, few of these controversies could be resolved and Mondex could not be established as a widely used payment mechanism. In 2001, the experiment was finally terminated. Using the concepts developed in recent science and technology studies (STS), the article analyzes these controversies as resulting from the difficulties of fitting electronic cash, a new sociotechnical system, into the complex setting of the existing payment system. The story of Mondex not only offers lessons on why technologies fail, but also offers insight into how short-term failures can contribute to long-term transformations. 
This suggests the need to rethink the dichotomy of success and failure","tok_text":"failur and success : note on the develop of electron cash \n between 1997 and 2001 , two mid-siz commun in canada host north america 's most comprehens experi to introduc electron cash and , in the process , replac physic cash for casual , low-valu payment . the technolog use wa mondex , and it implement wa support by all the countri 's major bank . it wa launch with an extens public campaign to promot mondex not onli in the domest but also in the global market , for which the canadian implement wa to serv as a \" showcas . \" howev , soon after the start of the first field test it becam appar that the new technolog did not work smoothli . on the contrari , it creat a host of controversi , in area as vari as comput secur , consum privaci , and monetari polici . in the follow year , few of these controversi could be resolv and mondex could not be establish as a wide use payment mechan . in 2001 , the experi wa final termin . use the concept develop in recent scienc and technolog studi ( st ) , the articl analyz these controversi as result from the difficulti of fit electron cash , a new sociotechn system , into the complex set of the exist payment system . the stori of mondex not onli offer lesson on whi technolog fail , but also offer insight into how short-term failur can contribut to long-term transform . 
thi suggest the need to rethink the dichotomi of success and failur","ordered_present_kp":[44,106,239,279,338,379,451,481,715,730,751,879,969,1100,1269,1304],"keyphrases":["electronic cash","Canada","low-value payments","Mondex","major banks","publicity campaign","global market","Canadian implementation","computer security","consumer privacy","monetary policy","payment mechanism","science and technology studies","sociotechnical system","short-term failures","long-term transformations"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1029","title":"Effect of insulation layer on transcribability and birefringence distribution in optical disk substrate","abstract":"As the need for information storage media with high storage density increases, digital video disks (DVDs) with smaller recording marks and thinner optical disk substrates than those of conventional DVDs are being required. Therefore, improving the replication quality of land-groove or pit structure and reducing the birefringence distribution are emerging as important criteria in the fabrication of high-density optical disk substrates. We control the transcribability and distribution of birefringence by inserting an insulation layer under the stamper during injection-compression molding of DVD RAM substrates. The effects of the insulation layer on the geometrical and optical properties, such as transcribability and birefringence distribution, are examined experimentally. 
The inserted insulation layer is found to be very effective in improving the quality of replication and leveling out the first peak of the gapwise birefringence distribution near the mold wall and reducing the average birefringence value, because the insulation layer retarded the growth of the solidified layer","tok_text":"effect of insul layer on transcrib and birefring distribut in optic disk substrat \n as the need for inform storag media with high storag densiti increas , digit video disk ( dvd ) with smaller record mark and thinner optic disk substrat than those of convent dvd are be requir . therefor , improv the replic qualiti of land-groov or pit structur and reduc the birefring distribut are emerg as import criteria in the fabric of high-dens optic disk substrat . we control the transcrib and distribut of birefring by insert an insul layer under the stamper dure injection-compress mold of dvd ram substrat . the effect of the insul layer on the geometr and optic properti , such as transcrib and birefring distribut , are examin experiment . 
the insert insul layer is found to be veri effect in improv the qualiti of replic and level out the first peak of the gapwis birefring distribut near the mold wall and reduc the averag birefring valu , becaus the insul layer retard the growth of the solidifi layer","ordered_present_kp":[62,25,39,10,100,125,155,185,209,301,319,333,416,545,558,585,653,856,892],"keyphrases":["insulation layer","transcribability","birefringence distribution","optical disk substrate","information storage media","high storage density","digital video disks","smaller recording marks","thinner optical disk substrates","replication quality","land-groove","pit structure","fabrication","stamper","injection-compression molding","DVD RAM substrates","optical properties","gapwise birefringence distribution","mold wall","geometrical properties","solidified layer growth retardation","polyimide thermal insulation layer"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","M"]} {"id":"1405","title":"Winning post [mail systems]","abstract":"Businesses that take their mail for granted can end up wasting money as well as opportunities. Mike Stecyk, VP of marketing and lines of business at Pitney Bowes, suggests strategies for making more of a great opportunity","tok_text":"win post [ mail system ] \n busi that take their mail for grant can end up wast money as well as opportun . mike stecyk , vp of market and line of busi at pitney bow , suggest strategi for make more of a great opportun","ordered_present_kp":[11,154,175],"keyphrases":["mail","Pitney Bowes","strategies","franking machines","folders","inserters","direct mail shots"],"prmu":["P","P","P","U","U","U","M"]} {"id":"1091","title":"Car-caravan snaking. 1. The influence of pintle pin friction","abstract":"A brief review of knowledge of car-caravan snaking is carried out. 
Against the background described, a fairly detailed mathematical model of a contemporary car-trailer system is constructed and a baseline set of parameter values is given. In reduced form, the model is shown to give results in accordance with literature. The properties of the baseline combination are explored using both linear and non-linear versions of the model. The influences of damping at the pintle joint and of several other design parameters on the stability of the linear system in the neighbourhood of the critical snaking speed are calculated and discussed. Coulomb friction damping at the pintle pin is then included and simulations are used to indicate the consequent amplitude-dependent behaviour. The friction damping, especially when its level has to be chosen by the user, is shown to give dangerous characteristics, despite having some capacity for stabilization of the snaking motions. It is concluded that pintle pin friction damping does not represent a satisfactory solution to the snaking problem. The paper sets the scene for the development of an improved solution","tok_text":"car-caravan snake . 1 . the influenc of pintl pin friction \n a brief review of knowledg of car-caravan snake is carri out . against the background describ , a fairli detail mathemat model of a contemporari car-trail system is construct and a baselin set of paramet valu is given . in reduc form , the model is shown to give result in accord with literatur . the properti of the baselin combin are explor use both linear and non-linear version of the model . the influenc of damp at the pintl joint and of sever other design paramet on the stabil of the linear system in the neighbourhood of the critic snake speed are calcul and discuss . coulomb friction damp at the pintl pin is then includ and simul are use to indic the consequ amplitude-depend behaviour . 
the friction damp , especi when it level ha to be chosen by the user , is shown to give danger characterist , despit have some capac for stabil of the snake motion . it is conclud that pintl pin friction damp doe not repres a satisfactori solut to the snake problem . the paper set the scene for the develop of an improv solut","ordered_present_kp":[0,40,173,206,553,732,595,639],"keyphrases":["car-caravan snaking","pintle pin friction","mathematical model","car-trailer system","linear system","critical snaking speed","Coulomb friction damping","amplitude-dependent behaviour"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"1440","title":"Application of ultrasonic sensors in the process industry","abstract":"Continuous process monitoring in gaseous, liquid or molten media is a fundamental requirement for process control. Besides temperature and pressure other process parameters such as level, flow, concentration and conversion are of special interest. More qualified information obtained from new or better sensors can significantly enhance the process quality and thereby product properties. Ultrasonic sensors or sensor systems can contribute to this development. The state of the art of ultrasonic sensors and their advantages and disadvantages will be discussed. Commercial examples will be presented. Among others, applications in the food, chemical and pharmaceutical industries are described. Possibilities and limitations of ultrasonic process sensors are discussed","tok_text":"applic of ultrason sensor in the process industri \n continu process monitor in gaseou , liquid or molten media is a fundament requir for process control . besid temperatur and pressur other process paramet such as level , flow , concentr and convers are of special interest . more qualifi inform obtain from new or better sensor can significantli enhanc the process qualiti and therebi product properti . ultrason sensor or sensor system can contribut to thi develop . 
the state of the art of ultrason sensor and their advantag and disadvantag will be discuss . commerci exampl will be present . among other , applic in the food , chemic and pharmaceut industri are describ . possibl and limit of ultrason process sensor are discuss","ordered_present_kp":[33,52,137,358,642],"keyphrases":["process industry","continuous process monitoring","process control","process quality","pharmaceutical industries","ultrasonic sensors application","food industries","chemical industries","acoustic microsensors","ultrasonic measurements","ultrasonic attenuation","acoustic impedance","temperature measurement","pressure measurement","level measurement","distance measurement","flow measurement"],"prmu":["P","P","P","P","P","R","R","R","U","M","M","U","M","M","M","U","M"]} {"id":"856","title":"People who make a difference: mentors and role models","abstract":"The literature of gender issues in computing steadfastly and uniformly has advocated the use of mentors and role models (M&RM) for recruiting and retaining women in computer science. This paper, therefore, accepts the results of research studies and avoids reiterating details of the projects but offers instead a practical guide for using M&RM to recruit and retain women in computer science. The guide provides pragmatic advice, describing several different facets of the M&RM concept","tok_text":"peopl who make a differ : mentor and role model \n the literatur of gender issu in comput steadfastli and uniformli ha advoc the use of mentor and role model ( m&rm ) for recruit and retain women in comput scienc . thi paper , therefor , accept the result of research studi and avoid reiter detail of the project but offer instead a practic guid for use m&rm to recruit and retain women in comput scienc . 
the guid provid pragmat advic , describ sever differ facet of the m&rm concept","ordered_present_kp":[26,37,67,82,198],"keyphrases":["mentors","role models","gender issues","computing","computer science","women retention","women recruitment"],"prmu":["P","P","P","P","P","M","R"]} {"id":"813","title":"On generalized Gaussian quadratures for exponentials and their applications","abstract":"We introduce new families of Gaussian-type quadratures for weighted integrals of exponential functions and consider their applications to integration and interpolation of bandlimited functions. We use a generalization of a representation theorem due to Caratheodory to derive these quadratures. For each positive measure, the quadratures are parameterized by eigenvalues of the Toeplitz matrix constructed from the trigonometric moments of the measure. For a given accuracy epsilon , selecting an eigenvalue close to epsilon yields an approximate quadrature with that accuracy. To compute its weights and nodes, we present a new fast algorithm. These new quadratures can be used to approximate and integrate bandlimited functions, such as prolate spheroidal wave functions, and essentially bandlimited functions, such as Bessel functions. We also develop, for a given precision, an interpolating basis for bandlimited functions on an interval","tok_text":"on gener gaussian quadratur for exponenti and their applic \n we introduc new famili of gaussian-typ quadratur for weight integr of exponenti function and consid their applic to integr and interpol of bandlimit function . we use a gener of a represent theorem due to caratheodori to deriv these quadratur . for each posit measur , the quadratur are parameter by eigenvalu of the toeplitz matrix construct from the trigonometr moment of the measur . for a given accuraci epsilon , select an eigenvalu close to epsilon yield an approxim quadratur with that accuraci . to comput it weight and node , we present a new fast algorithm . 
these new quadratur can be use to approxim and integr bandlimit function , such as prolat spheroid wave function , and essenti bandlimit function , such as bessel function . we also develop , for a given precis , an interpol basi for bandlimit function on an interv","ordered_present_kp":[3,114,131,121,188,200,361,378,413,525,713,786],"keyphrases":["generalized Gaussian quadratures","weighted integrals","integration","exponential functions","interpolation","bandlimited functions","eigenvalues","Toeplitz matrix","trigonometric moments","approximation","prolate spheroidal wave functions","Bessel functions","Caratheodory representation theorem"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"738","title":"Playing for time [3G networks]","abstract":"The delays in rolling out 3G networks across Europe should not always be seen with a negative slant","tok_text":"play for time [ 3 g network ] \n the delay in roll out 3 g network across europ should not alway be seen with a neg slant","ordered_present_kp":[16,36,73],"keyphrases":["3G networks","delays","Europe","mobile operators"],"prmu":["P","P","P","U"]} {"id":"1328","title":"Tablet PCs on the way [publishing markets]","abstract":"Previews of hardware and software look promising for publishing markets","tok_text":"tablet pc on the way [ publish market ] \n preview of hardwar and softwar look promis for publish market","ordered_present_kp":[23,0],"keyphrases":["Tablet PC","publishing markets"],"prmu":["P","P"]} {"id":"1155","title":"A leaf sequencing algorithm to enlarge treatment field length in IMRT","abstract":"With MLC-based IMRT, the maximum usable field size is often smaller than the maximum field size for conventional treatments. This is due to the constraints of the overtravel distances of MLC leaves and\/or jaws. 
Using a new leaf sequencing algorithm, the usable IMRT field length (perpendicular to the MLC motion) can be mostly made equal to the full length of the MLC field without violating the upper jaw overtravel limit. For any given intensity pattern, a criterion was proposed to assess whether an intensity pattern can be delivered without violation of the jaw position constraints. If the criterion is met, the new algorithm will consider the jaw position constraints during the segmentation for the step and shoot delivery method. The strategy employed by the algorithm is to connect the intensity elements outside the jaw overtravel limits with those inside the jaw overtravel limits. Several methods were used to establish these connections during segmentation by modifying a previously published algorithm (areal algorithm), including changing the intensity level, alternating the leaf-sequencing direction, or limiting the segment field size. The algorithm was tested with 1000 random intensity patterns with dimensions of 21*27 cm\/sup 2\/, 800 intensity patterns with higher intensity outside the jaw overtravel limit, and three different types of clinical treatment plans that were undeliverable using a segmentation method from a commercial treatment planning system. The new algorithm achieved a success rate of 100% with these test patterns. For the 1000 random patterns, the new algorithm yields a similar average number of segments of 36.9+or-2.9 in comparison to 36.6+or-1.3 when using the areal algorithm. For the 800 patterns with higher intensities outside the jaw overtravel limits, the new algorithm results in an increase of 25% in the average number of segments compared to the areal algorithm. However, the areal algorithm fails to create deliverable segments for 90% of these patterns. 
Using a single isocenter, the new algorithm provides a solution to extend the usable IMRT field length from 21 to 27 cm for IMRT on a commercial linear accelerator using the step and shoot delivery method","tok_text":"a leaf sequenc algorithm to enlarg treatment field length in imrt \n with mlc-base imrt , the maximum usabl field size is often smaller than the maximum field size for convent treatment . thi is due to the constraint of the overtravel distanc of mlc leav and\/or jaw . use a new leaf sequenc algorithm , the usabl imrt field length ( perpendicular to the mlc motion ) can be mostli made equal to the full length of the mlc field without violat the upper jaw overtravel limit . for ani given intens pattern , a criterion wa propos to assess whether an intens pattern can be deliv without violat of the jaw posit constraint . if the criterion is met , the new algorithm will consid the jaw posit constraint dure the segment for the step and shoot deliveri method . the strategi employ by the algorithm is to connect the intens element outsid the jaw overtravel limit with those insid the jaw overtravel limit . sever method were use to establish these connect dure segment by modifi a previous publish algorithm ( areal algorithm ) , includ chang the intens level , altern the leaf-sequenc direct , or limit the segment field size . the algorithm wa test with 1000 random intens pattern with dimens of 21 * 27 cm \/ sup 2\/ , 800 intens pattern with higher intens outsid the jaw overtravel limit , and three differ type of clinic treatment plan that were undeliver use a segment method from a commerci treatment plan system . the new algorithm achiev a success rate of 100 % with these test pattern . for the 1000 random pattern , the new algorithm yield a similar averag number of segment of 36.9+or-2.9 in comparison to 36.6+or-1.3 when use the areal algorithm . 
for the 800 pattern with higher intens outsid the jaw overtravel limit , the new algorithm result in an increas of 25 % in the averag number of segment compar to the areal algorithm . howev , the areal algorithm fail to creat deliver segment for 90 % of these pattern . use a singl isocent , the new algorithm provid a solut to extend the usabl imrt field length from 21 to 27 cm for imrt on a commerci linear acceler use the step and shoot deliveri method","ordered_present_kp":[2,223,446,489,599,728,1387,1508,1885,1935,2053,35,816,452,1010,1073,1108,1161,1365],"keyphrases":["leaf sequencing algorithm","treatment field length","overtravel distances","upper jaw overtravel limit","jaw overtravel limits","intensity pattern","jaw position constraints","step and shoot delivery method","intensity elements","areal algorithm","leaf-sequencing direction","segment field size","random intensity patterns","segmentation method","commercial treatment planning system","random patterns","deliverable segments","single isocenter","commercial linear accelerator","usable intensity modulated radiation therapy field length","multileaf-based collimators intensity modulated radiation therapy","conformal radiation therapy","multileaf collimators jaws","multileaf collimators leaves"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","M","M","U","M","M"]} {"id":"1110","title":"A hybrid model for smoke simulation","abstract":"A smoke simulation approach based on the integration of traditional particle systems and density functions is presented in this paper. By attaching a density function to each particle as its attribute, the diffusion of smoke can be described by the variation of particles' density functions, along with the effect on airflow by controlling particles' movement and fragmentation. In addition, a continuous density field for realistic rendering can be generated quickly through the look-up tables of particle's density functions. 
Compared with traditional particle systems, this approach can describe smoke diffusion, and provide a continuous density field for realistic rendering with much less computation. A quick rendering scheme is also presented in this paper as a useful preview tool for tuning appropriate parameters in the smoke model","tok_text":"a hybrid model for smoke simul \n a smoke simul approach base on the integr of tradit particl system and densiti function is present in thi paper . by attach a densiti function to each particl as it attribut , the diffus of smoke can be describ by the variat of particl ' densiti function , along with the effect on airflow by control particl ' movement and fragment . in addit , a continu densiti field for realist render can be gener quickli through the look-up tabl of particl 's densiti function . compar with tradit particl system , thi approach can describ smoke diffus , and provid a continu densiti field for realist render with much less comput . a quick render scheme is also present in thi paper as a use preview tool for tune appropri paramet in the smoke model","ordered_present_kp":[2,19,104,381,415,455],"keyphrases":["hybrid model","smoke simulation","density functions","continuous density field","rendering","look-up tables"],"prmu":["P","P","P","P","P","P"]} {"id":"545","title":"Interaction and presence in the clinical relationship: virtual reality (VR) as communicative medium between patient and therapist","abstract":"The great potential offered by virtual reality (VR) to clinical psychologists derives prevalently from the central role, in psychotherapy, occupied by the imagination and by memory. These two elements, which are fundamental in our life, present absolute and relative limits to the individual potential. Using VR as an advanced imaginal system, an experience that is able to reduce the gap existing between imagination and reality, it is possible to transcend these limits. 
In this sense, VR can improve the efficacy of a psychological therapy for its capability of reducing the distinction between the computer's reality and the conventional reality. Two are the core characteristics of this synthetic imaginal experience: the perceptual illusion of nonmediation and the possibility of building and sharing a common ground. In this sense, experiencing presence in a clinical virtual environment (VE), such as a shared virtual hospital, requires more than reproduction of the physical features of external reality. It requires the creation and sharing of the cultural web that makes meaningful, and therefore visible, both people and objects populating the environment. The paper outlines a framework for supporting the development and tuning of clinically oriented VR systems","tok_text":"interact and presenc in the clinic relationship : virtual realiti ( vr ) as commun medium between patient and therapist \n the great potenti offer by virtual realiti ( vr ) to clinic psychologist deriv preval from the central role , in psychotherapi , occupi by the imagin and by memori . these two element , which are fundament in our life , present absolut and rel limit to the individu potenti . use vr as an advanc imagin system , an experi that is abl to reduc the gap exist between imagin and realiti , it is possibl to transcend these limit . in thi sens , vr can improv the efficaci of a psycholog therapi for it capabl of reduc the distinct between the comput 's realiti and the convent realiti . two are the core characterist of thi synthet imagin experi : the perceptu illus of nonmedi and the possibl of build and share a common ground . in thi sens , experienc presenc in a clinic virtual environ ( ve ) , such as a share virtual hospit , requir more than reproduct of the physic featur of extern realiti . it requir the creation and share of the cultur web that make meaning , and therefor visibl , both peopl and object popul the environ . 
the paper outlin a framework for support the develop and tune of clinic orient vr system","ordered_present_kp":[50,235,265,279,13,595,886,928],"keyphrases":["presence","virtual reality","psychotherapy","imagination","memory","psychological therapy","clinical virtual environment","shared virtual hospital","patient-therapist communication","clinical psychology"],"prmu":["P","P","P","P","P","P","P","P","M","R"]} {"id":"992","title":"Cross-entropy and rare events for maximal cut and partition problems","abstract":"We show how to solve the maximal cut and partition problems using a randomized algorithm based on the cross-entropy method. For the maximal cut problem, the proposed algorithm employs an auxiliary Bernoulli distribution, which transforms the original deterministic network into an associated stochastic one, called the associated stochastic network (ASN). Each iteration of the randomized algorithm for the ASN involves the following two phases: (1) generation of random cuts using a multidimensional Ber(p) distribution and calculation of the associated cut lengths (objective functions) and some related quantities, such as rare-event probabilities; (2) updating the parameter vector p on the basis of the data collected in the first phase. We show that the Ber(p) distribution converges in distribution to a degenerated one, Ber(p\/sub d\/*), p\/sub d\/* = (p\/sub d\/,\/sub 1\/, p\/sub d,n\/) in the sense that some elements of p\/sub d\/*, will be unities and the rest zeros. The unity elements of p\/sub d\/* uniquely define a cut which will be taken as the estimate of the maximal cut. A similar approach is used for the partition problem. Supporting numerical results are given as well. 
Our numerical studies suggest that for the maximal cut and partition problems the proposed algorithm typically has polynomial complexity in the size of the network","tok_text":"cross-entropi and rare event for maxim cut and partit problem \n we show how to solv the maxim cut and partit problem use a random algorithm base on the cross-entropi method . for the maxim cut problem , the propos algorithm employ an auxiliari bernoulli distribut , which transform the origin determinist network into an associ stochast one , call the associ stochast network ( asn ) . each iter of the random algorithm for the asn involv the follow two phase : ( 1 ) gener of random cut use a multidimension ber(p ) distribut and calcul of the associ cut length ( object function ) and some relat quantiti , such as rare-ev probabl ; ( 2 ) updat the paramet vector p on the basi of the data collect in the first phase . we show that the ber(p ) distribut converg in distribut to a degener one , ber(p \/ sub d\/ * ) , p \/ sub d\/ * = ( p \/ sub d\/,\/sub 1\/ , p \/ sub d , n\/ ) in the sens that some element of p \/ sub d\/ * , will be uniti and the rest zero . the uniti element of p \/ sub d\/ * uniqu defin a cut which will be taken as the estim of the maxim cut . a similar approach is use for the partit problem . support numer result are given as well . 
our numer studi suggest that for the maxim cut and partit problem the propos algorithm typic ha polynomi complex in the size of the network","ordered_present_kp":[244,293,352,477,625,1117,1246,183,47,123],"keyphrases":["partition problems","randomized algorithm","maximal cut problems","Bernoulli distribution","deterministic network","associated stochastic network","random cuts","probability","numerical results","polynomial complexity","cross entropy method","rare event simulation","combinatorial optimization","importance sampling"],"prmu":["P","P","P","P","P","P","P","P","P","P","M","M","U","U"]} {"id":"83","title":"A distributed mobile agent framework for maintaining persistent distance education","abstract":"Mobile agent techniques involve distributed control if communication is required among different types of agents, especially when mobile agents can migrate from station to station. This technique can be implemented in a distributed distance learning environment, which allows students or instructors to login from anywhere to a central server in an education center while still retaining the look-and-feel of personal setups. In this research paper, we propose a distributed agent framework along with its communication messages to facilitate mobile personal agents, which serve three different groups of distance education users: instructors, students, and system administrators. We propose an agent communication framework as well as agent evolution states of mobile agents. The communication architecture and message transmission protocols are illustrated. The system is implemented on the Windows platform to support nomadic accessibility of remote distance learning users. Personal data also migrate with the mobile agents, allowing users to maintain accessibility to some extent even when the Internet connection is temporarily disconnected. 
Using user-friendly personal agents, a distance education platform can include different tools to meet different needs for users","tok_text":"a distribut mobil agent framework for maintain persist distanc educ \n mobil agent techniqu involv distribut control if commun is requir among differ type of agent , especi when mobil agent can migrat from station to station . thi techniqu can be implement in a distribut distanc learn environ , which allow student or instructor to login from anywher to a central server in an educ center while still retain the look-and-feel of person setup . in thi research paper , we propos a distribut agent framework along with it commun messag to facilit mobil person agent , which serv three differ group of distanc educ user : instructor , student , and system administr . we propos an agent commun framework as well as agent evolut state of mobil agent . the commun architectur and messag transmiss protocol are illustr . the system is implement on the window platform to support nomad access of remot distanc learn user . person data also migrat with the mobil agent , allow user to maintain access to some extent even when the internet connect is temper disconnect . use user-friendli person agent , a distanc educ platform can includ differ tool to meet differ need for user","ordered_present_kp":[2,47,98,356,480,775,1066],"keyphrases":["distributed mobile agent framework","persistent distance education","distributed control","central server","distributed agent framework","message transmission protocols","user-friendly personal agents"],"prmu":["P","P","P","P","P","P","P"]} {"id":"1254","title":"Supporting unified interface to wrapper generator in integrated information retrieval","abstract":"Given the ever-increasing scale and diversity of information and applications on the Internet, improving the technology of information retrieval is an urgent research objective. 
Retrieved information is either semi-structured or unstructured in format and its sources are extremely heterogeneous. In consequence, the task of efficiently gathering and extracting information from documents can be both difficult and tedious. Given this variety of sources and formats, many choose to use mediator\/wrapper architecture, but its use demands a fast means of generating efficient wrappers. In this paper, we present a design for an automatic eXtensible Markup Language (XML)-based framework with which to generate wrappers rapidly. Wrappers created with this framework support a unified interface for a meta-search information retrieval system based on the Internet Search Service using the Common Object Request Broker Architecture (CORBA) standard. Greatly advantaged by the compatibility of CORBA and XML, a user can quickly and easily develop information-gathering applications, such as a meta-search engine or any other information source retrieval method. The two main things our design provides are a method of wrapper generation that is fast, simple, and efficient, and a wrapper generator that is CORBA and XML-compliant and that supports a unified interface","tok_text":"support unifi interfac to wrapper gener in integr inform retriev \n given the ever-increas scale and divers of inform and applic on the internet , improv the technolog of inform retriev is an urgent research object . retriev inform is either semi-structur or unstructur in format and it sourc are extrem heterogen . in consequ , the task of effici gather and extract inform from document can be both difficult and tediou . given thi varieti of sourc and format , mani choos to use mediat \/ wrapper architectur , but it use demand a fast mean of gener effici wrapper . in thi paper , we present a design for an automat extens markup languag ( xml)-base framework with which to gener wrapper rapidli . 
wrapper creat with thi framework support a unifi interfac for a meta-search inform retriev system base on the internet search servic use the common object request broker architectur ( corba ) standard . greatli advantag by the compat of corba and xml , a user can quickli and easili develop information-gath applic , such as a meta-search engin or ani other inform sourc retriev method . the two main thing our design provid are a method of wrapper gener that is fast , simpl , and effici , and a wrapper gener that is corba and xml-compliant and that support a unifi interfac","ordered_present_kp":[8,26,43,135,609,883,1026],"keyphrases":["unified interface","wrapper generator","integrated information retrieval","Internet","automatic eXtensible Markup Language","CORBA","meta-search engine"],"prmu":["P","P","P","P","P","P","P"]} {"id":"1211","title":"Hybrid decision tree","abstract":"In this paper, a hybrid learning approach named hybrid decision tree (HDT) is proposed. HDT simulates human reasoning by using symbolic learning to do qualitative analysis and using neural learning to do subsequent quantitative analysis. It generates the trunk of a binary HDT according to the binary information gain ratio criterion in an instance space defined by only original unordered attributes. If unordered attributes cannot further distinguish training examples falling into a leaf node whose diversity is beyond the diversity-threshold, then the node is marked as a dummy node. After all those dummy nodes are marked, a specific feedforward neural network named FANNC that is trained in an instance space defined by only original ordered attributes is exploited to accomplish the learning task. Moreover, this paper distinguishes three kinds of incremental learning tasks. Two incremental learning procedures designed for example-incremental learning with different storage requirements are provided, which enables HDT to deal gracefully with data sets where new data are frequently appended. 
Also a hypothesis-driven constructive induction mechanism is provided, which enables HDT to generate compact concept descriptions","tok_text":"hybrid decis tree \n in thi paper , a hybrid learn approach name hybrid decis tree ( hdt ) is propos . hdt simul human reason by use symbol learn to do qualit analysi and use neural learn to do subsequ quantit analysi . it gener the trunk of a binari hdt accord to the binari inform gain ratio criterion in an instanc space defin by onli origin unord attribut . if unord attribut can not further distinguish train exampl fall into a leaf node whose divers is beyond the diversity-threshold , then the node is mark as a dummi node . after all those dummi node are mark , a specif feedforward neural network name fannc that is train in an instanc space defin by onli origin order attribut is exploit to accomplish the learn task . moreov , thi paper distinguish three kind of increment learn task . two increment learn procedur design for example-increment learn with differ storag requir are provid , which enabl hdt to deal grace with data set where new data are frequent append . 
also a hypothesis-driven construct induct mechan is provid , which enabl hdt to gener compact concept descript","ordered_present_kp":[0,37,118,132,151,174,201,268,578,610,773,872,934,987],"keyphrases":["hybrid decision tree","hybrid learning approach","reasoning","symbolic learning","qualitative analysis","neural learning","quantitative analysis","binary information gain ratio criterion","feedforward neural network","FANNC","incremental learning","storage requirements","data sets","hypothesis-driven constructive induction"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"644","title":"Three-dimensional spiral MR imaging: application to renal multiphase contrast-enhanced angiography","abstract":"A fast MR pulse sequence with spiral in-plane readout and conventional 3D partition encoding was developed for multiphase contrast-enhanced magnetic resonance angiography (CE-MRA) of the renal vasculature. Compared to a standard multiphase 3D CE-MRA with FLASH readout, an isotropic in-plane spatial resolution of 1.4*1.4 mm\/sup 2\/ over 2.0*1.4 mm\/sup 2\/ could be achieved with a temporal resolution of 6 sec. The theoretical gain of spatial resolution by using the spiral pulse sequence and the performance in the presence of turbulent flow was evaluated in phantom measurements. Multiphase 3D CE-MRA of the renal arteries was performed in five healthy volunteers using both techniques. A deblurring technique was used to correct the spiral raw data. Thereby, the off-resonance frequencies were determined by minimizing the imaginary part of the data in image space. The chosen correction algorithm was able to reduce image blurring substantially in all MRA phases. The image quality of the spiral CE-MRA pulse sequence was comparable to that of the FLASH CE-MRA with increased spatial resolution and a 25% reduced contrast-to-noise ratio. 
Additionally, artifacts specific to spiral MRI could be observed which had no impact on the assessment of the renal arteries","tok_text":"three-dimension spiral mr imag : applic to renal multiphas contrast-enhanc angiographi \n a fast mr puls sequenc with spiral in-plan readout and convent 3d partit encod wa develop for multiphas contrast-enhanc magnet reson angiographi ( ce-mra ) of the renal vasculatur . compar to a standard multiphas 3d ce-mra with flash readout , an isotrop in-plan spatial resolut of 1.4 * 1.4 mm \/ sup 2\/ over 2.0 * 1.4 mm \/ sup 2\/ could be achiev with a tempor resolut of 6 sec . the theoret gain of spatial resolut by use the spiral puls sequenc and the perform in the presenc of turbul flow wa evalu in phantom measur . multiphas 3d ce-mra of the renal arteri wa perform in five healthi volunt use both techniqu . a deblur techniqu wa use to correct the spiral raw data . therebi , the off-reson frequenc were determin by minim the imaginari part of the data in imag space . the chosen correct algorithm wa abl to reduc imag blur substanti in all mra phase . the imag qualiti of the spiral ce-mra puls sequenc wa compar to that of the flash ce-mra with increas spatial resolut and a 25 % reduc contrast-to-nois ratio . addit , artifact specif to spiral mri could be observ which had no impact on the assess of the renal arteri","ordered_present_kp":[43,707,117,152,252,777,954,1079,352],"keyphrases":["renal multiphase contrast-enhanced angiography","spiral in-plane readout","3D partition encoding","renal vasculature","spatial resolution","deblurring","off-resonance frequencies","image quality","reduced contrast-to-noise ratio","3D spiral MRI","flow artifacts","fast pulse sequence","image reconstruction","FLASH sequence"],"prmu":["P","P","P","P","P","P","P","P","P","R","R","R","M","R"]} {"id":"601","title":"Recent researches of human science on railway systems","abstract":"This paper presents research of human science on railway systems at RTRI. 
They are roughly divided into two categories: research to improve safety and those to improve comfort. On the former subject, for the safeguard against accidents caused by human errors, we have promoted studies of psychological aptitude test, various research to evaluate train drivers' working conditions and environments, and new investigations to minimize the risk of passenger casualties at train accidents. On the latter subject, we have developed new methods to evaluate the riding comfort including that of tilt train, and started research on the improvement of railway facilities for the aged and the disabled from the viewpoint of universal design","tok_text":"recent research of human scienc on railway system \n thi paper present research of human scienc on railway system at rtri . they are roughli divid into two categori : research to improv safeti and those to improv comfort . on the former subject , for the safeguard against accid caus by human error , we have promot studi of psycholog aptitud test , variou research to evalu train driver ' work condit and environ , and new investig to minim the risk of passeng casualti at train accid . 
on the latter subject , we have develop new method to evalu the ride comfort includ that of tilt train , and start research on the improv of railway facil for the age and the disabl from the viewpoint of univers design","ordered_present_kp":[19,116,35,272,286,324,374,473,551,579,628],"keyphrases":["human science","railway systems","RTRI","accidents","human errors","psychological aptitude test","train drivers' working conditions","train accidents","riding comfort","tilt train","railway facilities","safety improvement","comfort improvement","train drivers' working environments","passenger casualties risk minimisation","aged persons","disabled persons","sight impaired","wakefulness level","ergonomics"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R","R","R","M","M","M","U","U","U"]} {"id":"759","title":"Mathematical properties of dominant AHP and concurrent convergence method","abstract":"This study discusses the mathematical structure of the dominant AHP and the concurrent convergence method which were originally developed by Kinoshita and Nakanishi. They introduced a new concept of a regulating alternative into an analyzing tool for a simple evaluation problem with a criterion set and an alternative set. Although the original idea of the dominant AHP and the concurrent convergence method is unique, the dominant AHP and the concurrent convergence method are not sufficiently analyzed in mathematical theory. This study shows that the dominant AHP consists of a pair of evaluation rules satisfying a certain property of overall evaluation vectors. This study also shows that the convergence of concurrent convergence method is guaranteed theoretically","tok_text":"mathemat properti of domin ahp and concurr converg method \n thi studi discuss the mathemat structur of the domin ahp and the concurr converg method which were origin develop by kinoshita and nakanishi . 
they introduc a new concept of a regul altern into an analyz tool for a simpl evalu problem with a criterion set and an altern set . although the origin idea of the domin ahp and the concurr converg method is uniqu , the domin ahp and the concurr converg method are not suffici analyz in mathemat theori . thi studi show that the domin ahp consist of a pair of evalu rule satisfi a certain properti of overal evalu vector . thi studi also show that the converg of concurr converg method is guarante theoret","ordered_present_kp":[21,35,605],"keyphrases":["dominant AHP","concurrent convergence method","overall evaluation vectors"],"prmu":["P","P","P"]} {"id":"1349","title":"Efficient simplicial reconstructions of manifolds from their samples","abstract":"An algorithm for manifold learning is presented. Given only samples of a finite-dimensional differentiable manifold and no a priori knowledge of the manifold's geometry or topology except for its dimension, the goal is to find a description of the manifold. The learned manifold must approximate the true manifold well, both geometrically and topologically, when the sampling density is sufficiently high. The proposed algorithm constructs a simplicial complex based on approximations to the tangent bundle of the manifold. An important property of the algorithm is that its complexity depends on the dimension of the manifold, rather than that of the embedding space. Successful examples are presented in the cases of learning curves in the plane, curves in space, and surfaces in space; in addition, a case when the algorithm fails is analyzed","tok_text":"effici simplici reconstruct of manifold from their sampl \n an algorithm for manifold learn is present . given onli sampl of a finite-dimension differenti manifold and no a priori knowledg of the manifold 's geometri or topolog except for it dimens , the goal is to find a descript of the manifold . 
the learn manifold must approxim the true manifold well , both geometr and topolog , when the sampl densiti is suffici high . the propos algorithm construct a simplici complex base on approxim to the tangent bundl of the manifold . an import properti of the algorithm is that it complex depend on the dimens of the manifold , rather than that of the embed space . success exampl are present in the case of learn curv in the plane , curv in space , and surfac in space ; in addit , a case when the algorithm fail is analyz","ordered_present_kp":[7,76,126,303,336,458,393],"keyphrases":["simplicial reconstructions","manifold learning","finite-dimensional differentiable manifold","learned manifold","true manifold","sampling density","simplicial complex"],"prmu":["P","P","P","P","P","P","P"]} {"id":"1421","title":"Extracting linguistic DNA: NStein goes to work for UPI","abstract":"It's a tantalizing problem for categorization. United Press International (UPI) has more than 700 correspondents creating thousands of stories every week, running the gamut from business news to sports to entertainment to global coverage of America's war on terrorism. And while UPI and other news services have mechanisms for adding keywords and categorizing their content, UPI recognized a need to add more automation to the process. With the recent growth and improvement in tools for Computer-Aided Indexing (CAI), UPI undertook a process of looking at its needs and evaluating the many CAI tools out there. In the end, they chose technology from Montreal-based NStein Technologies. \"Our main objective was to acquire the best CAI tool to help improve our customers' access and interaction with our content,\" says Steve Sweet, CIO at UPI. \"We examined a number of solutions, and NStein's NServer suite clearly came out on top.
The combination of speed, scalability, accuracy, and flexibility was what really sold us.\"","tok_text":"extract linguist dna : nstein goe to work for upi \n it 's a tantal problem for categor . unit press intern ( upi ) ha more than 700 correspond creat thousand of stori everi week , run the gamut from busi news to sport to entertain to global coverag of america 's war on terror . and while upi and other news servic have mechan for ad keyword and categor their content , upi recogn a need to add more autom to the process . with the recent growth and improv in tool for computer-aid index ( cai ) , upi undertook a process of look at it need and evalu the mani cai tool out there . in the end , they chose technolog from montreal-bas nstein technolog . \" our main object wa to acquir the best cai tool to help improv our custom ' access and interact with our content , \" say steve sweet , cio at upi . \" we examin a number of solut , and nstein 's nserver suit clearli came out on top . the combin of speed , scalabl , accuraci , and flexibl wa what realli sold us . \"","ordered_present_kp":[89,46,469,633],"keyphrases":["UPI","United Press International","Computer-Aided Indexing","NStein Technologies","electronic archive","wire service stories"],"prmu":["P","P","P","P","U","M"]} {"id":"872","title":"Shortchanging the future of information technology: the untapped resource","abstract":"Building on ideas from a virtual workshop and additional input from the scientific community, the CISE Directorate at the National Science Foundation established the Information Technology Workforce Program (ITWF) in March 2000 to support a broad set of scientific research studies focused on the under-representation of women and minorities in the information technology workforce. In this paper, we explore various approaches that the funded researchers are taking to address the problem of women in information technology. 
We begin with a brief history of the ITWF, and then focus on some of the research projects in terms of their goals, approaches, and expected outcomes","tok_text":"shortchang the futur of inform technolog : the untap resourc \n build on idea from a virtual workshop and addit input from the scientif commun , the cise director at the nation scienc foundat establish the inform technolog workforc program ( itwf ) in march 2000 to support a broad set of scientif research studi focus on the under-represent of women and minor in the inform technolog workforc . in thi paper , we explor variou approach that the fund research are take to address the problem of women in inform technolog . we begin with a brief histori of the itwf , and then focu on some of the research project in term of their goal , approach , and expect outcom","ordered_present_kp":[47,84,148,169,205,241,288,544],"keyphrases":["untapped resources","virtual workshop","CISE Directorate","National Science Foundation","Information Technology Workforce Program","ITWF","scientific research studies","history","information technology future","women under-representation"],"prmu":["P","P","P","P","P","P","P","P","R","R"]} {"id":"837","title":"Ten suggestions for a gender-equitable CS classroom","abstract":"Though considerable attention has been paid to the creation of a nurturing environment for women in the field of computer science, proposed solutions have primarily focused on activities outside of the classroom. This paper presents a list of suggestions for modifications to both the pedagogy and content of CS courses designed to make the CS classroom environment more inviting for women students","tok_text":"ten suggest for a gender-equit cs classroom \n though consider attent ha been paid to the creation of a nurtur environ for women in the field of comput scienc , propos solut have primarili focus on activ outsid of the classroom . 
thi paper present a list of suggest for modif to both the pedagogi and content of cs cours design to make the cs classroom environ more invit for women student","ordered_present_kp":[339,144,103,287,375],"keyphrases":["nurturing environment","computer science","pedagogy","CS classroom environment","women students","gender-equitable classroom","CS course content"],"prmu":["P","P","P","P","P","R","R"]} {"id":"1048","title":"Parallel and distributed Haskells","abstract":"Parallel and distributed languages specify computations on multiple processors and have a computation language to describe the algorithm, i.e. what to compute, and a coordination language to describe how to organise the computations across the processors. Haskell has been used as the computation language for a wide variety of parallel and distributed languages, and this paper is a comprehensive survey of implemented languages. It outlines parallel and distributed language concepts and classifies Haskell extensions using them. Similar example programs are used to illustrate and contrast the coordination languages, and the comparison is facilitated by the common computation language. A lazy language is not an obvious choice for parallel or distributed computation, and we address the question of why Haskell is a common functional computation language","tok_text":"parallel and distribut haskel \n parallel and distribut languag specifi comput on multipl processor and have a comput languag to describ the algorithm , i.e. what to comput , and a coordin languag to describ how to organis the comput across the processor . haskel ha been use as the comput languag for a wide varieti of parallel and distribut languag , and thi paper is a comprehens survey of implement languag . it outlin parallel and distribut languag concept and classifi haskel extens use them . similar exampl program are use to illustr and contrast the coordin languag , and the comparison is facilit by the common comput languag . 
a lazi languag is not an obviou choic for parallel or distribut comput , and we address the question of whi haskel is a common function comput languag","ordered_present_kp":[13,45,81,180,639,764],"keyphrases":["distributed Haskell","distributed languages","multiple processors","coordination language","lazy language","functional computation language","parallel Haskell","parallel languages","functional programming"],"prmu":["P","P","P","P","P","P","R","R","R"]} {"id":"1030","title":"Comparison of automated digital elevation model extraction results using along-track ASTER and across-track SPOT stereo images","abstract":"A digital elevation model (DEM) can be extracted automatically from stereo satellite images. During the past decade, the most common satellite data used to extract DEM was the across-track SPOT. Recently, the addition of along-track ASTER data, which can be downloaded freely, provides another attractive alternative to extract DEM data. This work compares the automated DEM extraction results using an ASTER stereo pair and a SPOT stereo pair over an area of hilly mountains in Drum Mountain, Utah, when compared to a USGS 7.5-min DEM standard product. The result shows that SPOT produces better DEM results in terms of accuracy and details, if the radiometric variations between the images, taken on subsequent satellite revolutions, are small. Otherwise, the ASTER stereo pair is a better choice because of simultaneous along-track acquisition during a single pass. Compared to the USGS 7.5-min DEM, the ASTER and the SPOT extracted DEMs have a standard deviation of 11.6 and 4.6 m, respectively","tok_text":"comparison of autom digit elev model extract result use along-track aster and across-track spot stereo imag \n a digit elev model ( dem ) can be extract automat from stereo satellit imag . dure the past decad , the most common satellit data use to extract dem wa the across-track spot . 
recent , the addit of along-track aster data , which can be download freeli , provid anoth attract altern to extract dem data . thi work compar the autom dem extract result use an aster stereo pair and a spot stereo pair over an area of hilli mountain in drum mountain , utah , when compar to a usg 7.5-min dem standard product . the result show that spot produc better dem result in term of accuraci and detail , if the radiometr variat between the imag , taken on subsequ satellit revolut , are small . otherwis , the aster stereo pair is a better choic becaus of simultan along-track acquisit dure a singl pass . compar to the usg 7.5-min dem , the aster and the spot extract dem have a standard deviat of 11.6 and 4.6 m , respect","ordered_present_kp":[14,308,78,165,466,707,852],"keyphrases":["automated digital elevation model extraction","across-track SPOT stereo images","stereo satellite images","along-track ASTER data","ASTER stereo pair","radiometric variations","simultaneous along-track acquisition","SPOT stereo image pair"],"prmu":["P","P","P","P","P","P","P","R"]} {"id":"1075","title":"Numerical simulation of information recovery in quantum computers","abstract":"Decoherence is the main problem to be solved before quantum computers can be built. To control decoherence, it is possible to use error correction methods, but these methods are themselves noisy quantum computation processes. In this work, we study the ability of Steane's and Shor's fault-tolerant recovering methods, as well as a modification of Steane's ancilla network, to correct errors in qubits. We test a way to measure correctly ancilla's fidelity for these methods, and state the possibility of carrying out an effective error correction through a noisy quantum channel, even using noisy error correction methods","tok_text":"numer simul of inform recoveri in quantum comput \n decoher is the main problem to be solv befor quantum comput can be built . 
to control decoher , it is possibl to use error correct method , but these method are themselv noisi quantum comput process . in thi work , we studi the abil of stean 's and shor 's fault-toler recov method , as well as a modif of stean 's ancilla network , to correct error in qubit . we test a way to measur correctli ancilla 's fidel for these method , and state the possibl of carri out an effect error correct through a noisi quantum channel , even use noisi error correct method","ordered_present_kp":[0,15,34,168,221,308,366,404,584,551],"keyphrases":["numerical simulation","information recovery","quantum computers","error correction methods","noisy quantum computation processes","fault-tolerant recovering methods","ancilla network","qubits","noisy quantum channel","noisy error correction methods","decoherence control","ancilla fidelity"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","R"]} {"id":"1389","title":"The case for activity based management","abstract":"In today's stormy economic climate businesses need Activity Based Management (ABM) more than ever before. In an economic downturn it is a vital tool for pinpointing a business' most profitable customers, products, regions or channels, as well as uncovering the costs of individual business processes that may need to be improved in order to drive higher profit levels. Changes may be afoot in the ABM market, but Armstrong Laing Group CEO Mike Sherratt argues that businesses need specialists with an ABM focus to keep up with their requirements in such a climate. He looks at what benefits a `best-of-breed' ABM system can offer businesses and contends that businesses must choose carefully when going down the ABM route - and also ask themselves the question whether 'generalist' organisations will be able to deliver the best possible ABM solution","tok_text":"the case for activ base manag \n in today 's stormi econom climat busi need activ base manag ( abm ) more than ever befor . 
in an econom downturn it is a vital tool for pinpoint a busi ' most profit custom , product , region or channel , as well as uncov the cost of individu busi process that may need to be improv in order to drive higher profit level . chang may be afoot in the abm market , but armstrong la group ceo mike sherratt argu that busi need specialist with an abm focu to keep up with their requir in such a climat . he look at what benefit a ` best-of-bre ' abm system can offer busi and contend that busi must choos care when go down the abm rout - and also ask themselv the question whether ' generalist ' organis will be abl to deliv the best possibl abm solut","ordered_present_kp":[398,13],"keyphrases":["activity based management","Armstrong Laing Group","activity based costing","best-of-breed ABM"],"prmu":["P","P","R","R"]} {"id":"799","title":"Electronic reserves at University College London: understanding the needs of academic departments","abstract":"This article describes a recent project at University College London to explore the feasibility of providing a service to improve access to electronic course materials. Funded by the Higher Education Funding Council for England (HEFCE), the project was not simply to set up an electronic reserve. By undertaking a needs analysis of academic departments, the project was able to tailor the design of the new service appropriately. While new initiatives in libraries are often established using project funding, this work was unique in being research-led. It also involved collaboration between library and computing staff and learning technologists","tok_text":"electron reserv at univers colleg london : understand the need of academ depart \n thi articl describ a recent project at univers colleg london to explor the feasibl of provid a servic to improv access to electron cours materi . fund by the higher educ fund council for england ( hefc ) , the project wa not simpli to set up an electron reserv . 
by undertak a need analysi of academ depart , the project wa abl to tailor the design of the new servic appropri . while new initi in librari are often establish use project fund , thi work wa uniqu in be research-l . it also involv collabor between librari and comput staff and learn technologist","ordered_present_kp":[19,204,0,240,607,624],"keyphrases":["electronic reserves","University College London","electronic course materials","Higher Education Funding Council for England","computing staff","learning technologists","academic department needs"],"prmu":["P","P","P","P","P","P","R"]} {"id":"721","title":"The results of experimental studies of the reflooding of fuel-rod assemblies from above and problems for future investigations","abstract":"Problems in studying the reflooding of assemblies from above conducted at foreign and Russian experimental installations are considered. The efficiency of cooling and flow reversal under countercurrent flow of steam and water, as well as the scale effect are analyzed. The tasks for future experiments that are necessary for the development of modern correlations for the loss-of-coolant accident (LOCA) computer codes are stated","tok_text":"the result of experiment studi of the reflood of fuel-rod assembl from abov and problem for futur investig \n problem in studi the reflood of assembl from abov conduct at foreign and russian experiment instal are consid . the effici of cool and flow revers under countercurr flow of steam and water , as well as the scale effect are analyz . 
the task for futur experi that are necessari for the develop of modern correl for the loss-of-cool accid ( loca ) comput code are state","ordered_present_kp":[182,244,262,282,292],"keyphrases":["Russian experimental installations","flow reversal","countercurrent flow","steam","water","fuel-rod assemblies reflooding","cooling efficiency","loss-of-coolant accident computer codes","LOCA computer codes"],"prmu":["P","P","P","P","P","R","R","R","R"]} {"id":"764","title":"Lattice Boltzmann schemes for quantum applications","abstract":"We review the basic ideas behind the quantum lattice Boltzmann equation (LBE), and present a few thoughts on the possible use of such an equation for simulating quantum many-body problems on both (parallel) electronic and quantum computers","tok_text":"lattic boltzmann scheme for quantum applic \n we review the basic idea behind the quantum lattic boltzmann equat ( lbe ) , and present a few thought on the possibl use of such an equat for simul quantum many-bodi problem on both ( parallel ) electron and quantum comput","ordered_present_kp":[0,28,194,254],"keyphrases":["lattice Boltzmann schemes","quantum applications","quantum many-body problems","quantum computers","parallel computing"],"prmu":["P","P","P","P","R"]} {"id":"1331","title":"Enterprise content integration III: Agari Mediaware's Media Star","abstract":"Since we introduced the term Enterprise Content Integration (ECI) in January, the concept has gained momentum in the market. In addition to Context Media's Interchange Platform and Savantech's Photon Commerce, Agari Mediaware's Media Star is in the fray. It is a middleware platform that allows large media companies to integrate their digital systems with great flexibility","tok_text":"enterpris content integr iii : agari mediawar 's media star \n sinc we introduc the term enterpris content integr ( eci ) in januari , the concept ha gain momentum in the market . 
in addit to context media 's interchang platform and savantech 's photon commerc , agari mediawar 's media star is in the fray . it is a middlewar platform that allow larg media compani to integr their digit system with great flexibl","ordered_present_kp":[0,316],"keyphrases":["enterprise content integration","middleware","Agari Mediaware Media Star"],"prmu":["P","P","R"]} {"id":"1374","title":"Using technology to facilitate the design and delivery of warnings","abstract":"This paper describes several ways in which new technologies can assist in the design and delivery of warnings. There are four discussion points: (1) current product information can be delivered via the Internet; (2) computer software and hardware are available to assist in the design, construction, and production of visual and auditory warnings; (3) various detection devices can be used to recognize instances in which warnings might be delivered; and (4) a warning presentation can be modified to fit conditions and persons. Implications, example applications and future prospects of these points are described","tok_text":"use technolog to facilit the design and deliveri of warn \n thi paper describ sever way in which new technolog can assist in the design and deliveri of warn . there are four discuss point : ( 1 ) current product inform can be deliv via the internet ; ( 2 ) comput softwar and hardwar are avail to assist in the design , construct , and product of visual and auditori warn ; ( 3 ) variou detect devic can be use to recogn instanc in which warn might be deliv ; and ( 4 ) a warn present can be modifi to fit condit and person . implic , exampl applic and futur prospect of these point are describ","ordered_present_kp":[203,239,256,357,471],"keyphrases":["product information","Internet","computer software","auditory warnings","warning presentation","computer hardware"],"prmu":["P","P","P","P","P","R"]} {"id":"1459","title":"Wave propagation related to high-speed train. 
A scaled boundary FE-approach for unbounded domains","abstract":"Analysis of wave propagation in solid materials under moving loads is a topic of great interest in railway engineering. The objective of the paper is three-dimensional modelling of high-speed train related ground vibrations; in particular the question of how to account for the unbounded media is addressed. For efficient and accurate modelling of railway structural components taking the unbounded media into account, a hybrid method based on a combination of the conventional finite element method and scaled boundary finite element method is established. In the paper, element matrices and solution procedures for the scaled boundary finite element method (SBFEM) are derived. A non-linear finite element iteration scheme using Lagrange multipliers and coupling between the unbounded domain and the finite element domain are also discussed. Two numerical examples including one example demonstrating the dynamical response of a railroad section are presented to demonstrate the performance of the proposed method","tok_text":"wave propag relat to high-spe train . a scale boundari fe-approach for unbound domain \n analysi of wave propag in solid materi under move load is a topic of great interest in railway engin . the object of the paper is three-dimension model of high-spe train relat ground vibrat ; in particular the question of how to account for the unbound media is address . for effici and accur model of railway structur compon take the unbound media into account , a hybrid method base on a combin of the convent finit element method and scale boundari finit element method is establish . in the paper , element matric and solut procedur for the scale boundari finit element method ( sbfem ) are deriv . a non-linear finit element iter scheme use lagrang multipli and coupl between the unbound domain and the finit element domain are also discuss . 
two numer exampl includ one exampl demonstr the dynam respons of a railroad section are present to demonstr the perform of the propos method","ordered_present_kp":[0,114,175,243,333,234,390,525,591,610,734,884,903],"keyphrases":["wave propagation","solid materials","railway engineering","modelling","high-speed train related ground vibrations","unbounded media","railway structural components","scaled boundary finite element method","element matrices","solution procedures","Lagrange multipliers","dynamical response","railroad section","3D modelling","nonlinear finite element iteration scheme"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","M","M"]} {"id":"1088","title":"Parallel implicit predictor corrector methods","abstract":"The performance of parallel codes for the solution of initial value problems is usually strongly sensitive to the dimension of the continuous problem. This is due to the overhead related to the exchange of information among the processors and motivates the problem of minimizing the amount of communications. According to this principle, we define the so called Parallel Implicit Predictor Corrector Methods and in this class we derive A-stable, L-stable and numerically zero-stable formulas. The latter property refers to the zero-stability condition of a given formula when roundoff errors are introduced in its coefficients due to their representation in finite precision arithmetic. Some numerical experiment show the potentiality of this approach","tok_text":"parallel implicit predictor corrector method \n the perform of parallel code for the solut of initi valu problem is usual strongli sensit to the dimens of the continu problem . thi is due to the overhead relat to the exchang of inform among the processor and motiv the problem of minim the amount of commun . accord to thi principl , we defin the so call parallel implicit predictor corrector method and in thi class we deriv a-stabl , l-stabl and numer zero-st formula . 
the latter properti refer to the zero-st condit of a given formula when roundoff error are introduc in it coeffici due to their represent in finit precis arithmet . some numer experi show the potenti of thi approach","ordered_present_kp":[0,93,447,504,543,612],"keyphrases":["parallel implicit predictor corrector methods","initial value problems","numerically zero-stable formulas","zero-stability condition","roundoff errors","finite precision arithmetic"],"prmu":["P","P","P","P","P","P"]} {"id":"1269","title":"Minimizing the number of successor states in the stubborn set method","abstract":"Combinatorial explosion which occurs in parallel compositions of LTSs can be alleviated by letting the stubborn set method construct on-the-fly a reduced LTS that is CFFD- or CSP-equivalent to the actual parallel composition. This article considers the problem of minimizing the number of successor states of a given state in the reduced LTS. The problem can be solved by constructing an and\/or-graph with weighted vertices and by finding a set of vertices that satisfies a certain constraint such that no set of vertices satisfying the constraint has a smaller sum of weights. Without weights, the and\/or-graph can be constructed in low-degree polynomial time w.r.t. the length of the input of the problem. However, since actions can be nondeterministic and transitions can share target states, it is not known whether the weights are generally computable in polynomial time. Consequently, it is an open problem whether minimizing the number of successor states is as \"easy\" as minimizing the number of successor transitions","tok_text":"minim the number of successor state in the stubborn set method \n combinatori explos which occur in parallel composit of ltss can be allevi by let the stubborn set method construct on-the-fli a reduc lt that is cffd- or csp-equival to the actual parallel composit . 
thi articl consid the problem of minim the number of successor state of a given state in the reduc lt . the problem can be solv by construct an and \/ or-graph with weight vertic and by find a set of vertic that satisfi a certain constraint such that no set of vertic satisfi the constraint ha a smaller sum of weight . without weight , the and \/ or-graph can be construct in low-degre polynomi time w.r.t . the length of the input of the problem . howev , sinc action can be nondeterminist and transit can share target state , it is not known whether the weight are gener comput in polynomi time . consequ , it is an open problem whether minim the number of successor state is as \" easi \" as minim the number of successor transit","ordered_present_kp":[43,65,429,640,219],"keyphrases":["stubborn set method","combinatorial explosion","CSP-equivalence","weighted vertices","low-degree polynomial time"],"prmu":["P","P","P","P","P"]} {"id":"679","title":"Himalayan information system: a proposed model","abstract":"The information explosion and the development in information technology force us to develop information systems in various fields. The research on Himalaya has achieved phenomenal growth in recent years in India. The information requirements of Himalayan researchers are divergent in nature. In order to meet these divergent needs, all information generated in various Himalayan research institutions has to be collected and organized to facilitate free flow of information. This paper describes the need for a system for Himalayan information. It also presents the objectives of Himalayan information system (HIMIS). It discusses in brief the idea of setting up a HIMIS and explains its utility to the users. It appeals to the government for supporting the development of such system","tok_text":"himalayan inform system : a propos model \n the inform explos and the develop in inform technolog forc us to develop inform system in variou field . 
the research on himalaya ha achiev phenomen growth in recent year in india . the inform requir of himalayan research are diverg in natur . in order to meet these diverg need , all inform gener in variou himalayan research institut ha to be collect and organ to facilit free flow of inform . thi paper describ the need for a system for himalayan inform . it also present the object of himalayan inform system ( himi ) . it discuss in brief the idea of set up a himi and explain it util to the user . it appeal to the govern for support the develop of such system","ordered_present_kp":[47,80,217,229,558,664],"keyphrases":["information explosion","information technology","India","information requirements","HIMIS","government","Himalayan information system model","information network"],"prmu":["P","P","P","P","P","P","R","M"]} {"id":"917","title":"Efficient transitive closure reasoning in a combined class\/part\/containment hierarchy","abstract":"Class hierarchies form the backbone of many implemented knowledge representation and reasoning systems. They are used for inheritance, classification and transitive closure reasoning. Part hierarchies are also important in artificial intelligence. Other hierarchies, e.g. containment hierarchies, have received less attention in artificial intelligence. This paper presents an architecture and an implementation of a hierarchy reasoner that integrates a class hierarchy, a part hierarchy, and a containment hierarchy into one structure. In order to make an implemented reasoner useful, it needs to operate at least at speeds comparable to human reasoning. As real-world hierarchies are always large, special techniques need to be used to achieve this. We have developed a set of parallel algorithms and a data representation called maximally reduced tree cover for that purpose. The maximally reduced tree cover is an improvement of a materialized transitive closure representation which has appeared in the literature. 
Our experiments with a medical vocabulary show that transitive closure reasoning for combined class\/part\/containment hierarchies in near constant time is possible for a fixed hardware configuration","tok_text":"effici transit closur reason in a combin class \/ part \/ contain hierarchi \n class hierarchi form the backbon of mani implement knowledg represent and reason system . they are use for inherit , classif and transit closur reason . part hierarchi are also import in artifici intellig . other hierarchi , e.g. contain hierarchi , have receiv less attent in artifici intellig . thi paper present an architectur and an implement of a hierarchi reason that integr a class hierarchi , a part hierarchi , and a contain hierarchi into one structur . in order to make an implement reason use , it need to oper at least at speed compar to human reason . as real-world hierarchi are alway larg , special techniqu need to be use to achiev thi . we have develop a set of parallel algorithm and a data represent call maxim reduc tree cover for that purpos . the maxim reduc tree cover is an improv of a materi transit closur represent which ha appear in the literatur . 
our experi with a medic vocabulari show that transit closur reason for combin class \/ part \/ contain hierarchi in near constant time is possibl for a fix hardwar configur","ordered_present_kp":[7,127,76,229,56,756,781,801,887,958,972,1104,183,229,263,193],"keyphrases":["transitive closure reasoning","containment hierarchy","class hierarchy","knowledge representation","inheritance","classification","part hierarchy","part hierarchy","artificial intelligence","parallel algorithms","data representation","maximally reduced tree cover","materialized transitive closure representation","experiments","medical vocabulary","fixed hardware configuration","parallel reasoning","part hierarchies"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","P"]} {"id":"585","title":"Fuzzy system modeling in pharmacology: an improved algorithm","abstract":"In this paper, we propose an improved fuzzy system modeling algorithm to address some of the limitations of the existing approaches identified during our modeling with pharmacological data. This algorithm differs from the existing ones in its approach to the cluster validity problem (i.e., number of clusters), the projection schema (i.e., input membership assignment and rule determination), and significant input determination. The new algorithm is compared with the Bazoon-Turksen model, which is based on the well-known Sugeno-Yasukawa approach. The comparison was made in terms of predictive performance using two different data sets. The first comparison was with a two variable nonlinear function prediction problem and the second comparison was with a clinical pharmacokinetic modeling problem. It is shown that the proposed algorithm provides more precise predictions. 
Determining the degree of significance for each input variable allows the user to distinguish their relative importance","tok_text":"fuzzi system model in pharmacolog : an improv algorithm \n in thi paper , we propos an improv fuzzi system model algorithm to address some of the limit of the exist approach identifi dure our model with pharmacolog data . thi algorithm differ from the exist one in it approach to the cluster valid problem ( i.e. , number of cluster ) , the project schema ( i.e. , input membership assign and rule determin ) , and signific input determin . the new algorithm is compar with the bazoon-turksen model , which is base on the well-known sugeno-yasukawa approach . the comparison wa made in term of predict perform use two differ data set . the first comparison wa with a two variabl nonlinear function predict problem and the second comparison wa with a clinic pharmacokinet model problem . it is shown that the propos algorithm provid more precis predict . determin the degre of signific for each input variabl , allow the user to distinguish their rel import","ordered_present_kp":[0,22,283,340,414,593,756],"keyphrases":["fuzzy system modeling","pharmacology","cluster validity problem","projection schema","significant input determination","predictive performance","pharmacokinetic modeling","fuzzy sets","fuzzy logic"],"prmu":["P","P","P","P","P","P","P","R","M"]} {"id":"111","title":"Modification for synchronization of Rossler and Chen chaotic systems","abstract":"Active control is an effective method for making two identical Rossler and Chen systems be synchronized. However, this method works only for a certain class of chaotic systems with known parameters both in drive systems and response systems. Modification based on Lyapunov stability theory is proposed in order to overcome this limitation.
An adaptive synchronization controller, which can make the states of two identical Rossler and Chen systems globally asymptotically synchronized in the presence of system's unknown constant parameters, is derived. Especially, when some unknown parameters are positive, we can make the controller more simple; besides, the controller is independent of those positive uncertain parameters. At last, when the condition that arbitrary unknown parameters in two systems are identical constants is cancelled, we demonstrate that it is possible to synchronize two chaotic systems. All results are proved using a well-known Lyapunov stability theorem. Numerical simulations are given to validate the proposed synchronization approach","tok_text":"modif for synchron of rossler and chen chaotic system \n activ control is an effect method for make two ident rossler and chen system be synchron . howev , thi method work onli for a certain class of chaotic system with known paramet both in drive system and respons system . modif base on lyapunov stabil theori is propos in order to overcom thi limit . an adapt synchron control , which can make the state of two ident rossler and chen system global asymptot synchron in the presenc of system 's unknown constant paramet , is deriv . especi , when some unknown paramet are posit , we can make the control more simpl , besid , the control is independ of those posit uncertain paramet . at last , when the condit that arbitrari unknown paramet in two system are ident constant is cancel , we demonstr that it is possibl to synchron two chaotic system . all result are prove use a well-known lyapunov stabil theorem .
numer simul are given to valid the propos synchron approach","ordered_present_kp":[10,34,56,258,289,357,444],"keyphrases":["synchronization","Chen chaotic systems","active control","response systems","Lyapunov stability theory","adaptive synchronization controller","global asymptotic synchronization","Rossler chaotic systems"],"prmu":["P","P","P","P","P","P","P","R"]} {"id":"1195","title":"Sharpening the estimate of the stability constant in the maximum-norm of the Crank-Nicolson scheme for the one-dimensional heat equation","abstract":"This paper is concerned with the stability constant C\/sub infinity \/ in the maximum-norm of the Crank-Nicolson scheme applied to the one-dimensional heat equation. A well known result due to S.J. Serdyukova is that C\/sub infinity \/ < 23. In the present paper, by using a sharp resolvent estimate for the discrete Laplacian together with the Cauchy formula, it is shown that 3 or= 3, with the single exception of P(9,3), whose crossing number is 2","tok_text":"the cross number of p(n , 3 ) \n it is prove that the cross number of the gener petersen graph p(3k + h , 3 ) is k + h if h in { 0 , 2 } and k + 3 if h = 1 , for each k > or= 3 , with the singl except of p(9,3 ) , whose cross number is 2","ordered_present_kp":[4,73],"keyphrases":["crossing number","generalized Petersen graph"],"prmu":["P","P"]} {"id":"1376","title":"Enhanced product support through intelligent product manuals","abstract":"The scope of this paper is the provision of intelligent product support within the distributed Intranet\/Internet environment. From the point of view of user requirements, the limitations of conventional product manuals and methods of authoring them are first outlined. It is argued that enhanced product support requires new technology solutions both for product manuals and for their authoring and presentation. The concept and the architecture of intelligent product manuals are then discussed.
A prototype system called ProARTWeb is presented to demonstrate advanced features of intelligent product manuals. Next, the problem of producing such manuals in a cost-effective way is addressed and a concurrent engineering approach to their authoring is proposed. An integrated environment for collaborative authoring called ProAuthor is described to illustrate the approach suggested and to show how consistent, up-to-date and user-oriented product manuals can be designed. The solutions presented here enable product knowledge to be captured and delivered to users and developers of product manuals when, where and in the form they need it","tok_text":"enhanc product support through intellig product manual \n the scope of thi paper is the provis of intellig product support within the distribut intranet \/ internet environ . from the point of view of user requir , the limit of convent product manual and method of author them are first outlin . it is argu that enhanc product support requir new technolog solut both for product manual and for their author and present . the concept and the architectur of intellig product manual are then discuss . a prototyp system call proartweb is present to demonstr advanc featur of intellig product manual . next , the problem of produc such manual in a cost-effect way is address and a concurr engin approach to their author is propos . an integr environ for collabor author call proauthor is describ to illustr the approach suggest and to show how consist , up-to-d and user-oriented-product manual can be design .
the solut present here enabl product knowledg to be captur and deliv to user and develop of product manual when , where and in the form they need it","ordered_present_kp":[97,31,40,520,675,934],"keyphrases":["intelligent product manuals","product manuals","intelligent product support","ProARTWeb","concurrent engineering","product knowledge","technical information"],"prmu":["P","P","P","P","P","P","U"]} {"id":"1032","title":"Satellite image collection optimization","abstract":"Imaging satellite systems represent a high capital cost. Optimizing the collection of images is critical for both satisfying customer orders and building a sustainable satellite operations business. We describe the functions of an operational, multivariable, time dynamic optimization system that maximizes the daily collection of satellite images. A graphical user interface allows the operator to quickly see the results of what if adjustments to an image collection plan. Used for both long range planning and daily collection scheduling of Space Imaging's IKONOS satellite, the satellite control and tasking (SCT) software allows collection commands to be altered up to 10 min before upload to the satellite","tok_text":"satellit imag collect optim \n imag satellit system repres a high capit cost . optim the collect of imag is critic for both satisfi custom order and build a sustain satellit oper busi . we describ the function of an oper , multivari , time dynam optim system that maxim the daili collect of satellit imag . a graphic user interfac allow the oper to quickli see the result of what if adjust to an imag collect plan . 
use for both long rang plan and daili collect schedul of space imag 's ikono satellit , the satellit control and task ( sct ) softwar allow collect command to be alter up to 10 min befor upload to the satellit","ordered_present_kp":[0,30,308,395,428,447,555],"keyphrases":["satellite image collection optimization","imaging satellite systems","graphical user interface","image collection plan","long range planning","daily collection scheduling","collection commands","multivariable time dynamic optimization system","Space Imaging IKONOS satellite","satellite control tasking software"],"prmu":["P","P","P","P","P","P","P","R","R","R"]} {"id":"1077","title":"Quantum learning and universal quantum matching machine","abstract":"Suppose that three kinds of quantum systems are given in some unknown states |f>\/sup (X)N\/, |g\/sub 1\/>\/sup (X)K\/, and |g\/sub 2\/>\/sup (X)K\/, and we want to decide which template state |g\/sub 1\/> or |g\/sub 2\/>, each representing the feature of the pattern class C\/sub 1\/ or C\/sub 2\/, respectively, is closest to the input feature state |f>. This is an extension of the pattern matching problem into the quantum domain. Assuming that these states are known a priori to belong to a certain parametric family of pure qubit systems, we derive two kinds of matching strategies. The first one is a semiclassical strategy that is obtained by the natural extension of conventional matching strategies and consists of a two-stage procedure: identification (estimation) of the unknown template states to design the classifier (learning process to train the classifier) and classification of the input system into the appropriate pattern class based on the estimated results. The other is a fully quantum strategy without any intermediate measurement, which we might call as the universal quantum matching machine. 
We present the Bayes optimal solutions for both strategies in the case of K=1, showing that there certainly exists a fully quantum matching procedure that is strictly superior to the straightforward semiclassical extension of the conventional matching strategy based on the learning process","tok_text":"quantum learn and univers quantum match machin \n suppos that three kind of quantum system are given in some unknown state |f>\/sup ( x)n\/ , |g \/ sub 1\/>\/sup ( x)k\/ , and |g \/ sub 2\/>\/sup ( x)k\/ , and we want to decid which templat state |g \/ sub 1\/ > or |g \/ sub 2\/ > , each repres the featur of the pattern class c \/ sub 1\/ or c \/ sub 2\/ , respect , is closest to the input featur state |f > . thi is an extens of the pattern match problem into the quantum domain . assum that these state are known a priori to belong to a certain parametr famili of pure qubit system , we deriv two kind of match strategi . the first one is a semiclass strategi that is obtain by the natur extens of convent match strategi and consist of a two-stag procedur : identif ( estim ) of the unknown templat state to design the classifi ( learn process to train the classifi ) and classif of the input system into the appropri pattern class base on the estim result . the other is a fulli quantum strategi without ani intermedi measur , which we might call as the univers quantum match machin . 
we present the bay optim solut for both strategi in the case of k=1 , show that there certainli exist a fulli quantum match procedur that is strictli superior to the straightforward semiclass extens of the convent match strategi base on the learn process","ordered_present_kp":[0,18,299,418,449,555,591,627,724,966,1087,1182,1254,591,816],"keyphrases":["quantum learning","universal quantum matching machine","pattern class","pattern matching problem","quantum domain","qubit systems","matching strategies","matching strategies","semiclassical strategy","two-stage procedure","learning process","quantum strategy","Bayes optimal solutions","quantum matching procedure","semiclassical extension","matching strategy"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"686","title":"Technology CAD of SiGe-heterojunction field effect transistors","abstract":"A 2D virtual wafer fabrication simulation suite has been employed for the technology CAD of SiGe channel heterojunction field effect transistors (HFETs). Complete fabrication process of SiGe p-HFETs has been simulated. The SiGe material parameters and mobility model were incorporated to simulate Si\/SiGe p-HFETs with a uniform germanium channel having an L\/sub eff\/ of 0.5 mu m. A significant improvement in linear transconductance is observed when compared to control-silicon p-MOSFETs","tok_text":"technolog cad of sige-heterojunct field effect transistor \n a 2d virtual wafer fabric simul suit ha been employ for the technolog cad of sige channel heterojunct field effect transistor ( hfet ) . complet fabric process of sige p-hfet ha been simul . the sige materi paramet and mobil model were incorpor to simul si \/ sige p-hfet with a uniform germanium channel have an l \/ sub eff\/ of 0.5 mu m. 
a signific improv in linear transconduct is observ when compar to control-silicon p-mosfet","ordered_present_kp":[0,22,17,205,260,279,419],"keyphrases":["technology CAD","SiGe","heterojunction field effect transistors","fabrication process","material parameters","mobility model","linear transconductance","uniform channel","0.5 micron"],"prmu":["P","P","P","P","P","P","P","R","M"]} {"id":"1296","title":"Development of visual design steering as an aid in large-scale multidisciplinary design optimization. I. Method development","abstract":"A modified paradigm of computational steering (CS), termed visual design steering (VDS), is developed in this paper. The VDS paradigm is applied to optimal design problems to provide a means for capturing and enabling designer insights. VDS allows a designer to make decisions before, during or after an analysis or optimization via a visual environment, in order to effectively steer the solution process. The objective of VDS is to obtain a better solution in less time through the use of designer knowledge and expertise. Using visual representations of complex systems in this manner enables human experience and judgement to be incorporated into the optimal design process at appropriate steps, rather than having traditional black box solvers return solutions from a prescribed input set. Part I of this paper focuses on the research issues pertaining to the Graph Morphing visualization method created to represent an n-dimensional optimization problem using 2-dimensional and 3-dimensional visualizations. Part II investigates the implementation of the VDS paradigm, using the graph morphing approach, to improve an optimal design process. Specifically, the following issues are addressed: impact of design variable changes on the optimal design space; identification of possible constraint redundancies; impact of constraint tolerances on the optimal solution; and smoothness of the objective function contours.
It is demonstrated that graph morphing can effectively reduce the complexity and computational time associated with some optimization problems","tok_text":"develop of visual design steer as an aid in large-scal multidisciplinari design optim . i. method develop \n a modifi paradigm of comput steer ( cs ) , term visual design steer ( vd ) , is develop in thi paper . the vd paradigm is appli to optim design problem to provid a mean for captur and enabl design insight . vd allow a design to make decis befor , dure or after an analysi or optim via a visual environ , in order to effect steer the solut process . the object of vd is to obtain a better solut in less time through the use of design knowledg and expertis . use visual represent of complex system in thi manner enabl human experi and judgement to be incorpor into the optim design process at appropri step , rather than have tradit black box solver return solut from a prescrib input set . part i of thi paper focus on the research issu pertain to the graph morph visual method creat to repres an n-dimension optim problem use 2-dimension and 3-dimension visual . part ii investig the implement of the vd paradigm , use the graph morph approach , to improv an optim design process . specif , the follow issu are address : impact of design variabl chang on the optim design space ; identif of possibl constraint redund ; impact of constraint toler on the optim solut : and smooth of the object function contour . 
it is demonstr that graph morph can effect reduc the complex and comput time associ with some optim problem","ordered_present_kp":[11,44,129,239,569,589,859,904,1384,589,1139,1207,1237],"keyphrases":["visual design steering","large-scale multidisciplinary design optimization","computational steering","optimal design problems","visual representations","complex systems","complexity","graph morphing visualization method","n-dimensional optimization","design variable changes","constraint redundancies","constraint tolerances","computational time","designer decision making","3D visualizations","2D visualizations","objective function contour smoothness"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","R","M","M","R"]} {"id":"915","title":"A meteorological fuzzy expert system incorporating subjective user input","abstract":"We present a fuzzy expert system, MEDEX, for forecasting gale-force winds in the Mediterranean basin. The most successful local wind forecasting in this region is achieved by an expert human forecaster with access to numerical weather prediction products. That forecaster's knowledge is expressed as a set of 'rules-of-thumb'. Fuzzy set methodologies have proved well suited for encoding the forecaster's knowledge, and for accommodating the uncertainty inherent in the specification of rules, as well as in subjective and objective input. MEDEX uses fuzzy set theory in two ways: as a fuzzy rule base in the expert system, and for fuzzy pattern matching to select dominant wind circulation patterns as one input to the expert system. The system was developed, tuned, and verified over a two-year period, during which the weather conditions from 539 days were individually analyzed. 
Evaluations of MEDEX performance for both the onset and cessation of winter and summer winds are presented, and demonstrate that MEDEX has forecasting skill competitive with the US Navy's regional forecasting center in Rota, Spain","tok_text":"a meteorolog fuzzi expert system incorpor subject user input \n we present a fuzzi expert system , medex , for forecast gale-forc wind in the mediterranean basin . the most success local wind forecast in thi region is achiev by an expert human forecast with access to numer weather predict product . that forecast 's knowledg is express as a set of ' rules-of-thumb ' . fuzzi set methodolog have prove well suit for encod the forecast 's knowledg , and for accommod the uncertainti inher in the specif of rule , as well as in subject and object input . medex use fuzzi set theori in two way : as a fuzzi rule base in the expert system , and for fuzzi pattern match to select domin wind circul pattern as one input to the expert system . the system wa develop , tune , and verifi over a two-year period , dure which the weather condit from 539 day were individu analyz . 
evalu of medex perform for both the onset and cessat of winter and summer wind are present , and demonstr that medex ha forecast skill competit with the us navi 's region forecast center in rota , spain","ordered_present_kp":[2,42,98,141,267,350,562,469,597,644,680],"keyphrases":["meteorological fuzzy expert system","subjective user input","MEDEX","Mediterranean basin","numerical weather prediction products","rules-of-thumb","uncertainty","fuzzy set theory","fuzzy rule base","fuzzy pattern matching","wind circulation patterns","gale-force wind forecasting","subjective variables","rule specification"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R","M","R"]} {"id":"587","title":"An improved self-organizing CPN-based fuzzy system with adaptive back-propagation algorithm","abstract":"This paper describes an improved self-organizing CPN-based (Counter-Propagation Network) fuzzy system. Two self-organizing algorithms IUSOCPN and ISSOCPN, being unsupervised and supervised respectively, are introduced. The idea is to construct the neural-fuzzy system with a two-phase hybrid learning algorithm, which utilizes a CPN-based nearest-neighbor clustering scheme for both structure learning and initial parameters setting, and a gradient descent method with adaptive learning rate for fine tuning the parameters. The obtained network can be used in the same way as a CPN to model and control dynamic systems, while it has a faster learning speed than the original back-propagation algorithm. The comparative results on the examples suggest that the method is fairly efficient in terms of simple structure, fast learning speed, and relatively high modeling accuracy","tok_text":"an improv self-organ cpn-base fuzzi system with adapt back-propag algorithm \n thi paper describ an improv self-organ cpn-base ( counter-propag network ) fuzzi system . two self-organ algorithm iusocpn and issocpn , be unsupervis and supervis respect , are introduc . 
the idea is to construct the neural-fuzzi system with a two-phas hybrid learn algorithm , which util a cpn-base nearest-neighbor cluster scheme for both structur learn and initi paramet set , and a gradient descent method with adapt learn rate for fine tune the paramet . the obtain network can be use in the same way as a cpn to model and control dynam system , while it ha a faster learn speed than the origin back-propag algorithm . the compar result on the exampl suggest that the method is fairli effici in term of simpl structur , fast learn speed , and rel high model accuraci","ordered_present_kp":[128,296,332,465,420,439],"keyphrases":["Counter-Propagation Network","neural-fuzzy system","hybrid learning","structure learning","initial parameters setting","gradient descent","self-organizing fuzzy system","back-propagation learning scheme"],"prmu":["P","P","P","P","P","P","R","R"]} {"id":"950","title":"Quantum sensitive dependence","abstract":"Wave functions of bounded quantum systems with time-independent potentials, being almost periodic functions, cannot have time asymptotics as in classical chaos. However, bounded quantum systems with time-dependent interactions, as used in quantum control, may have continuous spectrum and the rate of growth of observables is an issue of both theoretical and practical concern. Rates of growth in quantum mechanics are discussed by constructing quantities with the same physical meaning as those involved in the classical Lyapunov exponent. A generalized notion of quantum sensitive dependence is introduced and the mathematical structure of the operator matrix elements that correspond to different types of growth is characterized","tok_text":"quantum sensit depend \n wave function of bound quantum system with time-independ potenti , be almost period function , can not have time asymptot as in classic chao . 
howev , bound quantum system with time-depend interact , as use in quantum control , may have continu spectrum and the rate of growth of observ is an issu of both theoret and practic concern . rate of growth in quantum mechan are discuss by construct quantiti with the same physic mean as those involv in the classic lyapunov expon . a gener notion of quantum sensit depend is introduc and the mathemat structur of the oper matrix element that correspond to differ type of growth is character","ordered_present_kp":[0,24,41,67,101,132,152,201,234,476,586],"keyphrases":["quantum sensitive dependence","wave functions","bounded quantum systems","time-independent potentials","periodic functions","time asymptotics","classical chaos","time-dependent interactions","quantum control","classical Lyapunov exponent","operator matrix elements","quantum complexity"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","M"]} {"id":"113","title":"Quantum limit on computational time and speed","abstract":"We investigate if physical laws can impose limits on computational time and speed of a quantum computer built from elementary particles. We show that the product of the speed and the running time of a quantum computer is limited by the type of fundamental interactions present inside the system. This will help us to decide as to what type of interaction should be allowed in building quantum computers in achieving the desired speed","tok_text":"quantum limit on comput time and speed \n we investig if physic law can impos limit on comput time and speed of a quantum comput built from elementari particl . we show that the product of the speed and the run time of a quantum comput is limit by the type of fundament interact present insid the system . 
thi will help us to decid as to what type of interact should be allow in build quantum comput in achiev the desir speed","ordered_present_kp":[0,17,113,259],"keyphrases":["quantum limit","computational time","quantum computer","fundamental interactions","computational speed"],"prmu":["P","P","P","P","R"]} {"id":"1197","title":"Numerical behaviour of stable and unstable solitary waves","abstract":"In this paper we analyse the behaviour in time of the numerical approximations to solitary wave solutions of the generalized Benjamin-Bona-Mahony equation. This equation possesses an important property: the stability of these solutions depends on their velocity. We identify the error propagation mechanisms in both the stable and unstable case. In particular, we show that in the stable case, numerical methods that preserve some conserved quantities of the problem are more appropriate for the simulation of this kind of solutions","tok_text":"numer behaviour of stabl and unstabl solitari wave \n in thi paper we analys the behaviour in time of the numer approxim to solitari wave solut of the gener benjamin-bona-mahoni equat . thi equat possess an import properti : the stabil of these solut depend on their veloc . we identifi the error propag mechan in both the stabl and unstabl case . 
in particular , we show that in the stabl case , numer method that preserv some conserv quantiti of the problem are more appropri for the simul of thi kind of solut","ordered_present_kp":[0,29,105,150,290,396],"keyphrases":["numerical behaviour","unstable solitary waves","numerical approximations","generalized Benjamin-Bona-Mahony equation","error propagation mechanisms","numerical methods","stable solitary waves"],"prmu":["P","P","P","P","P","P","R"]} {"id":"928","title":"Weighted energy linear quadratic regulator vibration control of piezoelectric composite plates","abstract":"In this paper on finite element linear quadratic regulator (LQR) vibration control of smart piezoelectric composite plates, we propose the use of the total weighted energy method to select the weighting matrices. By constructing the optimal performance function as a relative measure of the total kinetic energy, strain energy and input energy of the system, only three design variables need to be considered to achieve a balance between the desired higher damping effect and lower input cost. Modal control analysis is used to interpret the effects of three energy weight factors on the damping ratios and modal voltages and it is shown that the modal damping effect will increase with the kinetic energy weight factor, approaching square root (2\/2) as the strain energy weight factor increases and decrease with the input energy weight factor. Numerical results agree well with those from the modal control analysis. 
Since the control problem is simplified to three design variables only, the computational cost will be greatly reduced and a more accurate structural control analysis becomes more attractive for large systems","tok_text":"weight energi linear quadrat regul vibrat control of piezoelectr composit plate \n in thi paper on finit element linear quadrat regul ( lqr ) vibrat control of smart piezoelectr composit plate , we propos the use of the total weight energi method to select the weight matric . by construct the optim perform function as a rel measur of the total kinet energi , strain energi and input energi of the system , onli three design variabl need to be consid to achiev a balanc between the desir higher damp effect and lower input cost . modal control analysi is use to interpret the effect of three energi weight factor on the damp ratio and modal voltag and it is shown that the modal damp effect will increas with the kinet energi weight factor , approach squar root ( 2\/2 ) as the strain energi weight factor increas and decreas with the input energi weight factor . numer result agre well with those from the modal control analysi . 
sinc the control problem is simplifi to three design variabl onli , the comput cost will be greatli reduc and a more accur structur control analysi becom more attract for larg system","ordered_present_kp":[98,35,159,219,260,293,339,360,495,530,620,777,863,1002,1053],"keyphrases":["vibration control","finite element linear quadratic regulator","smart piezoelectric composite plates","total weighted energy","weighting matrices","optimal performance function","total kinetic energy","strain energy","damping effect","modal control analysis","damping ratios","strain energy weight factor","numerical results","computational cost","structural control analysis"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1256","title":"High-speed consistency checking for hypothetical reasoning systems using inference path network","abstract":"Hypothetical reasoning is popular in fault diagnostics and design systems, but slow reasoning speed is its drawback. The goal of the current study is developing hypothetical reasoning based on an inference path network, which would overcome this drawback. In hypothetical reasoning systems based on an inference path network, there is much room for improvement regarding the computing costs of connotation processing and consistency checking. The authors of this study demonstrate improvement ideas regarding one of these problems, namely, consistency checking. First, the authors obtained necessary and sufficient conditions under which inconsistencies occur during hypothesis composition. Based on the obtained results, the authors proposed an algorithm for speeding up the process of consistency checking. Processing with this algorithm in its core consists of transforming the inference path network in such a way that inconsistencies do not occur during the hypothesis composition, under the condition of unchanged solution hypotheses. 
The efficiency of this algorithm was confirmed by tests","tok_text":"high-spe consist check for hypothet reason system use infer path network \n hypothet reason is popular in fault diagnost and design system , but slow reason speed is it drawback . the goal of the current studi is develop hypothet reason base on an infer path network , which would overcom thi drawback . in hypothet reason system base on an infer path network , there is much room for improv regard the comput cost of connot process and consist check . the author of thi studi demonstr improv idea regard one of these problem , name , consist check . first , the author obtain necessari and suffici condit under which inconsist occur dure hypothesi composit . base on the obtain result , the author propos an algorithm for speed up the process of consist check . process with thi algorithm in it core consist of transform the infer path network in such a way that inconsist do not occur dure the hypothesi composit , under the condit of unchang solut hypothes . the effici of thi algorithm wa confirm by test","ordered_present_kp":[27,105,0,54,149,617,638,722],"keyphrases":["high-speed consistency checking","hypothetical reasoning","inference path network","fault diagnostics","reasoning speed","inconsistencies","hypothesis composition","speed up"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"1213","title":"A knowledge intensive multi-agent framework for cooperative\/collaborative design modeling and decision support of assemblies","abstract":"Multi-agent modeling has emerged as a promising discipline for dealing with the decision making process in distributed information system applications. One of such applications is the modeling of distributed design or manufacturing processes which can link up various designs or manufacturing processes to form a virtual consortium on a global basis. 
This paper proposes a novel knowledge intensive multi-agent cooperative\/collaborative framework for concurrent intelligent design and assembly planning, which integrates product design, design for assembly, assembly planning, assembly system design, and assembly simulation subjected to econo-technical evaluations. An AI protocol based method is proposed to facilitate the integration of intelligent agents for assembly design, planning, evaluation and simulation processes. A unified class of knowledge intensive Petri nets is defined using the OO knowledge-based Petri net approach and used as an AI protocol for handling both the integration and the negotiation problems among multi-agents. The detailed cooperative\/collaborative mechanism and algorithms are given based on the knowledge object cooperation formalisms. As such, the assembly-oriented design system can easily be implemented under the multi-agent-based knowledge-intensive Petri net framework with concurrent integration of multiple cooperative knowledge sources and software. Thus, product design and assembly planning can be carried out simultaneously and intelligently in an entirely computer-aided concurrent design and assembly planning system","tok_text":"a knowledg intens multi-ag framework for cooper \/ collabor design model and decis support of assembl \n multi-ag model ha emerg as a promis disciplin for deal with the decis make process in distribut inform system applic . one of such applic is the model of distribut design or manufactur process which can link up variou design or manufactur process to form a virtual consortium on a global basi . thi paper propos a novel knowledg intens multi-ag cooper \/ collabor framework for concurr intellig design and assembl plan , which integr product design , design for assembl , assembl plan , assembl system design , and assembl simul subject to econo-techn evalu . 
an ai protocol base method is propos to facilit the integr of intellig agent for assembl design , plan , evalu and simul process . a unifi class of knowledg intens petri net is defin use the oo knowledge-bas petri net approach and use as an ai protocol for handl both the integr and the negoti problem among multi-ag . the detail cooper \/ collabor mechan and algorithm are given base on the knowledg object cooper formal . as such , the assembly-ori design system can easili be implement under the multi-agent-bas knowledge-intens petri net framework with concurr integr of multipl cooper knowledg sourc and softwar . thu , product design and assembl plan can be carri out simultan and intellig in an entir computer-aid concurr design and assembl plan system","ordered_present_kp":[2,50,76,189,257,360,480,508,536,553,617,665,810,1053],"keyphrases":["knowledge intensive multi-agent framework","collaborative design modeling","decision support","distributed information system applications","distributed design","virtual consortium","concurrent intelligent design","assembly planning","product design","design for assembly","assembly simulation","AI protocol","knowledge intensive Petri nets","knowledge object cooperation","cooperative framework","agent negotiation"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R"]} {"id":"646","title":"Vibration control of structure by using tuned mass damper (development of system which suppress displacement of auxiliary mass)","abstract":"In vibration control of a structure by using an active tuned mass damper (ATMD), stroke of the auxiliary mass is so limited that it is difficult to control the vibration in the case of large disturbance input. In this paper, two methods are proposed for the problem. One of the methods is a switching control system by two types of controllers. 
One of the controllers is a normal controller under small relative displacement of the auxiliary mass, and the other is not effective only for first mode of vibration under large relative displacement of the auxiliary mass. New variable gain control system is constructed by switching these two controllers. The other method is the brake system. In active vibration control, it is necessary to use actuator for active control. By using the actuator, the proposed system puts on the brake to suppress displacement increase of the auxiliary mass under large disturbance input. Finally, the systems are designed and the effectiveness of the systems is confirmed by the simulation","tok_text":"vibrat control of structur by use tune mass damper ( develop of system which suppress displac of auxiliari mass ) \n in vibrat control of a structur by use an activ tune mass damper ( atmd ) , stroke of the auxiliari mass is so limit that it is difficult to control the vibrat in the case of larg disturb input . in thi paper , two method are propos for the problem . one of the method is a switch control system by two type of control . one of the control is a normal control under small rel displac of the auxiliari mass , and the other is not effect onli for first mode of vibrat under larg rel displac of the auxiliari mass . new variabl gain control system is construct by switch these two control . the other method is the brake system . in activ vibrat control , it is necessari to use actuat for activ control . by use the actuat , the propos system put on the brake to suppress displac increas of the auxiliari mass under larg disturb input . 
final , the system are design and the effect of the system is confirm by the simul","ordered_present_kp":[34,0,633,728,792,803,7],"keyphrases":["vibration control","controllers","tuned mass damper","variable gain control system","brake system","actuator","active control","auxiliary mass displacement suppression"],"prmu":["P","P","P","P","P","P","P","R"]} {"id":"603","title":"PGE helps customers reduce energy costs","abstract":"A new service from Portland General Electric (PGE, Portland, Oregon, US) is saving customers tens of thousands of dollars in energy costs. PGE created E-Manager to allow facility managers to analyze their energy consumption online at 15-minute intervals. Customers can go to the Web for complete data, powerful analysis tools and charts, helping them detect abnormal energy use and focus on costly problem areas","tok_text":"pge help custom reduc energi cost \n a new servic from portland gener electr ( pge , portland , oregon , us ) is save custom ten of thousand of dollar in energi cost . pge creat e-manag to allow facil manag to analyz their energi consumpt onlin at 15-minut interv . custom can go to the web for complet data , power analysi tool and chart , help them detect abnorm energi use and focu on costli problem area","ordered_present_kp":[54,95,177],"keyphrases":["Portland General Electric","Oregon","E-Manager","energy costs reduction","online energy consumption analysis","abnormal energy use detection"],"prmu":["P","P","P","M","R","R"]} {"id":"81","title":"A scalable and efficient systolic algorithm for the longest common subsequence problem","abstract":"A longest common subsequence (LCS) of two strings is a common subsequence of two strings of maximal length. The LCS problem is that of finding an LCS of two given strings and the length of the LCS. This problem has been the subject of much research because its solution can be applied in many areas. In this paper, a scalable and efficient systolic algorithm is presented. 
For two given strings of length m and n, where m>or=n, the algorithm can solve the LCS problem in m+2r-1 (respectively n+2r-1) time steps with r<n\/2 (respectively r<m\/2) processors. Experimental results show that the algorithm can be faster on multicomputers than all the previous systolic algorithms for the same problem","tok_text":"a scalabl and effici systol algorithm for the longest common subsequ problem \n a longest common subsequ ( lc ) of two string is a common subsequ of two string of maxim length . the lc problem is that of find an lc of two given string and the length of the lc . thi problem ha been the subject of much research becaus it solut can be appli in mani area . in thi paper , a scalabl and effici systol algorithm is present . 
for two given string of length m and n , where m > or = n , the algorithm can solv the lc problem in m+2r-1 ( respect n+2r-1 ) time step with r < n\/2 ( respect r < m\/2 ) processor . experiment result show that the algorithm can be faster on multicomput than all the previou systol algorithm for the same problem","ordered_present_kp":[21,46],"keyphrases":["systolic algorithm","longest common subsequence problem","scalable algorithm","parallel algorithms"],"prmu":["P","P","R","M"]} {"id":"1157","title":"Portal dose image prediction for dosimetric treatment verification in radiotherapy. II. An algorithm for wedged beams","abstract":"A method is presented for calculation of a two-dimensional function, T\/sub wedge\/(x,y), describing the transmission of a wedged photon beam through a patient. This in an extension of the method that we have published for open (nonwedged) fields [Med. Phys. 25, 830-840 (1998)]. Transmission functions for open fields are being used in our clinic for prediction of portal dose images (PDI, i.e., a dose distribution behind the patient in a plane normal to the beam axis), which are compared with PDIs measured with an electronic portal imaging device (EPID). The calculations are based on the planning CT scan of the patient and on the irradiation geometry as determined in the treatment planning process. Input data for the developed algorithm for wedged beams are derived from (the already available) measured input data set for transmission prediction in open beams, which is extended with only a limited set of measurements in the wedged beam. The method has been tested for a PDI plane at 160 cm from the focus, in agreement with the applied focus-to-detector distance of our fluoroscopic EPIDs. 
For low and high energy photon beams (6 and 23 MV) good agreement (~1%) has been found between calculated and measured transmissions for a slab and a thorax phantom","tok_text":"portal dose imag predict for dosimetr treatment verif in radiotherapi . ii . an algorithm for wedg beam \n a method is present for calcul of a two-dimension function , t \/ sub wedge\/(x , y ) , describ the transmiss of a wedg photon beam through a patient . thi in an extens of the method that we have publish for open ( nonwedg ) field [ med . phi . 25 , 830 - 840 ( 1998 ) ] . transmiss function for open field are be use in our clinic for predict of portal dose imag ( pdi , i.e. , a dose distribut behind the patient in a plane normal to the beam axi ) , which are compar with pdi measur with an electron portal imag devic ( epid ) . the calcul are base on the plan ct scan of the patient and on the irradi geometri as determin in the treatment plan process . input data for the develop algorithm for wedg beam are deriv from ( the alreadi avail ) measur input data set for transmiss predict in open beam , which is extend with onli a limit set of measur in the wedg beam . the method ha been test for a pdi plane at 160 cm from the focu , in agreement with the appli focus-to-detector distanc of our fluoroscop epid . 
for low and high energi photon beam ( 6 and 23 mv ) good agreement ( ~1 % ) ha been found between calcul and measur transmiss for a slab and a thorax phantom","ordered_present_kp":[0,29,57,142,219,663,702,1133,1264,598,897,1165],"keyphrases":["portal dose image prediction","dosimetric treatment verification","radiotherapy","two-dimensional function","wedged photon beam","electronic portal imaging devices","planning CT scan","irradiation geometry","open beams","high energy photon beams","23 MV","thorax phantom","transmission dosimetry","wedged beams algorithm","low energy photon beams","slab phantom","in vivo dosimetry","fluoroscopic CCD camera","pencil beam algorithm","CadPlan planning system","virtual wedges","6 MV"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","M","R","R","R","M","M","M","M","M","R"]} {"id":"1112","title":"Blending parametric patches with subdivision surfaces","abstract":"In this paper the problem of blending parametric surfaces using subdivision patches is discussed. A new approach, named removing-boundary, is presented to generate piecewise-smooth subdivision surfaces through discarding the outmost quadrilaterals of the open meshes derived by each subdivision step. Then the approach is employed both to blend parametric bicubic B-spline surfaces and to fill n-sided holes. It is easy to produce piecewise-smooth subdivision surfaces with both convex and concave corners on the boundary, and limit surfaces are guaranteed to be C\/sup 2\/ continuous on the boundaries except for a few singular points by the removing-boundary approach. Thus the blending method is very efficient and the blending surface generated is of good effect","tok_text":"blend parametr patch with subdivis surfac \n in thi paper the problem of blend parametr surfac use subdivis patch is discuss . 
a new approach , name removing-boundari , is present to gener piecewise-smooth subdivis surfac through discard the outmost quadrilater of the open mesh deriv by each subdivis step . then the approach is employ both to blend parametr bicub b-spline surfac and to fill n-side hole . it is easi to produc piecewise-smooth subdivis surfac with both convex and concav corner on the boundari , and limit surfac are guarante to be c \/ sup 2\/ continu on the boundari except for a few singular point by the removing-boundari approach . thu the blend method is veri effici and the blend surfac gener is of good effect","ordered_present_kp":[26,98,188,249,350],"keyphrases":["subdivision surfaces","subdivision patches","piecewise-smooth subdivision surfaces","quadrilaterals","parametric bicubic B-spline surfaces","parametric surfaces blending","piecewise smooth subdivision surfaces"],"prmu":["P","P","P","P","P","R","M"]} {"id":"547","title":"Excess energy [cooling system]","abstract":"The designers retrofitting a comfort cooling system to offices in Hertfordshire have been able to make use of the waste heat rejected. what's more they're now making it a standard solution for much larger projects","tok_text":"excess energi [ cool system ] \n the design retrofit a comfort cool system to offic in hertfordshir have been abl to make use of the wast heat reject . what 's more they 're now make it a standard solut for much larger project","ordered_present_kp":[54,132],"keyphrases":["comfort cooling system","waste heat","Nationwide Trust","air conditioning"],"prmu":["P","P","U","U"]} {"id":"990","title":"Pipelined broadcast with enhanced wormhole routers","abstract":"This paper proposes a pipelined broadcast that broadcasts a message of size m in O(m+n-1) time in an n-dimensional hypercube. It is based on the replication tree, which is derived from reachable sets. It has greatly improved performance compared to Ho-Kao's (1995) algorithm with the time of O(m[n\/log(n+1)]). 
The communication in the broadcast uses an all-port wormhole router with message replication capability. This paper includes the algorithm together with performance comparisons to previous schemes in a practical implementation","tok_text":"pipelin broadcast with enhanc wormhol router \n thi paper propos a pipelin broadcast that broadcast a messag of size m in o(m+n-1 ) time in an n-dimension hypercub . it is base on the replic tree , which is deriv from reachabl set . it ha greatli improv perform compar to ho-kao 's ( 1995 ) algorithm with the time of o(m[n \/ log(n+1 ) ] ) . the commun in the broadcast use an all-port wormhol router with messag replic capabl . thi paper includ the algorithm togeth with perform comparison to previou scheme in a practic implement","ordered_present_kp":[23,142,183,217,253,376,405,0],"keyphrases":["pipelined broadcast","enhanced wormhole routers","n-dimensional hypercube","replication tree","reachable sets","performance","all-port wormhole router","message replication capability","message broadcast","communication complexity","intermediate reception"],"prmu":["P","P","P","P","P","P","P","P","R","M","U"]} {"id":"1423","title":"P2P is dead, long live P2P","abstract":"Picture the problem: a sprawling multinational has hundreds of offices, thousands of workers, and countless amounts of intellectual property scattered here, there, everywhere. In Kuala Lumpur an executive needs to see an internally-generated report on oil futures in central Asia-but where is it? London? New York? Moscow? With a few clicks of the mouse-and the right P2P technology deployed in-house-that executive will find and retrieve the report. Without P2P that might be impossible-certainly it would be time-consuming-and, right there, the argument for P2P implementations inside enterprises becomes clear. Who are the players? 
No companies have managed to stake out clear leads and the fact is that the P2P marketplace now is up for grabs-but the exciting news is that a range of small and startup businesses are trying to grab turf and quite probably, if the analysts are right, a few of these now little-known companies will emerge as digital content stars within the next few years. Cases in point: Groove Networks, Avaki, WorldStreet, Yaga, NextPage, and Kontiki. Very different companies-their approach to the markets radically differ-but, say the analysts, each is worth a close look because among them they are defining the future of P2P","tok_text":"p2p is dead , long live p2p \n pictur the problem : a sprawl multin ha hundr of offic , thousand of worker , and countless amount of intellectu properti scatter here , there , everywher . in kuala lumpur an execut need to see an internally-gener report on oil futur in central asia-but where is it ? london ? new york ? moscow ? with a few click of the mouse-and the right p2p technolog deploy in-house-that execut will find and retriev the report . without p2p that might be impossible-certainli it would be time-consuming-and , right there , the argument for p2p implement insid enterpris becom clear . who are the player ? no compani have manag to stake out clear lead and the fact is that the p2p marketplac now is up for grabs-but the excit news is that a rang of small and startup busi are tri to grab turf and quit probabl , if the analyst are right , a few of these now little-known compani will emerg as digit content star within the next few year . case in point : groov network , avaki , worldstreet , yaga , nextpag , and kontiki . 
veri differ companies-their approach to the market radic differ-but , say the analyst , each is worth a close look becaus among them they are defin the futur of p2p","ordered_present_kp":[372,912,786,974,990,998,1019,1033,1012],"keyphrases":["P2P technology","businesses","digital content","Groove Networks","Avaki","WorldStreet","Yaga","NextPage","Kontiki","content owners"],"prmu":["P","P","P","P","P","P","P","P","P","M"]} {"id":"870","title":"Speaker identification from voice using neural networks","abstract":"The paper provides three different schemes for speaker identification of personnel from their voice using artificial neural networks. The first scheme recognizes speakers by employing the classical backpropagation algorithm pre-trained with known voice samples of the persons. The second scheme provides a framework for classifying the known training samples of the voice features using a hierarchical architecture realized with a self-organizing feature map neural net. The first scheme is highly robust as it is capable of identifying the personnel from their noisy voice samples, but because of its excessive training time it has limited applications for a large voice database. The second scheme though not so robust as the former, however, can classify an unknown voice sample to its nearest class. The time needed for classification by the first scheme is always unique irrespective of the voice sample. It is proportional to the number of feedforward layers in the network. The time-requirement of the second classification scheme, however, is not free from the voice features and is proportional to the number of 2D arrays traversed by the algorithm on the hierarchical structure. The third scheme is highly robust and mis-classification is as low as 0.2 per cent. 
The third scheme combines the composite benefits of a radial basis function neural net and backpropagation trained neural net","tok_text":"speaker identif from voic use neural network \n the paper provid three differ scheme for speaker identif of personnel from their voic use artifici neural network . the first scheme recogn speaker by employ the classic backpropag algorithm pre-train with known voic sampl of the person . the second scheme provid a framework for classifi the known train sampl of the voic featur use a hierarch architectur realiz with a self-organ featur map neural net . the first scheme is highli robust as it is capabl of identifi the personnel from their noisi voic sampl , but becaus of it excess train time it ha limit applic for a larg voic databas . the second scheme though not so robust as the former , howev , can classifi an unknown voic sampl to it nearest class . the time need for classif by the first scheme is alway uniqu irrespect of the voic sampl . it is proport to the number of feedforward layer in the network . the time-requir of the second classif scheme , howev , is not free from the voic featur and is proport to the number of 2d array travers by the algorithm on the hierarch structur . the third scheme is highli robust and mis-classif is as low as 0.2 per cent . 
the third scheme combin the composit benefit of a radial basi function neural net and backpropag train neural net","ordered_present_kp":[0,137,107,383,418,217,327,881,1036,1225,238,253],"keyphrases":["speaker identification","personnel","artificial neural networks","backpropagation algorithm","pre-training","known voice samples","classification","hierarchical architecture","self-organizing feature map","feedforward layers","2D arrays","radial basis function neural net"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"835","title":"Pioneering women in computer science","abstract":"Although their contributions are not well documented, women have played an important role in the development of computer science. A survey of women pioneers demonstrates their influence in designing and programming the first electronic computers and languages, while laying the groundwork for women's expanding involvement in science","tok_text":"pioneer women in comput scienc \n although their contribut are not well document , women have play an import role in the develop of comput scienc . a survey of women pioneer demonstr their influenc in design and program the first electron comput and languag , while lay the groundwork for women 's expand involv in scienc","ordered_present_kp":[0,229],"keyphrases":["pioneering women","electronic computers","computer science development","programming languages","history"],"prmu":["P","P","R","R","U"]} {"id":"1173","title":"A comprehensive chatter prediction model for face turning operation including tool wear effect","abstract":"Presents a three-dimensional mechanistic frequency domain chatter model for face turning processes, that can account for the effects of tool wear including process damping. New formulations are presented to model the variation in process damping forces along nonlinear tool geometries such as the nose radius. 
The underlying dynamic force model simulates the variation in the chip cross-sectional area by accounting for the displacements in the axial and radial directions. The model can be used to determine stability boundaries under various cutting conditions and different states of flank wear. Experimental results for different amounts of wear are provided as a validation for the model","tok_text":"a comprehens chatter predict model for face turn oper includ tool wear effect \n present a three-dimension mechanist frequenc domain chatter model for face turn process , that can account for the effect of tool wear includ process damp . new formul are present to model the variat in process damp forc along nonlinear tool geometri such as the nose radiu . the underli dynam forc model simul the variat in the chip cross-sect area by account for the displac in the axial and radial direct . the model can be use to determin stabil boundari under variou cut condit and differ state of flank wear . experiment result for differ amount of wear are provid as a valid for the model","ordered_present_kp":[13,39,61,90,222,474,523,583],"keyphrases":["chatter prediction model","face turning operation","tool wear effect","three-dimensional mechanistic frequency domain chatter model","process damping","radial directions","stability boundaries","flank wear","axial directions"],"prmu":["P","P","P","P","P","P","P","P","R"]} {"id":"1136","title":"Q-learning for risk-sensitive control","abstract":"We propose for risk-sensitive control of finite Markov chains a counterpart of the popular Q-learning algorithm for classical Markov decision processes. The algorithm is shown to converge with probability one to the desired solution. The proof technique is an adaptation of the o.d.e. 
approach for the analysis of stochastic approximation algorithms, with most of the work involved used for the analysis of the specific o.d.e.s that arise","tok_text":"q-learn for risk-sensit control \n we propos for risk-sensit control of finit markov chain a counterpart of the popular q-learn algorithm for classic markov decis process . the algorithm is shown to converg with probabl one to the desir solut . the proof techniqu is an adapt of the o.d. . approach for the analysi of stochast approxim algorithm , with most of the work involv use for the analysi of the specif o.d.e. that aris","ordered_present_kp":[71,12,119,141,248,317],"keyphrases":["risk-sensitive control","finite Markov chains","Q-learning algorithm","classical Markov decision processes","proof technique","stochastic approximation algorithms","algorithm convergence","reinforcement learning algorithms","dynamic programming","ordinary differential equations"],"prmu":["P","P","P","P","P","P","R","M","U","U"]} {"id":"563","title":"Getting the most out of intrusion detection systems","abstract":"Intrusion detection systems (IDS) can play a very valuable role in the defence of a network. However, it is important to understand not just what it will do (and how it does it) - but what it won't do (and why). This article does not go into the technical working of IDS in too much detail, rather it limits itself to a discussion of some of the capabilities and failings of the technology","tok_text":"get the most out of intrus detect system \n intrus detect system ( id ) can play a veri valuabl role in the defenc of a network . howev , it is import to understand not just what it will do ( and how it doe it ) - but what it wo n't do ( and whi ) . 
thi articl doe not go into the technic work of id in too much detail , rather it limit itself to a discuss of some of the capabl and fail of the technolog","ordered_present_kp":[20],"keyphrases":["intrusion detection systems","computer network security","network attacks","firewall"],"prmu":["P","M","M","U"]} {"id":"1272","title":"Global action rules in distributed knowledge systems","abstract":"Previously Z. Ras and J.M. Zytkow (2000) introduced and investigated query answering system based on distributed knowledge mining. The notion of an action rule was introduced by Z. Ras and A. Wieczorkowska (2000) and its application domain e-business was taken. In this paper, we generalize the notion of action rules in a similar way to handling global queries. Mainly, when values of attributes for a given customer, used in action rules, can not be easily changed by business user, definitions of these attributes are extracted from other sites of a distributed knowledge system. To be more precise, attributes at every site of a distributed knowledge system are divided into two sets: stable and flexible. Values of flexible attributes, for a given consumer, sometime can be changed and this change can be influenced and controlled by a business user. However, some of these changes (for instance to the attribute \"profit') can not be done directly to a chosen attribute. In this case, definitions of such an attribute in terms of other attributes have to be learned. These new definitions are used to construct action rules showing what changes in values of flexible attributes, for a given consumer, are needed in order to re-classify this consumer the way business user wants. But, business user may be either unable or unwilling to proceed with actions leading to such changes. In all such cases we may search for definitions of these flexible attributes looking at either local or remote sites for help","tok_text":"global action rule in distribut knowledg system \n previous z. ra and j.m. 
zytkow ( 2000 ) introduc and investig queri answer system base on distribut knowledg mine . the notion of an action rule wa introduc by z. ra and a. wieczorkowska ( 2000 ) and it applic domain e-busi wa taken . in thi paper , we gener the notion of action rule in a similar way to handl global queri . mainli , when valu of attribut for a given custom , use in action rule , can not be easili chang by busi user , definit of these attribut are extract from other site of a distribut knowledg system . to be more precis , attribut at everi site of a distribut knowledg system are divid into two set : stabl and flexibl . valu of flexibl attribut , for a given consum , sometim can be chang and thi chang can be influenc and control by a busi user . howev , some of these chang ( for instanc to the attribut \" profit ' ) can not be done directli to a chosen attribut . in thi case , definit of such an attribut in term of other attribut have to be learn . these new definit are use to construct action rule show what chang in valu of flexibl attribut , for a given consum , are need in order to re-classifi thi consum the way busi user want . but , busi user may be either unabl or unwil to proceed with action lead to such chang . in all such case we may search for definit of these flexibl attribut look at either local or remot site for help","ordered_present_kp":[0,112,7,398,140],"keyphrases":["global action rules","action rules","query answering system","distributed knowledge mining","attributes","e-commerce"],"prmu":["P","P","P","P","P","U"]} {"id":"1237","title":"High-performance numerical pricing methods","abstract":"The pricing of financial derivatives is an important field in finance and constitutes a major component of financial management applications. The uncertainty of future events often makes analytic approaches infeasible and, hence, time-consuming numerical simulations are required. 
In the Aurora Financial Management System, pricing is performed on the basis of lattice representations of stochastic multidimensional scenario processes using the Monte Carlo simulation and Backward Induction methods, the latter allowing for the exploitation of shared-memory parallelism. We present the parallelization of a Backward Induction numerical pricing kernel on a cluster of SMPs using HPF+, an extended version of High-Performance Fortran. Based on language extensions for specifying a hierarchical mapping of data onto an SMP cluster, the compiler generates a hybrid-parallel program combining distributed-memory and shared-memory parallelism. We outline the parallelization strategy adopted by the VFC compiler and present an experimental evaluation of the pricing kernel on an NEC SX-5 vector supercomputer and a Linux SMP cluster, comparing a pure MPI version to a hybrid-parallel MPI\/OpenMP version","tok_text":"high-perform numer price method \n the price of financi deriv is an import field in financ and constitut a major compon of financi manag applic . the uncertainti of futur event often make analyt approach infeas and , henc , time-consum numer simul are requir . in the aurora financi manag system , price is perform on the basi of lattic represent of stochast multidimension scenario process use the mont carlo simul and backward induct method , the latter allow for the exploit of shared-memori parallel . we present the parallel of a backward induct numer price kernel on a cluster of smp use hpf+ , an extend version of high-perform fortran . base on languag extens for specifi a hierarch map of data onto an smp cluster , the compil gener a hybrid-parallel program combin distributed-memori and shared-memori parallel . 
we outlin the parallel strategi adopt by the vfc compil and present an experiment evalu of the price kernel on an nec sx-5 vector supercomput and a linux smp cluster , compar a pure mpi version to a hybrid-parallel mpi \/ openmp version","ordered_present_kp":[47,122,267,19,398,419,550],"keyphrases":["pricing","finance","financial management","Aurora Financial Management System","Monte Carlo simulation","Backward Induction methods","numerical pricing kernel","stochastic processes","derivative pricing","investment strategies"],"prmu":["P","P","P","P","P","P","P","R","R","M"]} {"id":"627","title":"Comparison of non-stationary time series in the frequency domain","abstract":"In this paper we compare two nonstationary time series using nonparametric procedures. Evolutionary spectra are estimated for the two series. Randomization tests are performed on groups of spectral estimates for both related and independent time series. Simulation studies show that in certain cases the tests perform reasonably well. The tests are applied to observed geological and financial time series","tok_text":"comparison of non-stationari time seri in the frequenc domain \n in thi paper we compar two nonstationari time seri use nonparametr procedur . evolutionari spectra are estim for the two seri . random test are perform on group of spectral estim for both relat and independ time seri . simul studi show that in certain case the test perform reason well . 
the test are appli to observ geolog and financi time seri","ordered_present_kp":[91,119,192,228,262,283,392],"keyphrases":["nonstationary time series","nonparametric procedures","randomization tests","spectral estimates","independent time series","simulation","financial time series","evolutionary spectra estimation","related time series","lag window","time window","geological time series"],"prmu":["P","P","P","P","P","P","P","R","R","U","M","R"]} {"id":"58","title":"Robust speech recognition using probabilistic union models","abstract":"This paper introduces a new statistical approach, namely the probabilistic union model, for speech recognition involving partial, unknown frequency-band corruption. Partial frequency-band corruption accounts for the effect of a family of real-world noises. Previous methods based on the missing feature theory usually require the identity of the noisy bands. This identification can be difficult for unexpected noise with unknown, time-varying band characteristics. The new model combines the local frequency-band information based on the union of random events, to reduce the dependence of the model on information about the noise. This model partially accomplishes the target: offering robustness to partial frequency-band corruption, while requiring no information about the noise. This paper introduces the theory and implementation of the union model, and is focused on several important advances. These new developments include a new algorithm for automatic order selection, a generalization of the modeling principle to accommodate partial feature stream corruption, and a combination of the union model with conventional noise reduction techniques to deal with a mixture of stationary noise and unknown, nonstationary noise. For the evaluation, we used the TIDIGITS database for speaker-independent connected digit recognition. 
The utterances were corrupted by various types of additive noise, stationary or time-varying, assuming no knowledge about the noise characteristics. The results indicate that the new model offers significantly improved robustness in comparison to other models","tok_text":"robust speech recognit use probabilist union model \n thi paper introduc a new statist approach , name the probabilist union model , for speech recognit involv partial , unknown frequency-band corrupt . partial frequency-band corrupt account for the effect of a famili of real-world nois . previou method base on the miss featur theori usual requir the ident of the noisi band . thi identif can be difficult for unexpect nois with unknown , time-vari band characterist . the new model combin the local frequency-band inform base on the union of random event , to reduc the depend of the model on inform about the nois . thi model partial accomplish the target : offer robust to partial frequency-band corrupt , while requir no inform about the nois . thi paper introduc the theori and implement of the union model , and is focus on sever import advanc . these new develop includ a new algorithm for automat order select , a gener of the model principl to accommod partial featur stream corrupt , and a combin of the union model with convent nois reduct techniqu to deal with a mixtur of stationari nois and unknown , nonstationari nois . for the evalu , we use the tidigit databas for speaker-independ connect digit recognit . the utter were corrupt by variou type of addit nois , stationari or time-vari , assum no knowledg about the nois characterist . 
the result indic that the new model offer significantli improv robust in comparison to other model","ordered_present_kp":[0,27,898,45,963,1040,1086,1116,1164,1184,1267,1334,316,365,440,495,202],"keyphrases":["robust speech recognition","probabilistic union models","modeling","partial frequency-band corruption","missing feature theory","noisy bands","time-varying band characteristics","local frequency-band information","automatic order selection","partial feature stream corruption","noise reduction techniques","stationary noise","nonstationary noise","TIDIGITS database","speaker-independent connected digit recognition","additive noise","noise characteristics","partial real-world noise"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"949","title":"Reply to \"Comment on: Teleportation of an unknown state by W state\" [Phys. Lett. A 300 (2002) 324]","abstract":"In our letter (see ibid., vol. 296, p. 161 (2002)), the main question we consider is whether a general three-particle W state can be used to realize the teleportation of an unknown qubit state. We give the positive answer to this question in our letter, and show that W state can be used to realize to do that probabilistically. We also discuss how to do it in detail in our letter. In the previous comment (see ibid., vol. 300, p. 324 (2002)), authors check carefully the mathematics calculation of our letter, find and point out a simple mathematics error about normalization coefficient of Eq. (1). This mathematics error induces the incorrect probability calculation of Eq. (6), and also an incorrect claim in first part of our letter","tok_text":"repli to \" comment on : teleport of an unknown state by w state \" [ phi . lett . a 300 ( 2002 ) 324 ] \n in our letter ( see ibid . , vol . 296 , p. 161 ( 2002 ) ) , the main question we consid is whether a gener three-particl w state can be use to realiz the teleport of an unknown qubit state . 
we give the posit answer to thi question in our letter , and show that w state can be use to realiz to do that probabilist . we also discuss how to do it in detail in our letter . in the previou comment ( see ibid . , vol . 300 , p. 324 ( 2002 ) ) , author check care the mathemat calcul of our letter , find and point out a simpl mathemat error about normal coeffici of eq . ( 1 ) . thi mathemat error induc the incorrect probabl calcul of eq . ( 6 ) , and also an incorrect claim in first part of our letter","ordered_present_kp":[24,39,212,282,648,719],"keyphrases":["teleportation","unknown state","three-particle W state","qubit state","normalization coefficient","probability calculation"],"prmu":["P","P","P","P","P","P"]} {"id":"1407","title":"Soft options for software upgrades?","abstract":"Several new products claim to take the work out of installing software and patches, and even migrating operating systems. Software migration products fall into two broad categories. The drive imaging type is designed to make exact copies of a hard disk, either an entire drive or certain directories, so you can use it to back up data. The application management type is designed for more incremental upgrades and often provides additional features such as the ability to monitor or control users' access to applications","tok_text":"soft option for softwar upgrad ? \n sever new product claim to take the work out of instal softwar and patch , and even migrat oper system . softwar migrat product fall into two broad categori . the drive imag type is design to make exact copi of a hard disk , either an entir drive or certain directori , so you can use it to back up data . 
the applic manag type is design for more increment upgrad and often provid addit featur such as the abil to monitor or control user ' access to applic","ordered_present_kp":[16],"keyphrases":["software upgrades","software installation","Microsoft Windows","operating systems migration"],"prmu":["P","R","U","R"]} {"id":"1093","title":"A fuzzy logic approach to accommodate thermal stress and improve the start-up phase in combined cycle power plants","abstract":"Use of combined cycle power generation plant has increased dramatically over the last decade. A supervisory control approach based on a dynamic model is developed, which makes use of proportional-integral-derivative (PID), fuzzy logic and fuzzy PID schemes. The aim is to minimize the steam turbine plant start-up time, without violating maximum thermal stress limits. An existing start-up schedule provides the benchmark by which the performance of candidate controllers is assessed. Improvements regarding possible reduced start-up times and satisfaction of maximum thermal stress restrictions have been realized using the proposed control scheme","tok_text":"a fuzzi logic approach to accommod thermal stress and improv the start-up phase in combin cycl power plant \n use of combin cycl power gener plant ha increas dramat over the last decad . a supervisori control approach base on a dynam model is develop , which make use of proportional-integral-deriv ( pid ) , fuzzi logic and fuzzi pid scheme . the aim is to minim the steam turbin plant start-up time , without violat maximum thermal stress limit . an exist start-up schedul provid the benchmark by which the perform of candid control is assess . 
improv regard possibl reduc start-up time and satisfact of maximum thermal stress restrict have been realiz use the propos control scheme","ordered_present_kp":[83,188,2,227,324,417,457],"keyphrases":["fuzzy logic approach","combined cycle power plants","supervisory control","dynamic model","fuzzy PID schemes","maximum thermal stress limits","start-up schedule","PID control","steam turbine plant start-up time minimization"],"prmu":["P","P","P","P","P","P","P","R","R"]} {"id":"1442","title":"Using constructed types in C++ unions","abstract":"The C++ Standard states that a union type cannot have a member with a nontrivial constructor or destructor. While at first this seems unreasonable, further thought makes it clear why this is the case: The crux of the problem is that unions don't have built-in semantics for denoting when a member is the \"current\" member of the union. Therefore, the compiler can't know when it's appropriate to call constructors or destructors on the union members. Still, there are good reasons for wanting to use constructed object types in a union. For example, you might want to implement a scripting language with a single variable type that can either be an integer, a string, or a list. A union is the perfect candidate for implementing such a composite type, but the restriction on constructed union members may prevent you from using an existing string or list class (for example, from the STL) to provide the underlying functionality. Luckily, a feature of C++ called placement new can provide a workaround","tok_text":"use construct type in c++ union \n the c++ standard state that a union type can not have a member with a nontrivi constructor or destructor . while at first thi seem unreason , further thought make it clear whi thi is the case : the crux of the problem is that union do n't have built-in semant for denot when a member is the \" current \" member of the union . 
therefor , the compil ca n't know when it 's appropri to call constructor or destructor on the union member . still , there are good reason for want to use construct object type in a union . for exampl , you might want to implement a script languag with a singl variabl type that can either be an integ , a string , or a list . a union is the perfect candid for implement such a composit type , but the restrict on construct union member may prevent you from use an exist string or list class ( for exampl , from the stl ) to provid the underli function . luckili , a featur of c++ call placement new can provid a workaround","ordered_present_kp":[38,64,113,128,454,593,946],"keyphrases":["C++ Standard","union type","constructors","destructors","union members","scripting language","placement new"],"prmu":["P","P","P","P","P","P","P"]} {"id":"854","title":"A conference's impact on undergraduate female students","abstract":"In September of 2000, the 3rd Grace Hopper Celebration of Women in Computing was held in Cape Cod, Massachusetts. Along with a colleague from a nearby university, we accompanied seven of our female undergraduate students to this conference. This paper reports on how the conference experience immediately affected these students - what impressed them, what scared them, what it clarified for them. It also reports on how the context in which these students currently evaluate their ability, potential and opportunity in computer science is different now from what it was before the conference. Hopefully, by understanding their experience, we can gain some insight into things we can do for all of our undergraduate female students to better support their computer science and engineering education","tok_text":"a confer 's impact on undergradu femal student \n in septemb of 2000 , the 3rd grace hopper celebr of women in comput wa held in cape cod , massachusett . 
along with a colleagu from a nearbi univers , we accompani seven of our femal undergradu student to thi confer . thi paper report on how the confer experi immedi affect these student - what impress them , what scare them , what it clarifi for them . it also report on how the context in which these student current evalu their abil , potenti and opportun in comput scienc is differ now from what it wa befor the confer . hope , by understand their experi , we can gain some insight into thing we can do for all of our undergradu femal student to better support their comput scienc and engin educ","ordered_present_kp":[22,739,2],"keyphrases":["conference","undergraduate female students","engineering education","computer science education","gender issues"],"prmu":["P","P","P","R","U"]} {"id":"811","title":"Integration, the Web are key this season [tax]","abstract":"Integration and the Web are driving many of the enhancements planned by tax preparation software vendors for this coming season","tok_text":"integr , the web are key thi season [ tax ] \n integr and the web are drive mani of the enhanc plan by tax prepar softwar vendor for thi come season","ordered_present_kp":[],"keyphrases":["accounting packages","tax packages","software integration","Internet","CCH","TaxWorks","People's Choice","Visual Tax","GoSystem Tax RS","Drake","NetConnection","ATX","CPASoftware","Intuit","Petz","TaxSimple","RIA"],"prmu":["U","M","R","U","U","U","U","M","M","U","U","U","U","U","U","U","U"]} {"id":"1392","title":"Enlisting on-line residents: Expanding the boundaries of e-government in a Japanese rural township","abstract":"The purpose of this article is to analyze and learn from an unusual way in which local bureaucrats in a Japanese rural township are using the Internet to serve their constituents by enlisting the support of \"on-line residents.\" Successful e-government requires not only rethinking the potential uses of computer technology, but in adopting new patterns of 
decision-making, power sharing, and office management that many bureaucrats may not be predisposed to make. The main thesis of this article is that necessity and practicality can play a powerful motivational role in facilitating the incorporation of information technology (IT) at the level of local government. This case study of how bureaucrats in Towa-cho, a small, agricultural town in Northeastern Japan, have harnessed the Internet demonstrates clearly the fundamentals of building a successful e-government framework in this rural municipality, similar to many communities in Europe and North America today","tok_text":"enlist on-lin resid : expand the boundari of e-govern in a japanes rural township \n the purpos of thi articl is to analyz and learn from an unusu way in which local bureaucrat in a japanes rural township are use the internet to serv their constitu by enlist the support of \" on-lin resid . \" success e-govern requir not onli rethink the potenti use of comput technolog , but in adopt new pattern of decision-mak , power share , and offic manag that mani bureaucrat may not be predispos to make . the main thesi of thi articl is that necess and practic can play a power motiv role in facilit the incorpor of inform technolog ( it ) at the level of local govern . 
thi case studi of how bureaucrat in towa-cho , a small , agricultur town in northeastern japan , have har the internet demonstr clearli the fundament of build a success e-govern framework in thi rural municip , similar to mani commun in europ and north america today","ordered_present_kp":[7,45,59,159,216,399,414,432,698,857],"keyphrases":["on-line residents","e-government","Japanese rural township","local bureaucrats","Internet","decision-making","power sharing","office management","Towa-cho","rural municipality"],"prmu":["P","P","P","P","P","P","P","P","P","P"]} {"id":"782","title":"Community technology and democratic rationalization","abstract":"The objective of the paper is to explore questions of human agency and democratic process in the technical sphere through the example of \"virtual community.\" The formation of relatively stable long-term group associations (community in the broad sense of the term), is the scene on which a large share of human development occurs. As such it is a fundamental human value mobilizing diverse ideologies and sensitivities. The promise of realizing this value in a new domain naturally stirs up much excitement among optimistic observers of the Internet. At the same time, the eagerness to place hopes for community in a technical system flies in the face of an influential intellectual tradition of technology criticism. This eagerness seems even more naive in the light of the recent commercialization of so much Internet activity. Despite the widespread skepticism, we believe the growth of virtual community is significant for an inquiry into the democratization of technology. We show that conflicting answers to the central question of the present theoretical debate - Is community possible on computer networks? epsilon neralize from particular features of systems and software prevalent at different stages in the development of computer networking. 
We conclude that research should focus instead on how to design computer networks to better support community activities and values","tok_text":"commun technolog and democrat ration \n the object of the paper is to explor question of human agenc and democrat process in the technic sphere through the exampl of \" virtual commun . \" the format of rel stabl long-term group associ ( commun in the broad sens of the term ) , is the scene on which a larg share of human develop occur . as such it is a fundament human valu mobil divers ideolog and sensit . the promis of realiz thi valu in a new domain natur stir up much excit among optimist observ of the internet . at the same time , the eager to place hope for commun in a technic system fli in the face of an influenti intellectu tradit of technolog critic . thi eager seem even more naiv in the light of the recent commerci of so much internet activ . despit the widespread skeptic , we believ the growth of virtual commun is signific for an inquiri into the democrat of technolog . we show that conflict answer to the central question of the present theoret debat - is commun possibl on comput network ? epsilon neral from particular featur of system and softwar preval at differ stage in the develop of comput network . 
we conclud that research should focu instead on how to design comput network to better support commun activ and valu","ordered_present_kp":[0,21,88,104,128,167,204,314,362,379,484,577,624,645,741,902,994,994,1223],"keyphrases":["community technology","democratic rationalization","human agency","democratic process","technical sphere","virtual community","stable long-term group associations","human development","human value","diverse ideologies","optimistic observers","technical system","intellectual tradition","technology criticism","Internet activity","conflicting answers","computer networks","computer networks","community activities","computer networking"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"894","title":"Improved detection of lung nodules by using a temporal subtraction technique","abstract":"The authors evaluated the effect of a temporal subtraction technique for digital chest radiography with regard to the accuracy of detection of lung nodules. Twenty solitary lung nodules smaller than 30 mm in diameter, including 10 lung cancers and 10 benign nodules, were used. The nodules were grouped subjectively according to their subtlety. For nonnodular cases, 20 nodules without perceptible interval changes were selected. All chest radiographs were obtained by using a computed radiographic system, and temporal subtraction images were produced by using a program developed at the University of Chicago. The effect of the temporal subtraction image was evaluated by using an observer performance study, with use of receiver operating characteristic analysis. Observer performance with temporal subtraction images was substantially improved (A\/sub z\/ = 0.980 and 0.958), as compared with that without temporal subtraction images (A\/sub z\/ = 0.920 and 0.825) for the certified radiologists and radiology residents, respectively. 
The temporal subtraction technique clearly improved diagnostic accuracy for detecting lung nodules, especially subtle cases. In conclusion, the temporal subtraction technique is useful for improving detection accuracy for peripheral lung nodules on digital chest radiographs","tok_text":"improv detect of lung nodul by use a tempor subtract techniqu \n the author evalu the effect of a tempor subtract techniqu for digit chest radiographi with regard to the accuraci of detect of lung nodul . twenti solitari lung nodul smaller than 30 mm in diamet , includ 10 lung cancer and 10 benign nodul , were use . the nodul were group subject accord to their subtleti . for nonnodular case , 20 nodul without percept interv chang were select . all chest radiograph were obtain by use a comput radiograph system , and tempor subtract imag were produc by use a program develop at the univers of chicago . the effect of the tempor subtract imag wa evalu by use an observ perform studi , with use of receiv oper characterist analysi . observ perform with tempor subtract imag wa substanti improv ( a \/ sub z\/ = 0.980 and 0.958 ) , as compar with that without tempor subtract imag ( a \/ sub z\/ = 0.920 and 0.825 ) for the certifi radiologist and radiolog resid , respect . the tempor subtract techniqu clearli improv diagnost accuraci for detect lung nodul , especi subtl case . 
in conclus , the tempor subtract techniqu is use for improv detect accuraci for peripher lung nodul on digit chest radiograph","ordered_present_kp":[37,126,412,664,1157,944,920,1064,585,489,244],"keyphrases":["temporal subtraction technique","digital chest radiography","30 mm","perceptible interval changes","computed radiographic system","University of Chicago","observer performance","certified radiologists","radiology residents","subtle cases","peripheral lung nodules","improved lung nodules detection","medical diagnostic imaging"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R","M"]} {"id":"1016","title":"A scalable model of cerebellar adaptive timing and sequencing: the recurrent slide and latch (RSL) model","abstract":"From the dawn of modern neural network theory, the mammalian cerebellum has been a favored object of mathematical modeling studies. Early studies focused on the fanout, convergence, thresholding, and learned weighting of perceptual-motor signals within the cerebellar cortex. This led to the still viable idea that the granule cell stage in the cerebellar cortex performs a sparse expansive recoding of the time-varying input vector. This recoding reveals and emphasizes combinations in a distributed representation that serves as a basis for the learned, state-dependent control actions engendered by cerebellar outputs to movement related centers. To make optimal use of available signals, the cerebellum must be able to sift the evolving state representation for the most reliable predictors of the need for control actions, and to use those predictors even if they appear only transiently and well in advance of the optimal time for initiating the control action. The paper proposes a modification to prior, population, models for cerebellar adaptive timing and sequencing. 
Since it replaces a population with a single element, the proposed RSL model is in one sense maximally efficient, and therefore optimal from the perspective of scalability","tok_text":"a scalabl model of cerebellar adapt time and sequenc : the recurr slide and latch ( rsl ) model \n from the dawn of modern neural network theori , the mammalian cerebellum ha been a favor object of mathemat model studi . earli studi focus on the fanout , converg , threshold , and learn weight of perceptual-motor signal within the cerebellar cortex . thi led to the still viabl idea that the granul cell stage in the cerebellar cortex perform a spars expans recod of the time-vari input vector . thi recod reveal and emphas combin in a distribut represent that serv as a basi for the learn , state-depend control action engend by cerebellar output to movement relat center . to make optim use of avail signal , the cerebellum must be abl to sift the evolv state represent for the most reliabl predictor of the need for control action , and to use those predictor even if they appear onli transient and well in advanc of the optim time for initi the control action . the paper propos a modif to prior , popul , model for cerebellar adapt time and sequenc . sinc it replac a popul with a singl element , the propos rsl model is in one sens maxim effici , and therefor optim from the perspect of scalabl","ordered_present_kp":[2,19,122,150,392,445,471,536],"keyphrases":["scalable model","cerebellar adaptive timing","neural network theory","mammalian cerebellum","granule cell stage","sparse expansive recoding","time-varying input vector","distributed representation","cerebellar sequencing","recurrent slide and latch model","recurrent network"],"prmu":["P","P","P","P","P","P","P","P","R","R","R"]} {"id":"1053","title":"A static semantics for Haskell","abstract":"This paper gives a static semantics for Haskell 98, a non-strict purely functional programming language. 
The semantics formally specifies nearly all the details of the Haskell 98 type system, including the resolution of overloading, kind inference (including defaulting) and polymorphic recursion, the only major omission being a proper treatment of ambiguous overloading and its resolution. Overloading is translated into explicit dictionary passing, as in all current implementations of Haskell. The target language of this translation is a variant of the Girard-Reynolds polymorphic lambda calculus featuring higher order polymorphism, and explicit type abstraction and application in the term language. Translated programs can thus still be type checked, although the implicit version of this system is impredicative. A surprising result of this formalization effort is that the monomorphism restriction, when rendered in a system of inference rules, compromises the principal type property","tok_text":"a static semant for haskel \n thi paper give a static semant for haskel 98 , a non-strict pure function program languag . the semant formal specifi nearli all the detail of the haskel 98 type system , includ the resolut of overload , kind infer ( includ default ) and polymorph recurs , the onli major omiss be a proper treatment of ambigu overload and it resolut . overload is translat into explicit dictionari pass , as in all current implement of haskel . the target languag of thi translat is a variant of the girard-reynold polymorph lambda calculu featur higher order polymorph , and explicit type abstract and applic in the term languag . translat program can thu still be type check , although the implicit version of thi system is impred . 
a surpris result of thi formal effort is that the monomorph restrict , when render in a system of infer rule , compromis the princip type properti","ordered_present_kp":[2,64,186,222,233,267,391,528,560,589,630,679,798,846],"keyphrases":["static semantics","Haskell 98","type system","overloading","kind inference","polymorphic recursion","explicit dictionary passing","polymorphic lambda calculus","higher order polymorphism","explicit type abstraction","term language","type checking","monomorphism restriction","inference rules","nonstrict purely functional programming language","formal specification"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","M","M"]} {"id":"707","title":"Vector algebra proofs for geometry theorems","abstract":"Vector mathematics can generate simple and powerful proofs of theorems in plane geometry. These proofs can also be used to generalize plane geometry theorems to higher dimensions. We present three vector proofs that show the power of this technique. 1. For any quadrilateral, the sum of the squares of the diagonals is less than or equal to the sum of the squares of the sides. 2. The area of a quadrilateral is half the product of the diagonals multiplied by the sine of an included angle. 3. One quarter of all triangles are acute (Based upon the options detailed below, with respect to the relative lengths of the sides). This paper presents a set of examples of vector mathematics applied to geometry problems. Some of the most beautiful and sophisticated proofs in mathematics involve using multiple representations of the same data. By leveraging the advantages of each representation one finds new and useful mathematical facts","tok_text":"vector algebra proof for geometri theorem \n vector mathemat can gener simpl and power proof of theorem in plane geometri . these proof can also be use to gener plane geometri theorem to higher dimens . we present three vector proof that show the power of thi techniqu . 1 . 
for ani quadrilater , the sum of the squar of the diagon is less than or equal to the sum of the squar of the side . 2 . the area of a quadrilater is half the product of the diagon multipli by the sine of an includ angl . 3 . one quarter of all triangl are acut ( base upon the option detail below , with respect to the rel length of the side ) . thi paper present a set of exampl of vector mathemat appli to geometri problem . some of the most beauti and sophist proof in mathemat involv use multipl represent of the same data . by leverag the advantag of each represent one find new and use mathemat fact","ordered_present_kp":[44,106,767,282,15,0],"keyphrases":["vector algebra proofs","proofs","vector mathematics","plane geometry","quadrilateral","multiple representations"],"prmu":["P","P","P","P","P","P"]} {"id":"742","title":"Second term [International Telecommunication Union]","abstract":"Later this month Yoshio Utsumi is expected to be re-elected for a second four year term as secretary general of the International Telecommunication Union. Here he talks to Matthew May about getting involved in internet addressing, the prospects for 3g, the need for further reform of his organisation... and the translating telephone","tok_text":"second term [ intern telecommun union ] \n later thi month yoshio utsumi is expect to be re-elect for a second four year term as secretari gener of the intern telecommun union . here he talk to matthew may about get involv in internet address , the prospect for 3 g , the need for further reform of hi organis ... 
and the translat telephon","ordered_present_kp":[14,225,321,261],"keyphrases":["International Telecommunication Union","internet addressing","3G","translating telephone"],"prmu":["P","P","P","P"]} {"id":"1317","title":"Dynamic spectrum management for next-generation DSL systems","abstract":"The performance of DSL systems is severely constrained by crosstalk due to the electromagnetic coupling among the multiple twisted pairs making up a phone cable. In order to reduce performance loss arising from crosstalk, DSL systems are currently designed under the assumption of worst-case crosstalk scenarios leading to overly conservative DSL deployments. This article presents a new paradigm for DSL system design, which takes into account the multi-user aspects of the DSL transmission environment. Dynamic spectrum management (DSM) departs from the current design philosophy by enabling transceivers to autonomously and dynamically optimize their communication settings with respect to both the channel and the transmissions of neighboring systems. Along with this distributed optimization, when an additional degree of coordination becomes available for future DSL deployment, DSM will allow even greater improvement in DSL performance. Implementations are readily applicable without causing any performance degradation to the existing DSLs under static spectrum management. After providing an overview of the DSM concept, this article reviews two practical DSM methods: iterative water-filling, an autonomous distributed power control method enabling great improvement in performance, which can be implemented through software options in some existing ADSL and VDSL systems; and vectored-DMT, a coordinated transmission\/reception technique achieving crosstalk-free communication for DSL systems, which brings within reach the dream of providing universal Internet access at speeds close to 100 Mb\/s to 500 m on 1-2 lines and beyond 1 km on 2-4 lines. 
DSM-capable DSL thus enables the broadband age","tok_text":"dynam spectrum manag for next-gener dsl system \n the perform of dsl system is sever constrain by crosstalk due to the electromagnet coupl among the multipl twist pair make up a phone cabl . in order to reduc perform loss aris from crosstalk , dsl system are current design under the assumpt of worst-cas crosstalk scenario lead to overli conserv dsl deploy . thi articl present a new paradigm for dsl system design , which take into account the multi-us aspect of the dsl transmiss environ . dynam spectrum manag ( dsm ) depart from the current design philosophi by enabl transceiv to autonom and dynam optim their commun set with respect to both the channel and the transmiss of neighbor system . along with thi distribut optim , when an addit degre of coordin becom avail for futur dsl deploy , dsm will allow even greater improv in dsl perform . implement are readili applic without caus ani perform degrad to the exist dsl under static spectrum manag . after provid an overview of the dsm concept , thi articl review two practic dsm method : iter water-fil , an autonom distribut power control method enabl great improv in perform , which can be implement through softwar option in some exist adsl and vdsl system ; and vectored-dmt , a coordin transmiss \/ recept techniqu achiev crosstalk-fre commun for dsl system , which bring within reach the dream of provid univers internet access at speed close to 100 mb \/ s to 500 m on 1 - 2 line and beyond 1 km on 2 - 4 line . 
dsm-capabl dsl thu enabl the broadband age","ordered_present_kp":[397,118,156,177,0,572,713,933,1046,1066,1168,1206,1224,1241,1284,1367,1423],"keyphrases":["dynamic spectrum management","electromagnetic coupling","twisted pairs","phone cable","DSL system design","transceivers","distributed optimization","static spectrum management","iterative water-filling","autonomous distributed power control method","software options","VDSL systems","vectored-DMT","coordinated transmission\/reception","crosstalk-free communication","universal Internet access","500 m","DSL systems performance","data transmission","ADSL systems","broadband networks","100 Mbit\/s"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","M","R","M","M"]} {"id":"1352","title":"Elastically adaptive deformable models","abstract":"We present a technique for the automatic adaptation of a deformable model's elastic parameters within a Kalman filter framework for shape estimation applications. The novelty of the technique is that the model's elastic parameters are not constant, but spatio-temporally varying. The variation of the elastic parameters depends on the distance of the model from the data and the rate of change of this distance. Each pass of the algorithm uses physics-based modeling techniques to iteratively adjust both the geometric and the elastic degrees of freedom of the model in response to forces that are computed from the discrepancy between the model and the data. By augmenting the state equations of an extended Kalman filter to incorporate these additional variables, we are able to significantly improve the quality of the shape estimation. Therefore, the model's elastic parameters are always initialized to the same value and they are subsequently modified depending on the data and the noise distribution. 
We present results demonstrating the effectiveness of our method for both two-dimensional and three-dimensional data","tok_text":"elast adapt deform model \n we present a techniqu for the automat adapt of a deform model 's elast paramet within a kalman filter framework for shape estim applic . the novelti of the techniqu is that the model 's elast paramet are not constant , but spatio-tempor vari . the variat of the elast paramet depend on the distanc of the model from the data and the rate of chang of thi distanc . each pass of the algorithm use physics-bas model techniqu to iter adjust both the geometr and the elast degre of freedom of the model in respons to forc that are comput from the discrep between the model and the data . by augment the state equat of an extend kalman filter to incorpor these addit variabl , we are abl to significantli improv the qualiti of the shape estim . therefor , the model 's elast paramet are alway initi to the same valu and they are subsequ modifi depend on the data and the nois distribut . we present result demonstr the effect of our method for both two-dimension and three-dimension data","ordered_present_kp":[0,57,92,115,143,422,489,625,643],"keyphrases":["elastically adaptive deformable models","automatic adaptation","elastic parameters","Kalman filter framework","shape estimation","physics-based modeling techniques","elastic degrees of freedom","state equations","extended Kalman filter","geometric degrees of freedom"],"prmu":["P","P","P","P","P","P","P","P","P","R"]} {"id":"869","title":"An exactly solvable random satisfiability problem","abstract":"We introduce a new model for the generation of random satisfiability problems. It is an extension of the hyper-SAT model of Ricci-Tersenghi, Weigt and Zecchina (2001), which is a variant of the famous K-SAT model: it is extended to q-state variables and relates to a different choice of the statistical ensemble. 
The model has an exactly solvable statistic: the critical exponents and scaling functions of the SAT\/UNSAT transition are calculable at zero temperature, with no need of replicas, also with exact finite-size corrections. We also introduce an exact duality of the model, and show an analogy of thermodynamic properties with the random energy model of disordered spin system theory. Relations with error correcting codes are also discussed","tok_text":"an exactli solvabl random satisfi problem \n we introduc a new model for the gener of random satisfi problem . it is an extens of the hyper-sat model of ricci-tersenghi , weigt and zecchina ( 2001 ) , which is a variant of the famou k-sat model : it is extend to q-state variabl and relat to a differ choic of the statist ensembl . the model ha an exactli solvabl statist : the critic expon and scale function of the sat \/ unsat transit are calcul at zero temperatur , with no need of replica , also with exact finite-s correct . we also introduc an exact dualiti of the model , and show an analog of thermodynam properti with the random energi model of disord spin system theori . relat with error correct code are also discuss","ordered_present_kp":[3,133,262,313,504,549,600,630,653,692],"keyphrases":["exactly solvable random satisfiability problem","hyper-SAT model","q-state variables","statistical ensemble","exact finite-size corrections","exact duality","thermodynamic properties","random energy model","disordered spin system theory","error correcting codes"],"prmu":["P","P","P","P","P","P","P","P","P","P"]} {"id":"65","title":"The use of subtypes and stereotypes in the UML model","abstract":"Based on users' experiences of Version 1.3 of the Unified Modeling Language (UML) of the Object Management Group (OMG), a Request For Information in 1999 elicited several responses which were asked to identify \"problems\" but not to offer any solutions. 
One of these responses is examined for \"problems\" relating to the UML metamodel and here some solutions to the problems identified there are proposed. Specifically, we evaluate the metamodel relating to stereotypes versus subtypes; the various kinds of Classifier (particularly Types, Interfaces and Classes); the introduction of a new subtype for the whole part relationship; as well as identifying areas in the metamodel where the UML seems to have been used inappropriately in the very definition of the UML's metamodel","tok_text":"the use of subtyp and stereotyp in the uml model \n base on user ' experi of version 1.3 of the unifi model languag ( uml ) of the object manag group ( omg ) , a request for inform in 1999 elicit sever respons which were ask to identifi \" problem \" but not to offer ani solut . one of these respons is examin for \" problem \" relat to the uml metamodel and here some solut to the problem identifi there are propos . specif , we evalu the metamodel relat to stereotyp versu subtyp ; the variou kind of classifi ( particularli type , interfac and class ) ; the introduct of a new subtyp for the whole part relationship ; as well as identifi area in the metamodel where the uml seem to have been use inappropri in the veri definit of the uml 's metamodel","ordered_present_kp":[11,22,39,95,130,161,499,591],"keyphrases":["subtypes","stereotypes","UML model","Unified Modeling Language","Object Management Group","Request For Information","Classifier","whole part relationship"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"931","title":"Active vibration control of composite sandwich beams with piezoelectric extension-bending and shear actuators","abstract":"We have used quasi-static equations of piezoelectricity to derive a finite element formulation capable of modelling two different kinds of piezoelastically induced actuation in an adaptive composite sandwich beam. 
This formulation is made to couple certain piezoelectric constants to a transverse electric field to develop extension-bending actuation and shear-induced actuation. As an illustration, we present a sandwich model of three sublaminates: face\/core\/face. We develop a control scheme based on the linear quadratic regulator\/independent modal space control (LQR\/IMSC) method and use this to estimate the active stiffness and the active damping introduced by shear and extension-bending actuators. To assess the performance of each type of actuator, a dynamic response study is carried out in the modal domain. We observe that the shear actuator is more efficient in actively controlling the vibration than the extension-bending actuator for the same control effort","tok_text":"activ vibrat control of composit sandwich beam with piezoelectr extension-bend and shear actuat \n we have use quasi-stat equat of piezoelectr to deriv a finit element formul capabl of model two differ kind of piezoelast induc actuat in an adapt composit sandwich beam . thi formul is made to coupl certain piezoelectr constant to a transvers electr field to develop extension-bend actuat and shear-induc actuat . as an illustr , we present a sandwich model of three sublamin : face \/ core \/ face . we develop a control scheme base on the linear quadrat regul \/ independ modal space control ( lqr \/ imsc ) method and use thi to estim the activ stiff and the activ damp introduc by shear and extension-bend actuat . to assess the perform of each type of actuat , a dynam respons studi is carri out in the modal domain . 
we observ that the shear actuat is more effici in activ control the vibrat than the extension-bend actuat for the same control effort","ordered_present_kp":[110,52,153,209,239,306,332,366,392,442,466,538,570,637,657,83,366,763,803],"keyphrases":["piezoelectricity","shear actuators","quasi-static equations","finite element formulation","piezoelastically","adaptive composite sandwich beam","piezoelectric constants","transverse electric field","extension-bending actuation","extension-bending actuation","shear-induced actuation","sandwich model","sublaminates","linear quadratic regulator","modal space control","active stiffness","active damping","dynamic response","modal domain","finite element procedure","extension-bending actuators"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","M","P"]} {"id":"974","title":"Extrapolation in Lie groups with approximated BCH-formula","abstract":"We present an extrapolation algorithm for the integration of differential equations in Lie groups which is a suitable generalization of the well-known GBS-algorithm for ODEs. Sufficiently accurate approximations to the BCH-formula are required to reach a given order. We give such approximations with a minimized number of commutators","tok_text":"extrapol in lie group with approxim bch-formula \n we present an extrapol algorithm for the integr of differenti equat in lie group which is a suitabl gener of the well-known gbs-algorithm for ode . suffici accur approxim to the bch-formula are requir to reach a given order . 
we give such approxim with a minim number of commut","ordered_present_kp":[12,27,101,174],"keyphrases":["Lie groups","approximated BCH-formula","differential equations","GBS-algorithm","geometric integration","extrapolation methods"],"prmu":["P","P","P","P","M","M"]} {"id":"137","title":"An efficient DIPIE algorithm for CAD of electrostatically actuated MEMS devices","abstract":"Pull-in parameters are important properties of electrostatic actuators. Efficient and accurate analysis tools that can capture these parameters for different design geometries, are therefore essential. Current simulation tools approach the pull-in state by iteratively adjusting the voltage applied across the actuator electrodes. The convergence rate of this scheme gradually deteriorates as the pull-in state is approached. Moreover, the convergence is inconsistent and requires many mesh and accuracy refinements to assure reliable predictions. As a result, the design procedure of electrostatically actuated MEMS devices can be time-consuming. In this paper a novel Displacement Iteration Pull-In Extraction (DIPIE) scheme is presented. The DIPIE scheme is shown to converge consistently and far more rapidly than the Voltage Iterations (VI) scheme (>100 times faster!). The DIPIE scheme requires separate mechanical and electrostatic field solvers. Therefore, it can be easily implemented in existing MOEMS CAD packages. Moreover, using the DIPIE scheme, the pull-in parameters extraction can be performed in a fully automated mode, and no user input for search bounds is required","tok_text":"an effici dipi algorithm for cad of electrostat actuat mem devic \n pull-in paramet are import properti of electrostat actuat . effici and accur analysi tool that can captur these paramet for differ design geometri , are therefor essenti . current simul tool approach the pull-in state by iter adjust the voltag appli across the actuat electrod . 
the converg rate of thi scheme gradual deterior as the pull-in state is approach . moreov , the converg is inconsist and requir mani mesh and accuraci refin to assur reliabl predict . as a result , the design procedur of electrostat actuat mem devic can be time-consum . in thi paper a novel displac iter pull-in extract ( dipi ) scheme is present . the dipi scheme is shown to converg consist and far more rapidli than the voltag iter ( vi ) scheme ( > 100 time faster ! ) . the dipi scheme requir separ mechan and electrostat field solver . therefor , it can be easili implement in exist moem cad packag . moreov , use the dipi scheme , the pull-in paramet extract can be perform in a fulli autom mode , and no user input for search bound is requir","ordered_present_kp":[10,936,36,67,36,198,350,862,638],"keyphrases":["DIPIE algorithm","electrostatically actuated MEMS devices","electrostatic actuators","pull-in parameters","design geometries","convergence rate","displacement iteration","electrostatic field solver","MOEMS CAD packages","displacement iteration pull-in extraction scheme","mechanical field solver","computer-aided design"],"prmu":["P","P","P","P","P","P","P","P","P","R","R","M"]} {"id":"989","title":"A dynamic checkpoint scheduling scheme for fault tolerant distributed computing systems","abstract":"The selection of the optimal checkpointing interval has been a very critical issue in implementing checkpointing-recovery schemes for fault tolerant distributed systems. This paper presents a new scheme that allows a process to select the proper checkpointing interval dynamically. A process in the system evaluates the cost of checkpointing and possible rollback for each checkpointing interval and selects the proper time interval for the next checkpointing. 
Unlike the other schemes, the overhead incurred by both the checkpointing and rollback activities are considered for the cost evaluation, and the current communication pattern is reflected in the selection of the checkpointing interval. Moreover, the proposed scheme requires no extra message communication for the checkpointing interval selection and can easily be incorporated into the existing checkpointing coordination schemes","tok_text":"a dynam checkpoint schedul scheme for fault toler distribut comput system \n the select of the optim checkpoint interv ha been a veri critic issu in implement checkpointing-recoveri scheme for fault toler distribut system . thi paper present a new scheme that allow a process to select the proper checkpoint interv dynam . a process in the system evalu the cost of checkpoint and possibl rollback for each checkpoint interv and select the proper time interv for the next checkpoint . unlik the other scheme , the overhead incur by both the checkpoint and rollback activ are consid for the cost evalu , and the current commun pattern is reflect in the select of the checkpoint interv . 
moreov , the propos scheme requir no extra messag commun for the checkpoint interv select and can easili be incorpor into the exist checkpoint coordin scheme","ordered_present_kp":[2,50,94,588,617],"keyphrases":["dynamic checkpoint scheduling scheme","distributed computing systems","optimal checkpointing interval","cost evaluation","communication pattern","fault tolerant computing","rollback recovery"],"prmu":["P","P","P","P","P","R","M"]} {"id":"98","title":"Automating the compliance and supervision process","abstract":"New technology enables large broker\/dealers to supervise and ensure compliance across multiple branches and managers","tok_text":"autom the complianc and supervis process \n new technolog enabl larg broker \/ dealer to supervis and ensur complianc across multipl branch and manag","ordered_present_kp":[10,24,68],"keyphrases":["compliance","supervision","brokers","risk management"],"prmu":["P","P","P","M"]} {"id":"898","title":"Influence of advertising expenses on the characteristics of functioning of an insurance company","abstract":"The basic characteristics of the functioning of an insurance company, including the average capital, ruin and survival probabilities, and the conditional time before ruin, are examined with allowance for advertising expenses","tok_text":"influenc of advertis expens on the characterist of function of an insur compani \n the basic characterist of the function of an insur compani , includ the averag capit , ruin and surviv probabl , and the condit time befor ruin , are examin with allow for advertis expens","ordered_present_kp":[154,178,203],"keyphrases":["average capital","survival probabilities","conditional time","advertising expenses influence","insurance company functioning characteristics","ruin probabilities"],"prmu":["P","P","P","R","R","R"]} {"id":"820","title":"Yet some more complexity results for default logic","abstract":"We identify several new tractable subsets and several new intractable simple cases for 
reasoning in the propositional version of Reiter's default logic. The majority of our findings are related to brave reasoning. By making some intuitive observations, most classes that we identify can be derived quite easily from some subsets of default logic already known in the literature. Some of the subsets we discuss are subclasses of the so-called \"extended logic programs\". All the tractable subsets presented in this paper can be recognized in linear time","tok_text":"yet some more complex result for default logic \n we identifi sever new tractabl subset and sever new intract simpl case for reason in the proposit version of reiter 's default logic . the major of our find are relat to brave reason . by make some intuit observ , most class that we identifi can be deriv quit easili from some subset of default logic alreadi known in the literatur . some of the subset we discuss are subclass of the so-cal \" extend logic program \" . all the tractabl subset present in thi paper can be recogn in linear time","ordered_present_kp":[124,33,14,442,71],"keyphrases":["complexity results","default logic","tractable subsets","reasoning","extended logic programs","complexity classes","nonmonotonic reasoning"],"prmu":["P","P","P","P","P","R","M"]} {"id":"865","title":"Setup cost and lead time reductions on stochastic inventory models with a service level constraint","abstract":"The stochastic inventory models analyzed in this paper explore the problem of lead time associated with setup cost reductions for the continuous review and periodic review inventory models. For these two models with a mixture of backorders and lost sales, we respectively assume that their mean and variance of the lead time demand and protection interval (i.e., lead time plus review period) demand are known, but their probability distributions are unknown. 
We develop a minimax distribution free procedure to find the optimal solution-for each case","tok_text":"setup cost and lead time reduct on stochast inventori model with a servic level constraint \n the stochast inventori model analyz in thi paper explor the problem of lead time associ with setup cost reduct for the continu review and period review inventori model . for these two model with a mixtur of backord and lost sale , we respect assum that their mean and varianc of the lead time demand and protect interv ( i.e. , lead time plu review period ) demand are known , but their probabl distribut are unknown . we develop a minimax distribut free procedur to find the optim solution-for each case","ordered_present_kp":[186,15,35,67,231,300,312,376,397,480,525],"keyphrases":["lead time reductions","stochastic inventory models","service level constraint","setup cost reductions","periodic review inventory models","backorders","lost sales","lead time demand","protection interval","probability distributions","minimax distribution free procedure","continuous review inventory models"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"1436","title":"Modelling tomographic cone-beam projection data from a polyhedral phantom","abstract":"Analytical phantoms are used to generate projection data for testing reconstruction accuracy in computed axial tomography. A circular source locus (equivalent to rotating specimen with a fixed source) provides insufficient data for 'exact' reconstruction in cone-beam transmission tomography, thus phantom data are useful for studying the consequent errors and also for investigating alternative scanning loci and reconstruction techniques. We present an algorithm that can compute phantom cone-beam projection data from a phantom comprising geometrically defined polyhedra. Each polyhedron is defined as a set of polygons enclosing a volume of fixed linear attenuation coefficient. 
The algorithm works by projecting each polygon in turn onto the modelled detector array, which accumulates the product of source to polygon intersection distance (for the rays intersecting each detector element), linear attenuation coefficient and sign of projected polygon area (indicating whether rays enter or exit the polyhedron at this face). The phantom data are rotated according to the projection angle, whilst the source location and detector plane remain fixed. Polyhedra can be of simple geometric form, or complex surfaces derived from 3D images of real specimens. This algorithm is illustrated using a phantom comprising 989 238 polygons, representing an iso-surface generated from a microtomographic reconstruction of a piece of walrus tusk","tok_text":"model tomograph cone-beam project data from a polyhedr phantom \n analyt phantom are use to gener project data for test reconstruct accuraci in comput axial tomographi . a circular sourc locu ( equival to rotat specimen with a fix sourc ) provid insuffici data for ' exact ' reconstruct in cone-beam transmiss tomographi , thu phantom data are use for studi the consequ error and also for investig altern scan loci and reconstruct techniqu . we present an algorithm that can comput phantom cone-beam project data from a phantom compris geometr defin polyhedra . each polyhedron is defin as a set of polygon enclos a volum of fix linear attenu coeffici . the algorithm work by project each polygon in turn onto the model detector array , which accumul the product of sourc to polygon intersect distanc ( for the ray intersect each detector element ) , linear attenu coeffici and sign of project polygon area ( indic whether ray enter or exit the polyhedron at thi face ) . the phantom data are rotat accord to the project angl , whilst the sourc locat and detector plane remain fix . polyhedra can be of simpl geometr form , or complex surfac deriv from 3d imag of real specimen . 
thi algorithm is illustr use a phantom compris 989 238 polygon , repres an iso-surfac gener from a microtomograph reconstruct of a piec of walru tusk","ordered_present_kp":[6,46,119,143,289,397,535,628,1278,1318],"keyphrases":["tomographic cone-beam projection data","polyhedral phantom","reconstruction accuracy","computed axial tomography","cone-beam transmission tomography","alternative scanning loci","geometrically defined polyhedra","linear attenuation coefficient","microtomographic reconstruction","walrus tusk","reconstruction software accuracy","X-ray attenuation","cumulative pixel array","interpolation","geometry file"],"prmu":["P","P","P","P","P","P","P","P","P","P","M","M","M","U","U"]} {"id":"978","title":"On Implicit Euler for high-order high-index DAEs","abstract":"The Implicit Euler method is seldom used to solve differential-algebraic equations (DAEs) of differential index r >or= 3, since the method in general fails to converge in the first r - 2 steps after a change of stepsize. However, if the differential equation is of order d = r - 1 >or= 1, an alternative variable-step version of the Euler method can be shown uniformly convergent. For d = r - 1, this variable-step method is equivalent to the Implicit Euler except for the first r - 2 steps after a change of stepsize. Generalization to DAEs with differential equations of order d > r - 1 >or= 1, and to variable-order formulas is discussed","tok_text":"on implicit euler for high-ord high-index dae \n the implicit euler method is seldom use to solv differential-algebra equat ( dae ) of differenti index r > or= 3 , sinc the method in gener fail to converg in the first r - 2 step after a chang of stepsiz . howev , if the differenti equat is of order d = r - 1 > or= 1 , an altern variable-step version of the euler method can be shown uniformli converg . for d = r - 1 , thi variable-step method is equival to the implicit euler except for the first r - 2 step after a chang of stepsiz . 
gener to dae with differenti equat of order d > r - 1 > or= 1 , and to variable-ord formula is discuss","ordered_present_kp":[52,96,196,424,608,134],"keyphrases":["Implicit Euler method","differential-algebraic equations","differential index","convergence","variable-step method","variable-order formulas","stepsize change","linear multistep method","backward differentiation formula","initial value problem"],"prmu":["P","P","P","P","P","P","R","M","M","U"]} {"id":"616","title":"An overview of modems","abstract":"This paper describes cursory glance of different types of modems classified for application, range, line type, operating mode, synchronizing mode, modulation, etc., highly useful for all engineering students of communication, electrical, computer science and information technology students. This paper also describes the standards and protocols used and the future trend","tok_text":"an overview of modem \n thi paper describ cursori glanc of differ type of modem classifi for applic , rang , line type , oper mode , synchron mode , modul , etc . , highli use for all engin student of commun , electr , comput scienc and inform technolog student . thi paper also describ the standard and protocol use and the futur trend","ordered_present_kp":[15,108,120,132,148,183,236,290,303],"keyphrases":["modems","line type","operating mode","synchronizing mode","modulation","engineering students","information technology students","standards","protocols","communication students","electrical students","computer science students"],"prmu":["P","P","P","P","P","P","P","P","P","R","R","R"]} {"id":"69","title":"Sensitivity calibration of ultrasonic detectors based using ADD diagrams","abstract":"The paper considers basic problems related to utilization of ADD diagrams in calibrating sensitivity of ultrasonic detectors. We suggest that a convenient tool for solving such problems can be the software package ADD Universal. 
Version 2.1 designed for plotting individual ADD diagrams for normal and slanted transducers. The software is compatible with the contemporary operational system Windows-95(98). Reference signals for calibration are generated in a sample with cylindrical holes","tok_text":"sensit calibr of ultrason detector base use add diagram \n the paper consid basic problem relat to util of add diagram in calibr sensit of ultrason detector . we suggest that a conveni tool for solv such problem can be the softwar packag add univers . version 2.1 design for plot individu add diagram for normal and slant transduc . the softwar is compat with the contemporari oper system windows-95(98 ) . refer signal for calibr are gener in a sampl with cylindr hole","ordered_present_kp":[44,17,222,315,363,7,456,406,0],"keyphrases":["sensitivity calibration","calibration","ultrasonic detectors","ADD diagrams","software package","slanted transducers","contemporary operational system Windows-95(98","reference signals","cylindrical holes","normal transducers","ultrasonic testing"],"prmu":["P","P","P","P","P","P","P","P","P","R","M"]} {"id":"653","title":"Indexing-neglected and poorly understood","abstract":"The growth of the Internet has highlighted the use of machine indexing. The difficulties in using the Internet as a searching device can be frustrating. The use of the term \"python\" is given as an example. Machine indexing is noted as \"rotten\" and human indexing as \"capricious.\" The problem seems to be a lack of a theoretical foundation for the art of indexing. What librarians have learned over the last hundred years has yet to yield a consistent approach to what really works best in preparing index terms and in the ability of our customers to search the various indexes. An attempt is made to consider the elements of indexing, their pros and cons. The argument is made that machine indexing is far too prolific in its production of index terms. 
Neither librarians nor computer programmers have made much progress to improve Internet indexing. Human indexing has had the same problems for over fifty years","tok_text":"indexing-neglect and poorli understood \n the growth of the internet ha highlight the use of machin index . the difficulti in use the internet as a search devic can be frustrat . the use of the term \" python \" is given as an exampl . machin index is note as \" rotten \" and human index as \" caprici . \" the problem seem to be a lack of a theoret foundat for the art of index . what librarian have learn over the last hundr year ha yet to yield a consist approach to what realli work best in prepar index term and in the abil of our custom to search the variou index . an attempt is made to consid the element of index , their pro and con . the argument is made that machin index is far too prolif in it product of index term . neither librarian nor comput programm have made much progress to improv internet index . human index ha had the same problem for over fifti year","ordered_present_kp":[59,92,147,496,272],"keyphrases":["Internet","machine indexing","searching","human indexing","index terms"],"prmu":["P","P","P","P","P"]} {"id":"1206","title":"The MAGNeT toolkit: design, implementation and evaluation","abstract":"The current trend in constructing high-performance computing systems is to connect a large number of machines via a fast interconnect or a large-scale network such as the Internet. This approach relies on the performance of the interconnect (or Internet) to enable fast, large-scale distributed computing. A detailed understanding of the communication traffic is required in order to optimize the operation of the entire system. Network researchers traditionally monitor traffic in the network to gain the insight necessary to optimize network operations. Recent work suggests additional insight can be obtained by also monitoring traffic at the application level. 
The Monitor for Application-Generated Network Traffic toolkit (MAGNeT) we describe here monitors application traffic patterns in production systems, thus enabling more highly optimized networks and interconnects for the next generation of high-performance computing systems","tok_text":"the magnet toolkit : design , implement and evalu \n the current trend in construct high-perform comput system is to connect a larg number of machin via a fast interconnect or a large-scal network such as the internet . thi approach reli on the perform of the interconnect ( or internet ) to enabl fast , large-scal distribut comput . a detail understand of the commun traffic is requir in order to optim the oper of the entir system . network research tradit monitor traffic in the network to gain the insight necessari to optim network oper . recent work suggest addit insight can be obtain by also monitor traffic at the applic level . the monitor for application-gener network traffic toolkit ( magnet ) we describ here monitor applic traffic pattern in product system , thu enabl more highli optim network and interconnect for the next gener of high-perform comput system","ordered_present_kp":[83,208,642,523,159,83,4],"keyphrases":["MAGNeT","high-performance computing systems","high-performance computing","interconnects","Internet","optimized networks","Monitor for Application-Generated Network Traffic toolkit","network protocol","traffic characterization","virtual supercomputing","computational grids"],"prmu":["P","P","P","P","P","P","P","M","M","U","M"]} {"id":"1243","title":"HPF\/JA: extensions of High Performance Fortran for accelerating real-world applications","abstract":"This paper presents a set of extensions on High Performance Fortran (HPF) to make it more usable for parallelizing real-world production codes. HPF has been effective for programs that a compiler can automatically optimize efficiently. 
However, once the compiler cannot, there have been no ways for the users to explicitly parallelize or optimize their programs. In order to resolve the situation, we have developed a set of HPF extensions (HPF\/JA) to give the users more control over sophisticated parallelization and communication optimizations. They include parallelization of loops with complicated reductions, asynchronous communication, user-controllable shadow, and communication pattern reuse for irregular remote data accesses. Preliminary experiments have proved that the extensions are effective at increasing HPF's usability","tok_text":"hpf \/ ja : extens of high perform fortran for acceler real-world applic \n thi paper present a set of extens on high perform fortran ( hpf ) to make it more usabl for parallel real-world product code . hpf ha been effect for program that a compil can automat optim effici . howev , onc the compil can not , there have been no way for the user to explicitli parallel or optim their program . in order to resolv the situat , we have develop a set of hpf extens ( hpf \/ ja ) to give the user more control over sophist parallel and commun optim . they includ parallel of loop with complic reduct , asynchron commun , user-control shadow , and commun pattern reus for irregular remot data access . 
preliminari experi have prove that the extens are effect at increas hpf 's usabl","ordered_present_kp":[21,0,239,554],"keyphrases":["HPF","High Performance Fortran","compiler","parallelization of loops","parallel processing","data parallel language","supercomputer","parallel programming"],"prmu":["P","P","P","P","M","M","U","R"]} {"id":"94","title":"Gearing up for CLS bank","abstract":"Continuous-Linked Settlement, a dream of the foreign-exchange community for years, may finally become a reality by the end of 2002","tok_text":"gear up for cl bank \n continuous-link settlement , a dream of the foreign-exchang commun for year , may final becom a realiti by the end of 2002","ordered_present_kp":[22,66],"keyphrases":["continuous-linked settlement","foreign-exchange"],"prmu":["P","P"]} {"id":"985","title":"Local activity criteria for discrete-map CNN","abstract":"Discrete-time CNN systems are studied in this paper by the application of Chua's local activity principle. These systems are locally active everywhere except for one isolated parameter value. As a result, nonhomogeneous spatiotemporal patterns may be induced by any initial setting of the CNN system when the strength of the system diffusion coupling exceeds a critical threshold. The critical coupling coefficient can be derived from the loaded cell impedance of the CNN system. Three well-known 1D map CNN's (namely, the logistic map CNN, the magnetic vortex pinning map CNN, and the spiral wave reproducing map CNN) are introduced to illustrate the applications of the local activity principle. In addition, we use the cell impedance to demonstrate the period-doubling scenario in the logistic and the magnetic vortex pinning maps","tok_text":"local activ criteria for discrete-map cnn \n discrete-tim cnn system are studi in thi paper by the applic of chua 's local activ principl . these system are local activ everywher except for one isol paramet valu . 
as a result , nonhomogen spatiotempor pattern may be induc by ani initi set of the cnn system when the strength of the system diffus coupl exce a critic threshold . the critic coupl coeffici can be deriv from the load cell imped of the cnn system . three well-known 1d map cnn 's ( name , the logist map cnn , the magnet vortex pin map cnn , and the spiral wave reproduc map cnn ) are introduc to illustr the applic of the local activ principl . in addit , we use the cell imped to demonstr the period-doubl scenario in the logist and the magnet vortex pin map","ordered_present_kp":[44,0,25,426,108,227,382,506,527,563,708],"keyphrases":["local activity criteria","discrete-map CNN","discrete-time CNN systems","Chua's local activity principle","nonhomogeneous spatiotemporal patterns","critical coupling coefficient","loaded cell impedance","logistic map CNN","magnetic vortex pinning map CNN","spiral wave reproducing map CNN","period-doubling","difference equation"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","U"]} {"id":"552","title":"Anatomy of the coupling query in a Web warehouse","abstract":"To populate a data warehouse specifically designed for Web data, i.e. Web warehouse, it is imperative to harness relevant documents from the Web. In this paper, we describe a query mechanism called coupling query to glean relevant Web data in the context of our Web warehousing system called Warehouse Of Web Data (WHOWEDA). A coupling query may be used for querying both HTML and XML documents. Important features of our query mechanism are the ability to query metadata, content, internal and external (hyperlink) structure of Web documents based on partial knowledge, ability to express constraints on tag attributes and tagless segment of data, ability to express conjunctive as well as disjunctive query conditions compactly, ability to control execution of a Web query and preservation of the topological structure of hyperlinked documents in the query results. 
We also discuss how to formulate a query graphically and in textual form using a coupling graph and coupling text, respectively","tok_text":"anatomi of the coupl queri in a web warehous \n to popul a data warehous specif design for web data , i.e. web warehous , it is imper to har relev document from the web . in thi paper , we describ a queri mechan call coupl queri to glean relev web data in the context of our web wareh system call warehous of web data ( whoweda ) . a coupl queri may be use for queri both html and xml document . import featur of our queri mechan are the abil to queri metadata , content , intern and extern ( hyperlink ) structur of web document base on partial knowledg , abil to express constraint on tag attribut and tagless segment of data , abil to express conjunct as well as disjunct queri condit compactli , abil to control execut of a web queri and preserv of the topolog structur of hyperlink document in the queri result . we also discuss how to formul a queri graphic and in textual form use a coupl graph and coupl text , respect","ordered_present_kp":[15,32,58,296,380,451,462,516,537,586,603,665,756,776,905],"keyphrases":["coupling query","Web warehouse","data warehouse","Warehouse Of Web Data","XML documents","metadata","content","Web documents","partial knowledge","tag attributes","tagless segment","disjunctive query conditions","topological structure","hyperlinked documents","coupling text","HTML documents","internal structure","external structure","conjunctive query conditions","execution control","graphical query formulation","textual query formulation"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R","R","R","R","R"]} {"id":"1107","title":"A knowledge-navigation system for dimensional metrology","abstract":"Geometric dimensioning and tolerancing (GD&T) is a method to specify the dimensions and form of a part so that it will meet its design intent. GD&T is difficult to master for two main reasons. 
First, it is based on complex 3D geometric entities and relationships. Second, the geometry is associated with a large, diverse knowledge base of dimensional metrology with many interconnections. This paper describes an approach to create a dimensional metrology knowledge base that is organized around a set of key concepts and to represent those concepts as virtual objects that can be navigated with interactive, computer visualization techniques to access the associated knowledge. The approach can enable several applications. First is the application to convey the definition and meaning of GD&T over a broad range of tolerance types. Second is the application to provide a visualization of dimensional metrology knowledge within a control hierarchy of the inspection process. Third is the application to show the coverage of interoperability standards to enable industry to make decisions on standards development and harmonization efforts. A prototype system has been implemented to demonstrate the principles involved in the approach","tok_text":"a knowledge-navig system for dimension metrolog \n geometr dimens and toleranc ( gd&t ) is a method to specifi the dimens and form of a part so that it will meet it design intent . gd&t is difficult to master for two main reason . first , it is base on complex 3d geometr entiti and relationship . second , the geometri is associ with a larg , divers knowledg base of dimension metrolog with mani interconnect . thi paper describ an approach to creat a dimension metrolog knowledg base that is organ around a set of key concept and to repres those concept as virtual object that can be navig with interact , comput visual techniqu to access the associ knowledg . the approach can enabl sever applic . first is the applic to convey the definit and mean of gd&t over a broad rang of toler type . second is the applic to provid a visual of dimension metrolog knowledg within a control hierarchi of the inspect process . 
third is the applic to show the coverag of interoper standard to enabl industri to make decis on standard develop and harmon effort . a prototyp system ha been implement to demonstr the principl involv in the approach","ordered_present_kp":[50,69,29,614,959,898],"keyphrases":["dimensional metrology","geometric dimensioning","tolerancing","visualization","inspection","interoperability standards","knowledge navigation","manufacturing training","VRML","Web"],"prmu":["P","P","P","P","P","P","R","U","U","U"]} {"id":"1142","title":"Fast and accurate leaf verification for dynamic multileaf collimation using an electronic portal imaging device","abstract":"A prerequisite for accurate dose delivery of IMRT profiles produced with dynamic multileaf collimation (DMLC) is highly accurate leaf positioning. In our institution, leaf verification for DMLC was initially done with film and ionization chamber. To overcome the limitations of these methods, a fast, accurate and two-dimensional method for daily leaf verification, using our CCD-camera based electronic portal imaging device (EPID), has been developed. This method is based on a flat field produced with a 0.5 cm wide sliding gap for each leaf pair. Deviations in gap widths are detected as deviations in gray scale value profiles derived from the EPID images, and not by directly assessing leaf positions in the images. Dedicated software was developed to reduce the noise level in the low signal images produced with the narrow gaps. The accuracy of this quality assurance procedure was tested by introducing known leaf position errors. It was shown that errors in leaf gap as small as 0.01-0.02 cm could be detected, which is certainly adequate to guarantee accurate dose delivery of DMLC treatments, even for strongly modulated beam profiles. 
Using this method, it was demonstrated that both short and long term reproducibility in leaf positioning were within 0.01 cm (1 sigma ) for all gantry angles, and that the effect of gravity was negligible","tok_text":"fast and accur leaf verif for dynam multileaf collim use an electron portal imag devic \n a prerequisit for accur dose deliveri of imrt profil produc with dynam multileaf collim ( dmlc ) is highli accur leaf posit . in our institut , leaf verif for dmlc wa initi done with film and ioniz chamber . to overcom the limit of these method , a fast , accur and two-dimension method for daili leaf verif , use our ccd-camera base electron portal imag devic ( epid ) , ha been develop . thi method is base on a flat field produc with a 0.5 cm wide slide gap for each leaf pair . deviat in gap width are detect as deviat in gray scale valu profil deriv from the epid imag , and not by directli assess leaf posit in the imag . dedic softwar wa develop to reduc the nois level in the low signal imag produc with the narrow gap . the accuraci of thi qualiti assur procedur wa test by introduc known leaf posit error . it wa shown that error in leaf gap as small as 0.01 - 0.02 cm could be detect , which is certainli adequ to guarante accur dose deliveri of dmlc treatment , even for strongli modul beam profil . 
use thi method , it wa demonstr that both short and long term reproduc in leaf posit were within 0.01 cm ( 1 sigma ) for all gantri angl , and that the effect of graviti wa neglig","ordered_present_kp":[9,30,60,107,281,355,407,540,559,581,615,755,777,887,1081,202,1226],"keyphrases":["accurate leaf verification","dynamic multileaf collimation","electronic portal imaging device","accurate dose delivery","leaf positioning","ionization chamber","two-dimensional method","CCD-camera based electronic portal imaging device","sliding gap","leaf pair","gap widths","gray scale value profiles","noise level","signal images","leaf position errors","modulated beam profiles","gantry angles","intensity modulated radiation therapy profiles","electronic portal imaging device images"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","M","R"]} {"id":"1283","title":"UPSILON: universal programming system with incomplete lazy object notation","abstract":"This paper presents a new model of computation that differs from prior models in that it emphasizes data over flow control, has no named variables and has an object-oriented flavor. We prove that this model is a complete and confluent acceptable programming system and has a usable type theory. A new data synchronization primitive is introduced in order to achieve the above properties. Subtle variations of the model are shown to fall short of having all these necessary properties","tok_text":"upsilon : univers program system with incomplet lazi object notat \n thi paper present a new model of comput that differ from prior model in that it emphas data over flow control , ha no name variabl and ha an object-ori flavor . we prove that thi model is a complet and confluent accept program system and ha a usabl type theori . a new data synchron primit is introduc in order to achiev the abov properti . 
subtl variat of the model are shown to fall short of have all these necessari properti","ordered_present_kp":[0,10,209,18,311,337,38],"keyphrases":["UPSILON","universal programming system","programming system","incomplete lazy object notation","object-oriented flavor","usable type theory","data synchronization primitive"],"prmu":["P","P","P","P","P","P","P"]} {"id":"693","title":"Lifting factorization of discrete W transform","abstract":"A general method is proposed to factor the type-IV discrete W transform (DWT-IV) into lifting steps and additions. Then, based on the relationships among various types of DWTs, four types of DWTs are factored into lifting steps and additions. After approximating the lifting matrices, we get four types of new integer DWTs (IntDWT-I, IntDWT-II, IntDWT-III, and IntDWT-IV) which are floating-point multiplication free. Integer-to-integer transforms (II-DWT), which approximate to DWT, are also proposed. Fast algorithms are given for the new transforms and their computational complexities are analyzed","tok_text":"lift factor of discret w transform \n a gener method is propos to factor the type-iv discret w transform ( dwt-iv ) into lift step and addit . then , base on the relationship among variou type of dwt , four type of dwt are factor into lift step and addit . after approxim the lift matric , we get four type of new integ dwt ( intdwt-i , intdwt-ii , intdwt-iii , and intdwt-iv ) which are floating-point multipl free . integer-to-integ transform ( ii-dwt ) , which approxim to dwt , are also propos . 
fast algorithm are given for the new transform and their comput complex are analyz","ordered_present_kp":[0,106,275,556],"keyphrases":["lifting factorization","DWT","lifting matrices","computational complexity","discrete wavelet transform","integer transforms","data compression","feature extraction","multiframe detection","filter bank","lossless coding schemes","mobile devices","integer arithmetic","mobile computing"],"prmu":["P","P","P","P","M","R","U","U","U","U","U","U","M","M"]} {"id":"1182","title":"Optimization of the memory weighting function in stochastic functional self-organized sorting performed by a team of autonomous mobile agents","abstract":"The activity of a team of autonomous mobile agents formed by identical \"robot-like-ant\" individuals capable of performing a random walk through an environment that are able to recognize and move different \"objects\" is modeled. The emergent desired behavior is a distributed sorting and clustering based only on local information and a memory register that records the past objects encountered. An optimum weighting function for the memory registers is theoretically derived. The optimum time-dependent weighting function allows sorting and clustering of the randomly distributed objects in the shortest time. By maximizing the average speed of a texture feature (the contrast) we check the central assumption, the intermediate steady-states hypothesis, of our theoretical result. It is proved that the algorithm optimization based on maximum speed variation of the contrast feature gives relationships similar to the theoretically derived annealing law","tok_text":"optim of the memori weight function in stochast function self-organ sort perform by a team of autonom mobil agent \n the activ of a team of autonom mobil agent form by ident \" robot-like- \" individu capabl of perform a random walk through an environ that are abl to recogn and move differ \" object \" is model . 
the emerg desir behavior is a distribut sort and cluster base onli on local inform and a memori regist that record the past object encount . an optimum weight function for the memori regist is theoret deriv . the optimum time-depend weight function allow sort and cluster of the randomli distribut object in the shortest time . by maxim the averag speed of a textur featur ( the contrast ) we check the central assumpt , the intermedi steady-st hypothesi , of our theoret result . it is prove that the algorithm optim base on maximum speed variat of the contrast featur give relationship similar to the theoret deriv anneal law","ordered_present_kp":[94,218,13,68,359,812],"keyphrases":["memory weighting function","sorting","autonomous mobile agents","random walk","clustering","algorithm optimization"],"prmu":["P","P","P","P","P","P"]} {"id":"106","title":"Quantum Zeno subspaces","abstract":"The quantum Zeno effect is recast in terms of an adiabatic theorem when the measurement is described as the dynamical coupling to another quantum system that plays the role of apparatus. A few significant examples are proposed and their practical relevance discussed. We also focus on decoherence-free subspaces","tok_text":"quantum zeno subspac \n the quantum zeno effect is recast in term of an adiabat theorem when the measur is describ as the dynam coupl to anoth quantum system that play the role of apparatu . a few signific exampl are propos and their practic relev discuss . we also focu on decoherence-fre subspac","ordered_present_kp":[0,71,121,96,273],"keyphrases":["quantum Zeno subspaces","adiabatic theorem","measurement","dynamical coupling","decoherence-free subspaces"],"prmu":["P","P","P","P","P"]} {"id":"945","title":"Testing statistical bounds on entanglement using quantum chaos","abstract":"Previous results indicate that while chaos can lead to substantial entropy production, thereby maximizing dynamical entanglement, this still falls short of maximality. 
Random matrix theory modeling of composite quantum systems, investigated recently, entails a universal distribution of the eigenvalues of the reduced density matrices. We demonstrate that these distributions are realized in quantized chaotic systems by using a model of two coupled and kicked tops. We derive an explicit statistical universal bound on entanglement, which is also valid for the case of unequal dimensionality of the Hilbert spaces involved, and show that this describes well the bounds observed using composite quantized chaotic systems such as coupled tops","tok_text":"test statist bound on entangl use quantum chao \n previou result indic that while chao can lead to substanti entropi product , therebi maxim dynam entangl , thi still fall short of maxim . random matrix theori model of composit quantum system , investig recent , entail a univers distribut of the eigenvalu of the reduc densiti matric . we demonstr that these distribut are realiz in quantiz chaotic system by use a model of two coupl and kick top . 
we deriv an explicit statist univers bound on entangl , which is also valid for the case of unequ dimension of the hilbert space involv , and show that thi describ well the bound observ use composit quantiz chaotic system such as coupl top","ordered_present_kp":[5,22,34,108,134,188,218,271,313,383,438,564],"keyphrases":["statistical bounds","entanglement","quantum chaos","entropy production","maximality","random matrix theory","composite quantum systems","universal distribution","reduced density matrices","quantized chaotic systems","kicked tops","Hilbert spaces"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"143","title":"An automated irradiation device for use in cyclotrons","abstract":"Two cyclotrons are being operated at IPEN-CNEN\/SP: one model CV-28, capable of accelerating protons with energies up to 24 MeV and beam currents up to 30 mu A, and three other particles; the other one, model Cyclone 30, accelerates protons with energy of 30 MeV and currents up to 350 mu A. Both have the objective of irradiating targets both for radioisotope production for use in nuclear medicine and general research. The development of irradiating systems completely automatized was the objective of this work, always aiming to reduce the radiation exposition dose to the workers and to increase the reliability of use of these systems","tok_text":"an autom irradi devic for use in cyclotron \n two cyclotron are be oper at ipen-cnen \/ sp : one model cv-28 , capabl of acceler proton with energi up to 24 mev and beam current up to 30 mu a , and three other particl ; the other one , model cyclon 30 , acceler proton with energi of 30 mev and current up to 350 mu a. both have the object of irradi target both for radioisotop product for use in nuclear medicin and gener research . 
the develop of irradi system complet automat wa the object of thi work , alway aim to reduc the radiat exposit dose to the worker and to increas the reliabl of use of these system","ordered_present_kp":[3,33,101,127,240,364,395,415,528],"keyphrases":["automated irradiation device","cyclotrons","CV-28","protons","Cyclone 30","radioisotope production","nuclear medicine","general research","radiation exposition dose"],"prmu":["P","P","P","P","P","P","P","P","P"]} {"id":"900","title":"Mathematical models of functioning of an insurance company with allowance for the rate of return","abstract":"Models of the functioning of insurance companies are suggested, when the free capital increases from interest at a certain rate. The basic characteristics of the capital of a company are studied in the stationary regime","tok_text":"mathemat model of function of an insur compani with allow for the rate of return \n model of the function of insur compani are suggest , when the free capit increas from interest at a certain rate . the basic characterist of the capit of a compani are studi in the stationari regim","ordered_present_kp":[0,145,169,264],"keyphrases":["mathematical models","free capital increase","interest","stationary regime","insurance company functioning","return rate allowance"],"prmu":["P","P","P","P","R","R"]} {"id":"592","title":"Approximation theory of fuzzy systems based upon genuine many-valued implications - SISO cases","abstract":"It is proved that the single input and single output (SISO) fuzzy systems based upon genuine many-valued implications are universal approximators. It is shown theoretically that fuzzy control systems based upon genuine many-valued implications are equivalent to those based upon t-norm implications, the general approach to construct fuzzy systems is given. 
It is also shown that the defuzzifier based upon center of areas is not appropriate to the fuzzy systems based upon genuine many-valued implications","tok_text":"approxim theori of fuzzi system base upon genuin many-valu implic - siso case \n it is prove that the singl input and singl output ( siso ) fuzzi system base upon genuin many-valu implic are univers approxim . it is shown theoret that fuzzi control system base upon genuin many-valu implic are equival to those base upon t-norm implic , the gener approach to construct fuzzi system is given . it is also shown that defuzzifi base upon center of area is not appropri to the fuzzi system base upon genuin many-valu implic","ordered_present_kp":[68,49,19,190],"keyphrases":["fuzzy systems","many-valued implications","SISO","universal approximator","single input and single output fuzzy systems","Boolean implication"],"prmu":["P","P","P","P","R","M"]} {"id":"11","title":"Does social capital determine innovation? To what extent?","abstract":"This paper deals with two questions: Does social capital determine innovation in manufacturing firms? If it is the case, to what extent? To deal with these questions, we review the literature on innovation in order to see how social capital came to be added to the other forms of capital as an explanatory variable of innovation. In doing so, we have been led to follow the dominating view of the literature on social capital and innovation which claims that social capital cannot be captured through a single indicator, but that it actually takes many different forms that must be accounted for. Therefore, to the traditional explanatory variables of innovation, we have added five forms of structural social capital (business network assets, information network assets, research network assets, participation assets, and relational assets) and one form of cognitive social capital (reciprocal trust). 
In a context where empirical investigations regarding the relations between social capital and innovation are still scanty, this paper makes contributions to the advancement of knowledge in providing new evidence regarding the impact and the extent of social capital on innovation at the two decision-making stages considered in this study","tok_text":"doe social capit determin innov ? to what extent ? \n thi paper deal with two question : doe social capit determin innov in manufactur firm ? if it is the case , to what extent ? to deal with these question , we review the literatur on innov in order to see how social capit came to be ad to the other form of capit as an explanatori variabl of innov . in do so , we have been led to follow the domin view of the literatur on social capit and innov which claim that social capit can not be captur through a singl indic , but that it actual take mani differ form that must be account for . therefor , to the tradit explanatori variabl of innov , we have ad five form of structur social capit ( busi network asset , inform network asset , research network asset , particip asset , and relat asset ) and one form of cognit social capit ( reciproc trust ) . 
in a context where empir investig regard the relat between social capit and innov are still scanti , thi paper make contribut to the advanc of knowledg in provid new evid regard the impact and the extent of social capit on innov at the two decisionmak stage consid in thi studi","ordered_present_kp":[26,123,692,713,736,761,782,812,668,834],"keyphrases":["innovation","manufacturing firms","structural social capital","business network assets","information network assets","research network assets","participation assets","relational assets","cognitive social capital","reciprocal trust","two-stage decision-making process","degree of radicalness"],"prmu":["P","P","P","P","P","P","P","P","P","P","U","M"]} {"id":"54","title":"Controls help harmonic spray do OK removing residues","abstract":"Looks at how innovative wafer-cleaning equipment hit the market in a timely fashion thanks in part to controls maker Rockwell Automation","tok_text":"control help harmon spray do ok remov residu \n look at how innov wafer-clean equip hit the market in a time fashion thank in part to control maker rockwel autom","ordered_present_kp":[13,65,147],"keyphrases":["harmonic spray","wafer-cleaning equipment","Rockwell Automation","residues removal","PSI machine","Allen-Bradley ControlLogix automation control platform","motion control","Allen-Bradley 1336 Plus II variable frequency ac drives"],"prmu":["P","P","P","R","U","M","M","U"]} {"id":"858","title":"Recruiting and retaining women in undergraduate computing majors","abstract":"This paper recommends methods for increasing female participation in undergraduate computer science. The recommendations are based on recent and on-going research into the gender gap in computer science and related disciplines. They are intended to work in tandem with the Computing Research Association's recommendations for graduate programs to promote a general increase in women's participation in computing professions. 
Most of the suggestions offered could improve the educational environment for both male and female students. However, general improvements are likely to be of particular benefit to women because women in our society do not generally receive the same level of support that men receive for entering and persisting in this field","tok_text":"recruit and retain women in undergradu comput major \n thi paper recommend method for increas femal particip in undergradu comput scienc . the recommend are base on recent and on-go research into the gender gap in comput scienc and relat disciplin . they are intend to work in tandem with the comput research associ 's recommend for graduat program to promot a gener increas in women 's particip in comput profess . most of the suggest offer could improv the educ environ for both male and femal student . howev , gener improv are like to be of particular benefit to women becaus women in our societi do not gener receiv the same level of support that men receiv for enter and persist in thi field","ordered_present_kp":[28,93,199,122],"keyphrases":["undergraduate computing majors","female participation","computer science","gender gap","women retention","women recruitment"],"prmu":["P","P","P","P","M","R"]} {"id":"1363","title":"Heuristics for single-pass welding task sequencing","abstract":"Welding task sequencing is a prerequisite in the offline programming of robot arc welding. Single-pass welding task sequencing can be modelled as a modified travelling salesman problem. Owing to the difficulty of the resulting arc-routing problems, effective local search heuristics are developed. Computational speed becomes important because robot arc welding is often part of an automated process-planning procedure. Generating a reasonable solution in an acceptable time is necessary for effective automated process planning. 
Several different heuristics are proposed for solving the welding task-sequencing problem considering both productivity and the potential for welding distortion. Constructive heuristics based on the nearest neighbour concept and tabu search heuristics are developed and enhanced using improvement procedures. The effectiveness of the heuristics developed is tested and verified on actual welded structure problems and random problems","tok_text":"heurist for single-pass weld task sequenc \n weld task sequenc is a prerequisit in the offlin program of robot arc weld . single-pass weld task sequenc can be model as a modifi travel salesman problem . owe to the difficulti of the result arc-rout problem , effect local search heurist are develop . comput speed becom import becaus robot arc weld is often part of an autom process-plan procedur . gener a reason solut in an accept time is necessari for effect autom process plan . sever differ heurist are propos for solv the weld task-sequenc problem consid both product and the potenti for weld distort . construct heurist base on the nearest neighbour concept and tabu search heurist are develop and enhanc use improv procedur . the effect of the heurist develop is test and verifi on actual weld structur problem and random problem","ordered_present_kp":[12,607,86,104,169,264,299,367,564,592,637,667,821,795],"keyphrases":["single-pass welding task sequencing","offline programming","robot arc welding","modified travelling salesman problem","local search heuristics","computational speed","automated process-planning procedure","productivity","welding distortion","constructive heuristics","nearest neighbour concept","tabu search heuristics","welded structure problems","random problems"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1326","title":"Verona Lastre: consolidation provides opening for a new plate vendor","abstract":"Fewer companies than ever are manufacturing CTP plates. 
The market has become globalized, with just four big firms dominating the picture. To the Samor Group, however, globalization looked like an opportunity; it reasoned that many a national and local distributor would welcome a small, competitive, regional manufacturer. A couple of years ago it formed a company, Verona Lastre, to exploit that opportunity. Now Vela, as it's familiarly called, has launched its line of high-quality thermal plates and is busily lining up dealers in Europe and the Americas","tok_text":"verona lastr : consolid provid open for a new plate vendor \n fewer compani than ever are manufactur ctp plate . the market ha becom global , with just four big firm domin the pictur . to the samor group , howev , global look like an opportun ; it reason that mani a nation and local distributor would welcom a small , competit , region manufactur . a coupl of year ago it form a compani , verona lastr , to exploit that opportun . now vela , as it 's familiarli call , ha launch it line of high-qual thermal plate and is busili line up dealer in europ and the america","ordered_present_kp":[0,435,100],"keyphrases":["Verona Lastre","CTP plates","Vela"],"prmu":["P","P","P"]} {"id":"773","title":"Topology-reducing surface simplification using a discrete solid representation","abstract":"This paper presents a new approach for generating coarse-level approximations of topologically complex models. Dramatic topology reduction is achieved by converting a 3D model to and from a volumetric representation. Our approach produces valid, error-bounded models and supports the creation of approximations that do not interpenetrate the original model, either being completely contained in the input solid or bounding it. Several simple to implement versions of our approach are presented and discussed. 
We show that these methods perform significantly better than other surface-based approaches when simplifying topologically-rich models such as scene parts and complex mechanical assemblies","tok_text":"topology-reduc surfac simplif use a discret solid represent \n thi paper present a new approach for gener coarse-level approxim of topolog complex model . dramat topolog reduct is achiev by convert a 3d model to and from a volumetr represent . our approach produc valid , error-bound model and support the creation of approxim that do not interpenetr the origin model , either be complet contain in the input solid or bound it . sever simpl to implement version of our approach are present and discuss . we show that these method perform significantli better than other surface-bas approach when simplifi topologically-rich model such as scene part and complex mechan assembl","ordered_present_kp":[105,130,36,0,199,222,271,637,652],"keyphrases":["topology-reducing surface simplification","discrete solid representation","coarse-level approximations","topologically complex models","3D model","volumetric representation","error-bounded models","scene parts","complex mechanical assemblies"],"prmu":["P","P","P","P","P","P","P","P","P"]} {"id":"736","title":"The year of the racehorse [China Telecom]","abstract":"Does China really offer the telecoms industry a route out of the telecoms slump? According to the Chinese government it has yet to receive a single application from foreign companies looking to invest in the country's domestic telecoms sector since the country joined the World Trade Organisation","tok_text":"the year of the racehors [ china telecom ] \n doe china realli offer the telecom industri a rout out of the telecom slump ? 
accord to the chines govern it ha yet to receiv a singl applic from foreign compani look to invest in the countri 's domest telecom sector sinc the countri join the world trade organis","ordered_present_kp":[27,72,27],"keyphrases":["China","China Telecom","telecoms industry","foreign investment","China Netcom","China Unicorn"],"prmu":["P","P","P","R","M","M"]} {"id":"1062","title":"Fidelity of quantum teleportation through noisy channels","abstract":"We investigate quantum teleportation through noisy quantum channels by solving analytically and numerically a master equation in the Lindblad form. We calculate the fidelity as a function of decoherence rates and angles of a state to be teleported. It is found that the average fidelity and the range of states to be accurately teleported depend on types of noises acting on quantum channels. If the quantum channels are subject to isotropic noise, the average fidelity decays to 1\/2, which is smaller than the best possible value of 2\/3 obtained only by the classical communication. On the other hand, if the noisy quantum channel is modeled by a single Lindblad operator, the average fidelity is always greater than 2\/3","tok_text":"fidel of quantum teleport through noisi channel \n we investig quantum teleport through noisi quantum channel by solv analyt and numer a master equat in the lindblad form . we calcul the fidel as a function of decoher rate and angl of a state to be teleport . it is found that the averag fidel and the rang of state to be accur teleport depend on type of nois act on quantum channel . if the quantum channel are subject to isotrop nois , the averag fidel decay to 1\/2 , which is smaller than the best possibl valu of 2\/3 obtain onli by the classic commun . 
on the other hand , if the noisi quantum channel is model by a singl lindblad oper , the averag fidel is alway greater than 2\/3","ordered_present_kp":[0,9,87,93,539,625,422],"keyphrases":["fidelity","quantum teleportation","noisy quantum channels","quantum channels","isotropic noise","classical communication","Lindblad operator","analytical solution","numerical solution","Alice","Bob","sender","recipient","dual classical channels","eigenstate"],"prmu":["P","P","P","P","P","P","P","M","M","U","U","U","U","M","U"]} {"id":"1027","title":"Extracting straight road structure in urban environments using IKONOS satellite imagery","abstract":"We discuss a fully automatic technique for extracting roads in urban environments. The method has its bases in a vegetation mask derived from multispectral IKONOS data and in texture derived from panchromatic IKONOS data. These two techniques together are used to distinguish road pixels. We then move from individual pixels to an object-based representation that allows reasoning on a higher level. Recognition of individual segments and intersections and the relationships among them are used to determine underlying road structure and to then logically hypothesize the existence of additional road network components. We show results on an image of San Diego, California. The object-based processing component may be adapted to utilize other basis techniques as well, and could be used to build a road network in any scene having a straight-line structured topology","tok_text":"extract straight road structur in urban environ use ikono satellit imageri \n we discuss a fulli automat techniqu for extract road in urban environ . the method ha it base in a veget mask deriv from multispectr ikono data and in textur deriv from panchromat ikono data . these two techniqu togeth are use to distinguish road pixel . we then move from individu pixel to an object-bas represent that allow reason on a higher level . 
recognit of individu segment and intersect and the relationship among them are use to determin underli road structur and to then logic hypothes the exist of addit road network compon . we show result on an imag of san diego , california . the object-bas process compon may be adapt to util other basi techniqu as well , and could be use to build a road network in ani scene have a straight-lin structur topolog","ordered_present_kp":[8,34,52,90,176,228,246,319,371,593,644,673,811],"keyphrases":["straight road structure","urban environments","IKONOS satellite imagery","fully automatic technique","vegetation mask","texture","panchromatic IKONOS data","road pixels","object-based representation","road network components","San Diego","object-based processing component","straight-line structured topology","higher level reasoning","individual segment recognition","high-resolution imagery","large-scale feature extraction","vectorized road network"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","M","M","M"]} {"id":"70","title":"IT security issues: the need for end user oriented research","abstract":"Considerable attention has been given to the technical and policy issues involved with IT security issues in recent years. The growth of e-commerce and the Internet, as well as widely publicized hacker attacks, have brought IT security into prominent focus and routine corporate attention. Yet, much more research is needed from the end user (EU) perspective. This position paper is a call for such research and outlines some possible directions of interest","tok_text":"it secur issu : the need for end user orient research \n consider attent ha been given to the technic and polici issu involv with it secur issu in recent year . the growth of e-commerc and the internet , as well as wide public hacker attack , have brought it secur into promin focu and routin corpor attent . yet , much more research is need from the end user ( eu ) perspect . 
thi posit paper is a call for such research and outlin some possibl direct of interest","ordered_present_kp":[0,29,174,192,226],"keyphrases":["IT security","end user oriented research","e-commerce","Internet","hacker attacks","information technology research","end user computing"],"prmu":["P","P","P","P","P","M","M"]} {"id":"122","title":"A formal framework for viewpoint consistency","abstract":"Multiple viewpoint models of system development are becoming increasingly important. Each viewpoint offers a different perspective on the target system and system development involves parallel refinement of the multiple views. Viewpoint related approaches have been considered in a number of different guises by a spectrum of researchers. Our work particularly focuses on the use of viewpoints in open distributed processing (ODP) which is an ISO\/ITU standardisation framework. The requirements of viewpoint modelling in ODP are very broad and, hence, demanding. Multiple viewpoints, though, prompt the issue of consistency between viewpoints. This paper describes a very general interpretation of consistency which we argue is broad enough to meet the requirements of consistency in ODP. We present a formal framework for this general interpretation; highlight basic properties of the interpretation and locate restricted classes of consistency. Strategies for checking consistency are also investigated. Throughout we illustrate our theory using the formal description technique LOTOS. Thus, the paper also characterises the nature of and options for consistency checking in LOTOS","tok_text":"a formal framework for viewpoint consist \n multipl viewpoint model of system develop are becom increasingli import . each viewpoint offer a differ perspect on the target system and system develop involv parallel refin of the multipl view . viewpoint relat approach have been consid in a number of differ guis by a spectrum of research . 
our work particularli focus on the use of viewpoint in open distribut process ( odp ) which is an iso \/ itu standardis framework . the requir of viewpoint model in odp are veri broad and , henc , demand . multipl viewpoint , though , prompt the issu of consist between viewpoint . thi paper describ a veri gener interpret of consist which we argu is broad enough to meet the requir of consist in odp . we present a formal framework for thi gener interpret ; highlight basic properti of the interpret and locat restrict class of consist . strategi for check consist are also investig . throughout we illustr our theori use the formal descript techniqu loto . thu , the paper also characteris the natur of and option for consist check in loto","ordered_present_kp":[43,2,23,70,392,417,435,1056,963,988],"keyphrases":["formal framework","viewpoint consistency","multiple viewpoint models","system development","open distributed processing","ODP","ISO\/ITU standardisation framework","formal description technique","LOTOS","consistency checking","development models","process algebra"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","M"]} {"id":"961","title":"Modular and visual specification of hybrid systems: an introduction to HyCharts","abstract":"Visual description techniques are particularly important for the design of hybrid systems, because specifications of such systems usually have to be discussed between engineers from a number of different disciplines. Modularity is vital for hybrid systems not only because it allows to handle large systems, but also because it permits to think in terms of components, which is familiar to engineers. Based on two different interpretations for hierarchic graphs and on a clear hybrid computation model, we develop HyCharts. HyCharts consist of two modular visual formalisms, one for the specification of the architecture and one for the specification of the behavior of hybrid systems. 
The operators on hierarchic graphs enable us to give a surprisingly simple denotational semantics for many concepts known from statechart-like formalisms. Due to a very general composition operator, HyCharts can easily be composed with description techniques from other engineering disciplines. Such heterogeneous system specifications seem to be particularly appropriate for hybrid systems because of their interdisciplinary character","tok_text":"modular and visual specif of hybrid system : an introduct to hychart \n visual descript techniqu are particularli import for the design of hybrid system , becaus specif of such system usual have to be discuss between engin from a number of differ disciplin . modular is vital for hybrid system not onli becaus it allow to handl larg system , but also becaus it permit to think in term of compon , which is familiar to engin . base on two differ interpret for hierarch graph and on a clear hybrid comput model , we develop hychart . hychart consist of two modular visual formal , one for the specif of the architectur and one for the specif of the behavior of hybrid system . the oper on hierarch graph enabl us to give a surprisingli simpl denot semant for mani concept known from statechart-lik formal . due to a veri gener composit oper , hychart can easili be compos with descript techniqu from other engin disciplin . 
such heterogen system specif seem to be particularli appropri for hybrid system becaus of their interdisciplinari charact","ordered_present_kp":[12,29,61,71,387,458,488,739,926],"keyphrases":["visual specification","hybrid systems","HyCharts","visual description techniques","components","hierarchic graphs","hybrid computation model","denotational semantics","heterogeneous system specifications","modular specification","statechart","formal specification"],"prmu":["P","P","P","P","P","P","P","P","P","R","U","R"]} {"id":"924","title":"Dynamic testing of inflatable structures using smart materials","abstract":"In this paper we present experimental investigations of the vibration testing of an inflated, thin-film torus using smart materials. Lightweight, inflatable structures are very attractive in satellite applications. However, the lightweight, flexible and highly damped nature of inflated structures poses difficulties in ground vibration testing. In this study, we show that polyvinylidene fluoride (PVDF) patches and recently developed macro-fiber composite actuators may be used as sensors and actuators in identifying modal parameters. Both smart materials can be integrated unobtrusively into the skin of a torus or space device forming an attractive testing arrangement. The addition of actuators and PVDF sensors to the torus does not significantly interfere with the suspension modes of a free-free boundary condition, and can be considered an integral part of the inflated structure. The results indicate the potential of using smart materials to measure and control the dynamic response of inflated structures","tok_text":"dynam test of inflat structur use smart materi \n in thi paper we present experiment investig of the vibrat test of an inflat , thin-film toru use smart materi . lightweight , inflat structur are veri attract in satellit applic . howev , the lightweight , flexibl and highli damp natur of inflat structur pose difficulti in ground vibrat test . 
in thi studi , we show that polyvinyliden fluorid ( pvdf ) patch and recent develop macro-fib composit actuat may be use as sensor and actuat in identifi modal paramet . both smart materi can be integr unobtrus into the skin of a toru or space devic form an attract test arrang . the addit of actuat and pvdf sensor to the toru doe not significantli interfer with the suspens mode of a free-fre boundari condit , and can be consid an integr part of the inflat structur . the result indic the potenti of use smart materi to measur and control the dynam respons of inflat structur","ordered_present_kp":[127,34,211,14,323,648,428,498,582,739,890],"keyphrases":["inflated structures","smart materials","thin-film torus","satellite applications","ground vibration testing","macro-fiber composite actuators","modal parameters","space device","PVDF sensors","boundary condition","dynamic response","polyvinylidene fluoride patches","Kapton torus"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R","M"]} {"id":"1046","title":"A suggestion of fractional-order controller for flexible spacecraft attitude control","abstract":"A controller design method for flexible spacecraft attitude control is proposed. The system is first described by a partial differential equation with internal damping. Then the frequency response is analyzed, and the three basic characteristics of the flexible system, namely, average function, lower bound and upper bound are defined. On this basis, a fractional-order controller is proposed, which functions as phase stabilization control for lower frequency and smoothly enters to amplitude stabilization at higher frequency by proper amplitude attenuation. It is shown that the equivalent damping ratio increases in proportion to the square of frequency","tok_text":"a suggest of fractional-ord control for flexibl spacecraft attitud control \n a control design method for flexibl spacecraft attitud control is propos . 
the system is first describ by a partial differenti equat with intern damp . then the frequenc respons is analyz , and the three basic characterist of the flexibl system , name , averag function , lower bound and upper bound are defin . on thi basi , a fractional-ord control is propos , which function as phase stabil control for lower frequenc and smoothli enter to amplitud stabil at higher frequenc by proper amplitud attenu . it is shown that the equival damp ratio increas in proport to the squar of frequenc","ordered_present_kp":[13,40,185,215,238,458,520,612],"keyphrases":["fractional-order controller","flexible spacecraft attitude control","partial differential equation","internal damping","frequency response","phase stabilization control","amplitude stabilization","damping ratio"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"1003","title":"Lob's theorem as a limitation on mechanism","abstract":"We argue that Lob's Theorem implies a limitation on mechanism. Specifically, we argue, via an application of a generalized version of Lob's Theorem, that any particular device known by an observer to be mechanical cannot be used as an epistemic authority (of a particular type) by that observer: either the belief-set of such an authority is not mechanizable or, if it is, there is no identifiable formal system of which the observer can know (or truly believe) it to be the theorem-set. This gives, we believe, an important and hitherto unnoticed connection between mechanism and the use of authorities by human-like epistemic agents","tok_text":"lob 's theorem as a limit on mechan \n we argu that lob 's theorem impli a limit on mechan . 
specif , we argu , via an applic of a gener version of lob 's theorem , that ani particular devic known by an observ to be mechan can not be use as an epistem author ( of a particular type ) by that observ : either the belief-set of such an author is not mechaniz or , if it is , there is no identifi formal system of which the observ can know ( or truli believ ) it to be the theorem-set . thi give , we believ , an import and hitherto unnot connect between mechan and the use of author by human-lik epistem agent","ordered_present_kp":[20,243,311,393,469,583],"keyphrases":["limitation on mechanism","epistemic authority","belief-set","formal system","theorem-set","human-like epistemic agents","Lob Theorem"],"prmu":["P","P","P","P","P","P","R"]} {"id":"881","title":"Is diversity in computing a moral matter?","abstract":"We have presented an ethical argument that takes into consideration the subtleties of the issue surrounding under-representation in computing. We should emphasize that there is nothing subtle about overt, unfair discrimination. Where such injustice occurs, we condemn it. Our concern is that discrimination need not be explicit or overt. It need not be individual-to-individual. Rather, it can be subtly built into social practices and social institutions. Our analysis raises ethical questions about aspects of computing that drive women away, aspects that can be changed in ways that improve the profession and access to the profession. We hope that computing will move towards these improvements","tok_text":"is divers in comput a moral matter ? \n we have present an ethic argument that take into consider the subtleti of the issu surround under-represent in comput . we should emphas that there is noth subtl about overt , unfair discrimin . where such injustic occur , we condemn it . our concern is that discrimin need not be explicit or overt . it need not be individual-to-individu . rather , it can be subtli built into social practic and social institut . 
our analysi rais ethic question about aspect of comput that drive women away , aspect that can be chang in way that improv the profess and access to the profess . we hope that comput will move toward these improv","ordered_present_kp":[58,215,417,436,520],"keyphrases":["ethical argument","unfair discrimination","social practices","social institutions","women","computing under-representation"],"prmu":["P","P","P","P","P","R"]} {"id":"1347","title":"A maximum-likelihood surface estimator for dense range data","abstract":"Describes how to estimate 3D surface models from dense sets of noisy range data taken from different points of view, i.e., multiple range maps. The proposed method uses a sensor model to develop an expression for the likelihood of a 3D surface, conditional on a set of noisy range measurements. Optimizing this likelihood with respect to the model parameters provides an unbiased and efficient estimator. The proposed numerical algorithms make this estimation computationally practical for a wide variety of circumstances. The results from this method compare favorably with state-of-the-art approaches that rely on the closest-point or perpendicular distance metric, a convenient heuristic that produces biased solutions and fails completely when surfaces are not sufficiently smooth, as in the case of complex scenes or noisy range measurements. Empirical results on both simulated and real ladar data demonstrate the effectiveness of the proposed method for several different types of problems. Furthermore, the proposed method offers a general framework that can accommodate extensions to include surface priors, more sophisticated noise models, and other sensing modalities, such as sonar or synthetic aperture radar","tok_text":"a maximum-likelihood surfac estim for dens rang data \n describ how to estim 3d surfac model from dens set of noisi rang data taken from differ point of view , i.e. , multipl rang map . 
the propos method use a sensor model to develop an express for the likelihood of a 3d surfac , condit on a set of noisi rang measur . optim thi likelihood with respect to the model paramet provid an unbias and effici estim . the propos numer algorithm make thi estim comput practic for a wide varieti of circumst . the result from thi method compar favor with state-of-the-art approach that reli on the closest-point or perpendicular distanc metric , a conveni heurist that produc bias solut and fail complet when surfac are not suffici smooth , as in the case of complex scene or noisi rang measur . empir result on both simul and real ladar data demonstr the effect of the propos method for sever differ type of problem . furthermor , the propos method offer a gener framework that can accommod extens to includ surfac prior , more sophist nois model , and other sens modal , such as sonar or synthet apertur radar","ordered_present_kp":[2,38,76,109,209,646,666,749,299,817,1071,1080],"keyphrases":["maximum-likelihood surface estimator","dense range data","3D surface models","noisy range data","sensor model","noisy range measurements","heuristic","biased solutions","complex scenes","real ladar data","sonar","synthetic aperture radar","unbiased estimator","simulated ladar data","surface reconstruction","surface fitting","optimal estimation","parameter estimation","Bayesian estimation","registration","calibration"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R","R","M","M","R","R","M","U","U"]} {"id":"1302","title":"Dynamics of the firing probability of noisy integrate-and-fire neurons","abstract":"Cortical neurons in vivo undergo a continuous bombardment due to synaptic activity, which acts as a major source of noise. We investigate the effects of the noise filtering by synapses with various levels of realism on integrate-and-fire neuron dynamics. 
The noise input is modeled by white (for instantaneous synapses) or colored (for synapses with a finite relaxation time) noise. Analytical results for the modulation of firing probability in response to an oscillatory input current are obtained by expanding a Fokker-Planck equation for small parameters of the problem-when both the amplitude of the modulation is small compared to the background firing rate and the synaptic time constant is small compared to the membrane time constant. We report the detailed calculations showing that if a synaptic decay time constant is included in the synaptic current model, the firing-rate modulation of the neuron due to an oscillatory input remains finite in the high-frequency limit with no phase lag. In addition, we characterize the low-frequency behavior and the behavior of the high-frequency limit for intermediate decay times. We also characterize the effects of introducing a rise time to the synaptic currents and the presence of several synaptic receptors with different kinetics. In both cases, we determine, using numerical simulations, an effective decay time constant that describes the neuronal response completely","tok_text":"dynam of the fire probabl of noisi integrate-and-fir neuron \n cortic neuron in vivo undergo a continu bombard due to synapt activ , which act as a major sourc of nois . we investig the effect of the nois filter by synaps with variou level of realism on integrate-and-fir neuron dynam . the nois input is model by white ( for instantan synaps ) or color ( for synaps with a finit relax time ) nois . analyt result for the modul of fire probabl in respons to an oscillatori input current are obtain by expand a fokker-planck equat for small paramet of the problem-when both the amplitud of the modul is small compar to the background fire rate and the synapt time constant is small compar to the membran time constant . 
we report the detail calcul show that if a synapt decay time constant is includ in the synapt current model , the firing-r modul of the neuron due to an oscillatori input remain finit in the high-frequ limit with no phase lag . in addit , we character the low-frequ behavior and the behavior of the high-frequ limit for intermedi decay time . we also character the effect of introduc a rise time to the synapt current and the presenc of sever synapt receptor with differ kinet . in both case , we determin , use numer simul , an effect decay time constant that describ the neuron respons complet","ordered_present_kp":[13,29,62,117,199,509,650,694,934,1161,1230],"keyphrases":["firing probability","noisy integrate-and-fire neurons","cortical neurons","synaptic activity","noise filtering","Fokker-Planck equation","synaptic time constant","membrane time constant","phase lag","synaptic receptors","numerical simulation","white noise","colored noise"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R","R"]} {"id":"757","title":"Ultrafast compound imaging for 2-D motion vector estimation: application to transient elastography","abstract":"This paper describes a new technique for two-dimensional (2-D) imaging of the motion vector at a very high frame rate with ultrasound. Its potential is experimentally demonstrated for transient elastography. But, beyond this application, it also could be promising for color flow and reflectivity imaging. To date, only axial displacements induced in human tissues by low-frequency vibrators were measured during transient elastography. The proposed technique allows us to follow both axial and lateral displacements during the shear wave propagation and thus should improve Young's modulus image reconstruction. The process is a combination of several ideas well-known in ultrasonic imaging: ultra-fast imaging, multisynthetic aperture beamforming, 1-D speckle tracking, and compound imaging. 
Classical beamforming in the transmit mode is replaced here by a single plane wave insonification increasing the frame rate by at least a factor of 128. The beamforming is achieved only in the receive mode on two independent subapertures. Comparison of successive frames by a classical 1-D speckle tracking algorithm allows estimation of displacements along two different directions linked to the subapertures beams. The variance of the estimates is finally improved by tilting the emitting plane wave at each insonification, thus allowing reception of successive decorrelated speckle patterns","tok_text":"ultrafast compound imag for 2-d motion vector estim : applic to transient elastographi \n thi paper describ a new techniqu for two-dimension ( 2-d ) imag of the motion vector at a veri high frame rate with ultrasound . it potenti is experiment demonstr for transient elastographi . but , beyond thi applic , it also could be promis for color flow and reflect imag . to date , onli axial displac induc in human tissu by low-frequ vibrat were measur dure transient elastographi . the propos techniqu allow us to follow both axial and later displac dure the shear wave propag and thu should improv young 's modulu imag reconstruct . the process is a combin of sever idea well-known in ultrason imag : ultra-fast imag , multisynthet apertur beamform , 1-d speckl track , and compound imag . classic beamform in the transmit mode is replac here by a singl plane wave insonif increas the frame rate by at least a factor of 128 . the beamform is achiev onli in the receiv mode on two independ subapertur . comparison of success frame by a classic 1-d speckl track algorithm allow estim of displac along two differ direct link to the subapertur beam . 
the varianc of the estim is final improv by tilt the emit plane wave at each insonif , thu allow recept of success decorrel speckl pattern","ordered_present_kp":[0,64,184,205,350,403,380,531,554,594,681,715,1258,844],"keyphrases":["ultrafast compound imaging","transient elastography","high frame rate","ultrasound","reflectivity imaging","axial displacements","human tissues","lateral displacements","shear wave propagation","Young's modulus image reconstruction","ultrasonic imaging","multisynthetic aperture beamforming","single plane wave insonification","decorrelated speckle patterns","2D motion vector estimation","two-dimensional imaging","2D imaging","colour flow imaging","1D speckle tracking algorithm"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","M","R","M","M","M"]} {"id":"712","title":"Waiting-time distribution of a discrete-time multiserver queue with correlated arrivals and deterministic service times: D-MAP\/D\/k system","abstract":"We derive the waiting-time distribution of a discrete-time multiserver queue with correlated arrivals and deterministic (or constant) service times. We show that the procedure for obtaining the waiting-time distribution of a multiserver queue is reduced to that of a single-server queue. We present a complete solution to the waiting-time distribution of D-MAP\/D\/k queue together with some computational results","tok_text":"waiting-tim distribut of a discrete-tim multiserv queue with correl arriv and determinist servic time : d-map \/ d \/ k system \n we deriv the waiting-tim distribut of a discrete-tim multiserv queue with correl arriv and determinist ( or constant ) servic time . we show that the procedur for obtain the waiting-tim distribut of a multiserv queue is reduc to that of a single-serv queue . 
we present a complet solut to the waiting-tim distribut of d-map \/ d \/ k queue togeth with some comput result","ordered_present_kp":[0,27,61,78,104],"keyphrases":["waiting-time distribution","discrete-time multiserver queue","correlated arrivals","deterministic service times","D-MAP\/D\/k system","Markovian arrival process"],"prmu":["P","P","P","P","P","M"]} {"id":"839","title":"Women in computing: what brings them to it, what keeps them in it?","abstract":"Career stereotyping and misperceptions about the nature of computing are substantive reasons for the under representation of women in professional computing careers. In this study, 15 women who have work experience in several aspects of computing were asked about their reasons for entering computing, what they liked about working in computing, and what they disliked. While there are many common threads, there are also individual differences. Common reasons for choosing computing as a career included: exposure to computing in a setting which enabled them to see the versatility of computers; the influence of someone close to them; personal abilities which they perceived to be appropriate for a career in computing; and characteristics of such careers which appealed to them. Generally, women working in the field enjoy the work they are doing. Dislikes arising from their work experiences are more likely to be associated with people and politics than with the work they do-and they would like to have more female colleagues","tok_text":"women in comput : what bring them to it , what keep them in it ? \n career stereotyp and mispercept about the natur of comput are substant reason for the under represent of women in profession comput career . in thi studi , 15 women who have work experi in sever aspect of comput were ask about their reason for enter comput , what they like about work in comput , and what they dislik . while there are mani common thread , there are also individu differ . 
common reason for choos comput as a career includ : exposur to comput in a set which enabl them to see the versatil of comput ; the influenc of someon close to them ; person abil which they perceiv to be appropri for a career in comput ; and characterist of such career which appeal to them . gener , women work in the field enjoy the work they are do . dislik aris from their work experi are more like to be associ with peopl and polit than with the work they do-and they would like to have more femal colleagu","ordered_present_kp":[67,624,888,88,0,181],"keyphrases":["women","career stereotyping","misperceptions","professional computing careers","personal abilities","politics"],"prmu":["P","P","P","P","P","P"]} {"id":"804","title":"Voltage control methods with grid connected wind turbines: a tutorial review","abstract":"Within electricity grid networks it is conventional for large-scale central generators to both provide power and control grid node voltage. Therefore when wind turbines replace conventional power stations on a substantial scale, they must not only generate power, but also control grid node voltages. This paper reviews the basic principles of voltage control for tutorial benefit and then considers application of grid-connected wind turbines for voltage control. The most widely used contemporary wind turbine types are considered and further detail is given for determining the range of variables that allow control","tok_text":"voltag control method with grid connect wind turbin : a tutori review \n within electr grid network it is convent for large-scal central gener to both provid power and control grid node voltag . therefor when wind turbin replac convent power station on a substanti scale , they must not onli gener power , but also control grid node voltag . thi paper review the basic principl of voltag control for tutori benefit and then consid applic of grid-connect wind turbin for voltag control . 
the most wide use contemporari wind turbin type are consid and further detail is given for determin the rang of variabl that allow control","ordered_present_kp":[79,117,27,0],"keyphrases":["voltage control","grid connected wind turbines","electricity grid networks","large-scale central generators","grid node voltages control","reactive power","direct drive","variable speed","offshore wind park","squirrel cage induction generator","doubly fed induction generator","direct drive synchronous generator","weak grid","converter rating"],"prmu":["P","P","P","P","R","M","U","M","M","M","M","M","M","U"]} {"id":"841","title":"Becoming a computer scientist","abstract":"The focus of this report is pipeline shrinkage for women in computer science. We describe the situation for women at all stages of training in computer science, from the precollege level through graduate school. Because many of the problems discussed are related to the lack of role models for women who are in the process of becoming computer scientists, we also concern ourselves with the status of women faculty members. We not only describe the problems, but also make specific recommendations for change and encourage further study of those problems whose solutions are not yet well understood","tok_text":"becom a comput scientist \n the focu of thi report is pipelin shrinkag for women in comput scienc . we describ the situat for women at all stage of train in comput scienc , from the precolleg level through graduat school . becaus mani of the problem discuss are relat to the lack of role model for women who are in the process of becom comput scientist , we also concern ourselv with the statu of women faculti member . 
we not onli describ the problem , but also make specif recommend for chang and encourag further studi of those problem whose solut are not yet well understood","ordered_present_kp":[53,74,83,282,396],"keyphrases":["pipeline shrinkage","women","computer science","role models","women faculty members"],"prmu":["P","P","P","P","P"]} {"id":"1086","title":"Some recent advances in validated methods for IVPs for ODEs","abstract":"Compared to standard numerical methods for initial value problems (IVPs) for ordinary differential equations (ODEs), validated methods (often called interval methods) for IVPs for ODEs have two important advantages: if they return a solution to a problem, then (1) the problem is guaranteed to have a unique solution, and (2) an enclosure of the true solution is produced. We present a brief overview of interval Taylor series (ITS) methods for IVPs for ODEs and discuss some recent advances in the theory of validated methods for IVPs for ODEs. In particular, we discuss an interval Hermite-Obreschkoff (IHO) scheme for computing rigorous bounds on the solution of an IVP for an ODE, the stability of ITS and IHO methods, and a new perspective on the wrapping effect, where we interpret the problem of reducing the wrapping effect as one of finding a more stable scheme for advancing the solution","tok_text":"some recent advanc in valid method for ivp for ode \n compar to standard numer method for initi valu problem ( ivp ) for ordinari differenti equat ( ode ) , valid method ( often call interv method ) for ivp for ode have two import advantag : if they return a solut to a problem , then ( 1 ) the problem is guarante to have a uniqu solut , and ( 2 ) an enclosur of the true solut is produc . we present a brief overview of interv taylor seri ( it ) method for ivp for ode and discuss some recent advanc in the theori of valid method for ivp for ode . 
in particular , we discuss an interv hermite-obreschkoff ( iho ) scheme for comput rigor bound on the solut of an ivp for an ode , the stabil of it and iho method , and a new perspect on the wrap effect , where we interpret the problem of reduc the wrap effect as one of find a more stabl scheme for advanc the solut","ordered_present_kp":[22,89,120,182,421,740],"keyphrases":["validated methods","initial value problems","ordinary differential equations","interval methods","interval Taylor series","wrapping effect","interval Hermite-Obreschkoff scheme","QR algorithm"],"prmu":["P","P","P","P","P","P","R","U"]} {"id":"1457","title":"A discontinuous Galerkin method for transient analysis of wave propagation in unbounded domains","abstract":"A technique based on the discontinuous Galerkin finite element method is developed and applied to the derivation of an absorbing boundary condition for the analysis of transient wave propagation. The condition is exact in that only discretization error is involved. Furthermore, the computational cost associated with use of the condition is an order of magnitude lower than for conditions based on Green functions. The time-stepping scheme resulting from an implicit method in conjunction with this boundary condition appears to be unconditionally stable","tok_text":"a discontinu galerkin method for transient analysi of wave propag in unbound domain \n a techniqu base on the discontinu galerkin finit element method is develop and appli to the deriv of an absorb boundari condit for the analysi of transient wave propag . the condit is exact in that onli discret error is involv . furthermor , the comput cost associ with use of the condit is an order of magnitud lower than for condit base on green function . 
the time-step scheme result from an implicit method in conjunct with thi boundari condit appear to be uncondit stabl","ordered_present_kp":[109,33,232,190,69,289,332,449,481],"keyphrases":["transient analysis","unbounded domains","discontinuous Galerkin finite element method","absorbing boundary condition","transient wave propagation","discretization error","computational cost","time-stepping scheme","implicit method","unconditional stability"],"prmu":["P","P","P","P","P","P","P","P","P","M"]} {"id":"1412","title":"Arbortext: enabler of multichannel publishing","abstract":"A company has a document-say, dosage instructions for a prescription drug or a troubleshooting sheet for a DVD drive. That document starts its life in a predictable format, probably Microsoft Word or WordPerfect, but then-to meet the needs of readers who nowadays demand access via multiple devices-the material has to be translated into many more formats: HTML, PageMaker, or Quark, possibly RTF, almost certainly PDF, and nowadays, next-generation devices (cell phones, handheld computers) also impose their own requirements. And what if, suddenly, the dosage levels change or new workarounds emerge to handle DVD problems? That's when a company should put in a call to Arbortext, a 20-year-old Ann Arbor, Michigan-based company that exists to solve a single problem: helping clients automate multichannel publishing","tok_text":"arbortext : enabl of multichannel publish \n a compani ha a document-say , dosag instruct for a prescript drug or a troubleshoot sheet for a dvd drive . that document start it life in a predict format , probabl microsoft word or wordperfect , but then-to meet the need of reader who nowaday demand access via multipl devices-th materi ha to be translat into mani more format : html , pagemak , or quark , possibl rtf , almost certainli pdf , and nowaday , next-gener devic ( cell phone , handheld comput ) also impos their own requir . 
and what if , suddenli , the dosag level chang or new workaround emerg to handl dvd problem ? that 's when a compani should put in a call to arbortext , a 20-year-old ann arbor , michigan-bas compani that exist to solv a singl problem : help client autom multichannel publish","ordered_present_kp":[455,0,21],"keyphrases":["Arbortext","multichannel publishing","next-generation devices","document format","content assets"],"prmu":["P","P","P","R","U"]} {"id":"797","title":"Adaptive wavelet methods. II. Beyond the elliptic case","abstract":"This paper is concerned with the design and analysis of adaptive wavelet methods for systems of operator equations. Its main accomplishment is to extend the range of applicability of the adaptive wavelet-based method developed previously for symmetric positive definite problems to indefinite or unsymmetric systems of operator equations. This is accomplished by first introducing techniques (such as the least squares formulation developed previously) that transform the original (continuous) problem into an equivalent infinite system of equations which is now well-posed in the Euclidean metric. It is then shown how to utilize adaptive techniques to solve the resulting infinite system of equations. It is shown that for a wide range of problems, this new adaptive method performs with asymptotically optimal complexity, i.e., it recovers an approximate solution with desired accuracy at a computational expense that stays proportional to the number of terms in a corresponding wavelet-best N-term approximation. An important advantage of this adaptive approach is that it automatically stabilizes the numerical procedure so that, for instance, compatibility constraints on the choice of trial spaces, like the LBB condition, no longer arise","tok_text":"adapt wavelet method . ii . beyond the ellipt case \n thi paper is concern with the design and analysi of adapt wavelet method for system of oper equat . 
it main accomplish is to extend the rang of applic of the adapt wavelet-bas method develop previous for symmetr posit definit problem to indefinit or unsymmetr system of oper equat . thi is accomplish by first introduc techniqu ( such as the least squar formul develop previous ) that transform the origin ( continu ) problem into an equival infinit system of equat which is now well-pos in the euclidean metric . it is then shown how to util adapt techniqu to solv the result infinit system of equat . it is shown that for a wide rang of problem , thi new adapt method perform with asymptot optim complex , i.e. , it recov an approxim solut with desir accuraci at a comput expens that stay proport to the number of term in a correspond wavelet-best n-term approxim . an import advantag of thi adapt approach is that it automat stabil the numer procedur so that , for instanc , compat constraint on the choic of trial space , like the lbb condit , no longer aris","ordered_present_kp":[0,39,140,395,548,736,903],"keyphrases":["adaptive wavelet methods","elliptic case","operator equations","least squares formulation","Euclidean metric","asymptotically optimal complexity","N-term approximation"],"prmu":["P","P","P","P","P","P","P"]} {"id":"576","title":"Application of Sugeno fuzzy-logic controller to the stator field-oriented doubly-fed asynchronous motor drive","abstract":"This study deals with the application of the fuzzy-control theory to wound-rotor asynchronous motor with both its stator and rotor fed by two PWM voltage-source inverters, in which the system operates in stator field-oriented control. Thus, after determining the model of the machine, we present two types of fuzzy controller: Mamdani and Sugeno controllers. The training of the last one is carried out starting from the first. 
Simulation study is conducted to show the effectiveness of the proposed method","tok_text":"applic of sugeno fuzzy-log control to the stator field-ori doubly-f asynchron motor drive \n thi studi deal with the applic of the fuzzy-control theori to wound-rotor asynchron motor with both it stator and rotor fed by two pwm voltage-sourc invert , in which the system oper in stator field-ori control . thu , after determin the model of the machin , we present two type of fuzzi control : mamdani and sugeno control . the train of the last one is carri out start from the first . simul studi is conduct to show the effect of the propos method","ordered_present_kp":[10,42,130,154,223,278,424],"keyphrases":["Sugeno fuzzy-logic controller","stator field-oriented doubly-fed asynchronous motor drive","fuzzy-control","wound-rotor asynchronous motor","PWM voltage-source inverters","stator field-oriented control","training","machine modelling","Mamdani controller","speed regulation"],"prmu":["P","P","P","P","P","P","P","R","R","U"]} {"id":"1123","title":"A transactional asynchronous replication scheme for mobile database systems","abstract":"In mobile database systems, mobility of users has a significant impact on data replication. As a result, the various replica control protocols that exist today in traditional distributed and multidatabase environments are no longer suitable. To solve this problem, a new mobile database replication scheme, the Transaction-Level Result-Set Propagation (TLRSP) model, is put forward in this paper. The conflict detection and resolution strategy based on TLRSP is discussed in detail, and the implementation algorithm is proposed. In order to compare the performance of the TLRSP model with that of other mobile replication schemes, we have developed a detailed simulation model. 
Experimental results show that the TLRSP model provides an efficient support for replicated mobile database systems by reducing reprocessing overhead and maintaining database consistency","tok_text":"a transact asynchron replic scheme for mobil databas system \n in mobil databas system , mobil of user ha a signific impact on data replic . as a result , the variou replica control protocol that exist today in tradit distribut and multidatabas environ are no longer suitabl . to solv thi problem , a new mobil databas replic scheme , the transaction-level result-set propag ( tlrsp ) model , is put forward in thi paper . the conflict detect and resolut strategi base on tlrsp is discuss in detail , and the implement algorithm is propos . in order to compar the perform of the tlrsp model with that of other mobil replic scheme , we have develop a detail simul model . experiment result show that the tlrsp model provid an effici support for replic mobil databas system by reduc reprocess overhead and maintain databas consist","ordered_present_kp":[39,126,231,304,338,2],"keyphrases":["transaction","mobile database","data replication","multidatabase","mobile database replication","Transaction-Level Result-Set Propagation","distributed database","mobile computing","conflict reconciliation"],"prmu":["P","P","P","P","P","P","R","M","M"]} {"id":"1166","title":"Embedding the outer automorphism group Out(F\/sub n\/) of a free group of rank n in the group Out(F\/sub m\/) for m > n","abstract":"It is proved that for every n >or= 1, the group Out(F\/sub n\/) is embedded in the group Out(F\/sub m\/) with m = 1 + (n - 1)k\/sup n\/, where k is an arbitrary natural number coprime to n - 1","tok_text":"embed the outer automorph group out(f \/ sub n\/ ) of a free group of rank n in the group out(f \/ sub m\/ ) for m > n \n it is prove that for everi n > or= 1 , the group out(f \/ sub n\/ ) is embed in the group out(f \/ sub m\/ ) with m = 1 + ( n - 1)k \/ sup n\/ , where k is an arbitrari 
natur number coprim to n - 1","ordered_present_kp":[54,270],"keyphrases":["free group","arbitrary natural number coprime","outer automorphism group embedding"],"prmu":["P","P","R"]} {"id":"632","title":"Modelling dependencies in paired comparison data a log-linear approach","abstract":"In many Bradley-Terry models a more or less explicit assumption is that all decisions of the judges are independent. An assumption which might be questionable at least for the decisions of a given judge. In paired comparison studies, a judge chooses among objects several times, and in such cases, judgements made by the same judge are likely to be dependent. A log-linear representation for the Bradley-Terry model is developed, which takes into account dependencies between judgements. The modelling of the dependencies is embedded in the analysis of multiple binomial responses, which has the advantage of interpretability in terms of conditional odds ratios. Furthermore, the modelling is done in the framework of generalized linear models, thus parameter estimation and the assessment of goodness of fit can be obtained in the standard way by using e.g. GLIM or another standard software","tok_text":"model depend in pair comparison data a log-linear approach \n in mani bradley-terri model a more or less explicit assumpt is that all decis of the judg are independ . an assumpt which might be question at least for the decis of a given judg . in pair comparison studi , a judg choos among object sever time , and in such case , judgement made by the same judg are like to be depend . a log-linear represent for the bradley-terri model is develop , which take into account depend between judgement . the model of the depend is embed in the analysi of multipl binomi respons , which ha the advantag of interpret in term of condit odd ratio . furthermor , the model is done in the framework of gener linear model , thu paramet estim and the assess of good of fit can be obtain in the standard way by use e.g. 
glim or anoth standard softwar","ordered_present_kp":[39,69,549,620,690,715,747,805],"keyphrases":["log-linear approach","Bradley-Terry model","multiple binomial responses","conditional odds ratios","generalized linear models","parameter estimation","goodness of fit","GLIM","paired comparison data dependency modelling","judge decisions"],"prmu":["P","P","P","P","P","P","P","P","R","R"]} {"id":"677","title":"Acts to facts catalogue","abstract":"The paper shows a way to satisfy users' changing and specific information needs by providing the modified format-author-collaborators-title-series-subject (FACTS). catalogue instead of the traditional author-collaborator-title-series-subjects (ACTS) catalogue","tok_text":"act to fact catalogu \n the paper show a way to satisfi user ' chang and specif inform need by provid the modifi format-author-collaborators-title-series-subject ( fact ) . catalogu instead of the tradit author-collaborator-title-series-subject ( act ) catalogu","ordered_present_kp":[79],"keyphrases":["information needs","format-author-collaborators-title-series-subject catalogue","author-collaborator-title-series-subjects catalogue"],"prmu":["P","R","R"]} {"id":"1222","title":"Mining the optimal class association rule set","abstract":"We define an optimal class association rule set to be the minimum rule set with the same predictive power of the complete class association rule set. Using this rule set instead of the complete class association rule set we can avoid redundant computation that would otherwise be required for mining predictive association rules and hence improve the efficiency of the mining process significantly. We present an efficient algorithm for mining the optimal class association rule set using an upward closure property of pruning weak rules before they are actually generated. 
We have implemented the algorithm and our experimental results show that our algorithm generates the optimal class association rule set, whose size is smaller than 1\/17 of the complete class association rule set on average, in significantly less time than generating the complete class association rule set. Our proposed criterion has been shown very effective for pruning weak rules in dense databases","tok_text":"mine the optim class associ rule set \n we defin an optim class associ rule set to be the minimum rule set with the same predict power of the complet class associ rule set . use thi rule set instead of the complet class associ rule set we can avoid redund comput that would otherwis be requir for mine predict associ rule and henc improv the effici of the mine process significantli . we present an effici algorithm for mine the optim class associ rule set use an upward closur properti of prune weak rule befor they are actual gener . we have implement the algorithm and our experiment result show that our algorithm gener the optim class associ rule set , whose size is smaller than 1\/17 of the complet class associ rule set on averag , in significantli less time than gener the complet class associ rule set . 
our propos criterion ha been shown veri effect for prune weak rule in dens databas","ordered_present_kp":[89,120,248,301,463,575,882],"keyphrases":["minimum rule set","predictive power","redundant computation","predictive association rules","upward closure property","experimental results","dense databases","optimal class association rule set mining","relational database","data mining","weak rule pruning"],"prmu":["P","P","P","P","P","P","P","R","M","M","R"]} {"id":"1267","title":"3D reconstruction from uncalibrated-camera optical flow and its reliability evaluation","abstract":"We present a scheme for reconstructing a 3D structure from optical flow observed by a camera with an unknown focal length in a statistically optimal way as well as evaluating the reliability of the computed shape. First, the flow fundamental matrices are optimally computed from the observed flow. They are then decomposed into the focal length, its rate of change, and the motion parameters. Next, the flow is optimally corrected so that it satisfies the epipolar equation exactly. Finally, the 3D positions are computed, and their covariance matrices are evaluated. By simulations and real-image experiments, we test the performance of our system and observe how the normalization (gauge) for removing indeterminacy affects the description of uncertainty","tok_text":"3d reconstruct from uncalibrated-camera optic flow and it reliabl evalu \n we present a scheme for reconstruct a 3d structur from optic flow observ by a camera with an unknown focal length in a statist optim way as well as evalu the reliabl of the comput shape . first , the flow fundament matric are optim comput from the observ flow . they are then decompos into the focal length , it rate of chang , and the motion paramet . next , the flow is optim correct so that it satisfi the epipolar equat exactli . final , the 3d posit are comput , and their covari matric are evalu . 
by simul and real-imag experi , we test the perform of our system and observ how the normal ( gaug ) for remov indeterminaci affect the descript of uncertainti","ordered_present_kp":[0,20,58,274,410,483,552,591,663],"keyphrases":["3D reconstruction","uncalibrated-camera optical flow","reliability evaluation","flow fundamental matrices","motion parameters","epipolar equation","covariance matrices","real-image experiments","normalization"],"prmu":["P","P","P","P","P","P","P","P","P"]} {"id":"919","title":"Agents in e-commerce: state of the art","abstract":"This paper surveys the state of the art of agent-mediated electronic commerce (e-commerce), especially in business-to-consumer (B2C) e-commerce and business-to-business (B2B) e-commerce. From the consumer buying behaviour perspective, the roles of agents in B2C e-commerce are: product brokering, merchant brokering, and negotiation. The applications of agents in B2B e-commerce are mainly in supply chain management. Mobile agents, evolutionary agents, and data-mining agents are some special techniques which can be applied in agent-mediated e-commerce. In addition, some technologies for implementation are briefly reviewed. Finally, we conclude this paper by discussions on the future directions of agent-mediated e-commerce","tok_text":"agent in e-commerc : state of the art \n thi paper survey the state of the art of agent-medi electron commerc ( e-commerc ) , especi in business-to-consum ( b2c ) e-commerc and business-to-busi ( b2b ) e-commerc . from the consum buy behaviour perspect , the role of agent in b2c e-commerc are : product broker , merchant broker , and negoti . the applic of agent in b2b e-commerc are mainli in suppli chain manag . mobil agent , evolutionari agent , and data-min agent are some special techniqu which can be appli in agent-medi e-commerc . in addit , some technolog for implement are briefli review . 
final , we conclud thi paper by discuss on the futur direct of agent-medi e-commerc","ordered_present_kp":[21,81,222,295,312,334,394,415,429,454],"keyphrases":["state of the art","agent-mediated electronic commerce","consumer buying behaviour","product brokering","merchant brokering","negotiation","supply chain management","mobile agents","evolutionary agents","data-mining agents","business-to-consumer e-commerce","multi-agent systems","business-to-business e-commerce"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","U","R"]} {"id":"795","title":"Approximation and complexity. II. Iterated integration","abstract":"For pt. I. see ibid., no. 1, p. 289-95 (2001). We introduce two classes of real analytic functions W contained in\/implied by U on an interval. Starting with rational functions to construct functions in W we allow the application of three types of operations: addition, integration, and multiplication by a polynomial with rational coefficients. In a similar way, to construct functions in U we allow integration, addition, and multiplication of functions already constructed in U and multiplication by rational numbers. Thus, U is a subring of the ring of Pfaffian functions. Two lower bounds on the L\/sub infinity \/-norm are proved on a function f from W (or from U, respectively) in terms of the complexity of constructing f","tok_text":"approxim and complex . ii . iter integr \n for pt . i. see ibid . , no . 1 , p. 289 - 95 ( 2001 ) . we introduc two class of real analyt function w contain in \/ impli by u on an interv . start with ration function to construct function in w we allow the applic of three type of oper : addit , integr , and multipl by a polynomi with ration coeffici . in a similar way , to construct function in u we allow integr , addit , and multipl of function alreadi construct in u and multipl by ration number . thu , u is a subr of the ring of pfaffian function . 
two lower bound on the l \/ sub infin \/-norm are prove on a function f from w ( or from u , respect ) in term of the complex of construct f","ordered_present_kp":[124,197,284,33,305,318,533,557,576],"keyphrases":["integration","real analytic functions","rational functions","addition","multiplication","polynomial","Pfaffian functions","lower bounds","L\/sub infinity \/-norm"],"prmu":["P","P","P","P","P","P","P","P","P"]} {"id":"1385","title":"Cache invalidation and replacement strategies for location-dependent data in mobile environments","abstract":"Mobile location-dependent information services (LDISs) have become increasingly popular in recent years. However, data caching strategies for LDISs have thus far received little attention. In this paper, we study the issues of cache invalidation and cache replacement for location-dependent data under a geometric location model. We introduce a new performance criterion, called caching efficiency, and propose a generic method for location-dependent cache invalidation strategies. In addition, two cache replacement policies, PA and PAID, are proposed. Unlike the conventional replacement policies, PA and PAID take into consideration the valid scope area of a data value. We conduct a series of simulation experiments to study the performance of the proposed caching schemes. The experimental results show that the proposed location-dependent invalidation scheme is very effective and the PA and PAID policies significantly outperform the conventional replacement policies","tok_text":"cach invalid and replac strategi for location-depend data in mobil environ \n mobil location-depend inform servic ( ldiss ) have becom increasingli popular in recent year . howev , data cach strategi for ldiss have thu far receiv littl attent . in thi paper , we studi the issu of cach invalid and cach replac for location-depend data under a geometr locat model . 
we introduc a new perform criterion , call cach effici , and propos a gener method for location-depend cach invalid strategi . in addit , two cach replac polici , pa and paid , are propos . unlik the convent replac polici , pa and paid take into consider the valid scope area of a data valu . we conduct a seri of simul experi to studi the perform of the propos cach scheme . the experiment result show that the propos location-depend invalid scheme is veri effect and the pa and paid polici significantli outperform the convent replac polici","ordered_present_kp":[83,297,0,77,180],"keyphrases":["cache invalidation","mobile location-dependent information services","location-dependent information","data caching","cache replacement","mobile computing","semantic caching","performance evaluation"],"prmu":["P","P","P","P","P","M","M","M"]} {"id":"1079","title":"A novel robot hand with embedded shape memory alloy actuators","abstract":"Describes the development of an active robot hand, which allows smooth and lifelike motions for anthropomorphic grasping and fine manipulations. An active robot finger 10 mm in outer diameter with a shape memory alloy (SMA) wire actuator embedded in the finger with a constant distance from the geometric centre of the finger was designed and fabricated. The practical specifications of the SMA wire and the flexible rod were determined on the basis of a series of formulae. The active finger consists of two bending parts, the SMA actuators and a connecting part. The mechanical properties of the bending part are investigated. The control system on the basis of resistance feedback is also presented. Finally, a robot hand with three fingers was designed and the grasping experiment was carried out to demonstrate its performance","tok_text":"a novel robot hand with embed shape memori alloy actuat \n describ the develop of an activ robot hand , which allow smooth and lifelik motion for anthropomorph grasp and fine manipul . 
an activ robot finger 10 mm in outer diamet with a shape memori alloy ( sma ) wire actuat embed in the finger with a constant distanc from the geometr centr of the finger wa design and fabric . the practic specif of the sma wire and the flexibl rod were determin on the basi of a seri of formula . the activ finger consist of two bend part , the sma actuat and a connect part . the mechan properti of the bend part are investig . the control system on the basi of resist feedback is also present . final , a robot hand with three finger wa design and the grasp experi wa carri out to demonstr it perform","ordered_present_kp":[24,126,145,169,486,648,421,84],"keyphrases":["embedded shape memory alloy actuators","active robot hand","lifelike motions","anthropomorphic grasping","fine manipulations","flexible rod","active finger","resistance feedback"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"806","title":"Flow measurement - future directions","abstract":"Interest in the flow of liquids and its measurement can be traced back to early studies by the Egyptians, the Chinese and the Romans. Since these early times the science of flow measurement has undergone a massive change but during the last 25 years or so (1977-2002) it has matured enormously. One of the principal reasons for this is that higher accuracies and reliabilities have been demanded by industry in the measurement of fiscal transfers and today there is vigorous interest in the subject from both the flowmeter manufacturer and user viewpoints. This interest is coupled with the development of advanced computer techniques in fluid mechanics together with the application of increasingly sophisticated electronics","tok_text":"flow measur - futur direct \n interest in the flow of liquid and it measur can be trace back to earli studi by the egyptian , the chines and the roman . 
sinc these earli time the scienc of flow measur ha undergon a massiv chang but dure the last 25 year or so ( 1977 - 2002 ) it ha matur enorm . one of the princip reason for thi is that higher accuraci and reliabl have been demand by industri in the measur of fiscal transfer and today there is vigor interest in the subject from both the flowmet manufactur and user viewpoint . thi interest is coupl with the develop of advanc comput techniqu in fluid mechan togeth with the applic of increasingli sophist electron","ordered_present_kp":[0,114,129,144,411,490,572,598],"keyphrases":["flow measurement","Egyptians","Chinese","Romans","fiscal transfers","flowmeter manufacturer","advanced computer techniques","fluid mechanics","flow metering","signal processing","liquid flow","electronics application"],"prmu":["P","P","P","P","P","P","P","P","M","U","R","R"]} {"id":"843","title":"An ACM-W literature review on women in computing","abstract":"The pipeline shrinkage problem for women in computer science is a well-known and documented phenomenon where the ratio of women to men involved in computing shrinks dramatically from early student years to working years. During the last decade, considerable research ensued to understand the reasons behind the existence of the shrinking pipeline and in some cases to take action to increase the numbers of women in computing. Through the work of a National Science Foundation funded project, ACM's Committee on Women in Computing (ACM-W) has taken a first step towards pulling this research together. A large number of articles was gathered and processed on the topic of women in computing and the shrinking pipeline. The committee created a publicly available online database to organize the references of this body of work by topic, author, and reference information. The database, constantly being updated, is accessible through ACM-W's website . 
A final report is also available via the ACM-W Web site which covers current statistics on women in computing, summaries of the literature in the database, and a set of recommendations. The article is a brief synopsis of a subset of the literature review as of August 2001","tok_text":"an acm-w literatur review on women in comput \n the pipelin shrinkag problem for women in comput scienc is a well-known and document phenomenon where the ratio of women to men involv in comput shrink dramat from earli student year to work year . dure the last decad , consider research ensu to understand the reason behind the exist of the shrink pipelin and in some case to take action to increas the number of women in comput . through the work of a nation scienc foundat fund project , acm 's committe on women in comput ( acm-w ) ha taken a first step toward pull thi research togeth . a larg number of articl wa gather and process on the topic of women in comput and the shrink pipelin . the committe creat a publicli avail onlin databas to organ the refer of thi bodi of work by topic , author , and refer inform . the databas , constantli be updat , is access through acm-w 's websit < http:\/\/www.acm.org\/women > . a final report is also avail via the acm-w web site which cover current statist on women in comput , summari of the literatur in the databas , and a set of recommend . the articl is a brief synopsi of a subset of the literatur review as of august 2001","ordered_present_kp":[3,51],"keyphrases":["ACM-W literature review","pipeline shrinkage problem","ACM Committee on Women in Computing"],"prmu":["P","P","R"]} {"id":"1084","title":"On quasi-linear PDAEs with convection: applications, indices, numerical solution","abstract":"For a class of partial differential algebraic equations (PDAEs) of quasi-linear type which include nonlinear terms of convection type, a possibility to determine a time and spatial index is considered. 
As a typical example we investigate an application from plasma physics. Especially we discuss the numerical solution of initial boundary value problems by means of a corresponding finite difference splitting procedure which is a modification of a well-known fractional step method coupled with a matrix factorization. The convergence of the numerical solution towards the exact solution of the corresponding initial boundary value problem is investigated. Some results of a numerical solution of the plasma PDAE are given","tok_text":"on quasi-linear pdae with convect : applic , indic , numer solut \n for a class of partial differenti algebra equat ( pdae ) of quasi-linear type which includ nonlinear term of convect type , a possibl to determin a time and spatial index is consid . as a typic exampl we investig an applic from plasma physic . especi we discuss the numer solut of initi boundari valu problem by mean of a correspond finit differ split procedur which is a modif of a well-known fraction step method coupl with a matrix factor . the converg of the numer solut toward the exact solut of the correspond initi boundari valu problem is investig . some result of a numer solut of the plasma pdae are given","ordered_present_kp":[224,295,348,400,461,495,26,45,53],"keyphrases":["convection","indices","numerical solution","spatial index","plasma physics","initial boundary value problems","finite difference splitting procedure","fractional step method","matrix factorization","quasi-linear partial differential algebraic equations"],"prmu":["P","P","P","P","P","P","P","P","P","R"]} {"id":"1455","title":"A wizard idea [Internet in finance]","abstract":"New technology is set to become an ever-more important area of work for brokers. Lawrie Holmes looks at how the Internet is driving change and opportunity","tok_text":"a wizard idea [ internet in financ ] \n new technolog is set to becom an ever-mor import area of work for broker . 
lawri holm look at how the internet is drive chang and opportun","ordered_present_kp":[105,16,28],"keyphrases":["Internet","finance","brokers"],"prmu":["P","P","P"]} {"id":"1410","title":"WAM!Net: private pipes for electronic media","abstract":"\"We are the digital version of FedEx. We offer storage and intelligent workflow.\" The United States military - especially during war time - is pretty careful about the way it handles its workflow and communications. Before a company is awarded a government contract, the company and its technology are screened and verified. If the technology or its creators aren't trustworthy and secure, chances are they aren't getting by Uncle Sam. Record companies and publishing houses tend to feel the same way. After all, security is just as important to a record executive as it is to a Navy commander. WAM!Net, a Wide-Area Media network (hence, the name) passes muster with both. The company, which employs about 320 employees around the world, has 15000 customers including the US Navy and a host of record labels, publishing companies, healthcare providers, and advertising agencies, all of whom use its network as a way to transport, store, and receive data. \"We are the digital version of FedEx. We offer storage and intelligent workflow,\" says Murad Velani, executive vice president of sales and marketing for WAM!Net. \"We started out as purely transport and we've become a digital platform.\"","tok_text":"wam!net : privat pipe for electron media \n \" we are the digit version of fedex . we offer storag and intellig workflow . \" the unit state militari - especi dure war time - is pretti care about the way it handl it workflow and commun . befor a compani is award a govern contract , the compani and it technolog are screen and verifi . if the technolog or it creator are n't trustworthi and secur , chanc are they are n't get by uncl sam . record compani and publish hous tend to feel the same way . 
after all , secur is just as import to a record execut as it is to a navi command . wam!net , a wide-area media network ( henc , the name ) pass muster with both . the compani , which employ about 320 employe around the world , ha 15000 custom includ the us navi and a host of record label , publish compani , healthcar provid , and advertis agenc , all of whom use it network as a way to transport , store , and receiv data . \" we are the digit version of fedex . we offer storag and intellig workflow , \" say murad velani , execut vice presid of sale and market for wam!net . \" we start out as pure transport and we 've becom a digit platform . \"","ordered_present_kp":[127,593,774,789,807,830,101,0,26,1127],"keyphrases":["WAM!Net","electronic media","intelligent workflow","United States military","Wide-Area Media network","record labels","publishing companies","healthcare providers","advertising agencies","digital platform","U.S. Navy","content creators","high-speed private network","ATM technology","content information","publishing information","client-server format","ASP format"],"prmu":["P","P","P","P","P","P","P","P","P","P","M","M","M","M","U","M","U","U"]} {"id":"1378","title":"Development of an Internet-based intelligent design support system for rolling element bearings","abstract":"This paper presents a novel approach to developing an intelligent agile design system for rolling bearings based on artificial intelligence (AI), Internet and Web technologies and expertise. The underlying philosophy of the approach is to use AI technology and Web-based design support systems as smart tools from which design customers can rapidly and responsively access the systems' built-in design expertise. The approach is described in detail with a novel AI model and system implementation issues. 
The major issues in implementing the approach are discussed with particular reference to using AI technologies, network programming, client-server technology and open computing of bearing design and manufacturing requirements","tok_text":"develop of an internet-bas intellig design support system for roll element bear \n thi paper present a novel approach to develop an intellig agil design system for roll bear base on artifici intellig ( ai ) , internet and web technolog and expertis . the underli philosophi of the approach is to use ai technolog and web-bas design support system as smart tool from which design custom can rapidli and respons access the system ' built-in design expertis . the approach is describ in detail with a novel ai model and system implement issu . the major issu in implement the approach are discuss with particular refer to use ai technolog , network program , client-serv technolog and open comput of bear design and manufactur requir","ordered_present_kp":[14,62,131,181,221,349,637,655,712,696],"keyphrases":["Internet-based intelligent design support system","rolling element bearings","intelligent agile design system","artificial intelligence","Web technologies","smart tools","network programming","client-server technology","bearing design","manufacturing requirements","Internet technologies"],"prmu":["P","P","P","P","P","P","P","P","P","P","R"]} {"id":"768","title":"Critical lines identification on voltage collapse analysis","abstract":"This paper deals with critical lines identification on voltage collapse analysis. It is known, from the literature, that voltage collapse is a local phenomenon that spreads around an initial neighborhood Therefore, identifying the system critical bus plays an important role on voltage collapse prevention. For this purpose, the system critical transmission lines should also be identified In this paper, these issues are addressed, yielding reliable results in a short computational time. 
Tests are done with the help of the IEEE-118 bus and the Southeastern Brazilian systems","tok_text":"critic line identif on voltag collaps analysi \n thi paper deal with critic line identif on voltag collaps analysi . it is known , from the literatur , that voltag collaps is a local phenomenon that spread around an initi neighborhood therefor , identifi the system critic bu play an import role on voltag collaps prevent . for thi purpos , the system critic transmiss line should also be identifi in thi paper , these issu are address , yield reliabl result in a short comput time . test are done with the help of the ieee-118 bu and the southeastern brazilian system","ordered_present_kp":[176,518],"keyphrases":["local phenomenon","IEEE-118 bus","power system voltage collapse analysis","critical transmission lines identification","system critical bus identification","computer simulation","Brazil"],"prmu":["P","P","M","R","R","M","U"]} {"id":"1199","title":"Quasi stage order conditions for SDIRK methods","abstract":"The stage order condition is a simplifying assumption that reduces the number of order conditions to be fulfilled when designing a Runge-Kutta (RK) method. Because a DIRK (diagonally implicit RK) method cannot have stage order greater than 1, we introduce quasi stage order conditions and derive some of their properties for DIRKs. We use these conditions to derive a low-order DIRK method with embedded error estimator. Numerical tests with stiff ODEs and DAEs of index 1 and 2 indicate that the method is competitive with other RK methods for low accuracy tolerances","tok_text":"quasi stage order condit for sdirk method \n the stage order condit is a simplifi assumpt that reduc the number of order condit to be fulfil when design a runge-kutta ( rk ) method . becaus a dirk ( diagon implicit rk ) method can not have stage order greater than 1 , we introduc quasi stage order condit and deriv some of their properti for dirk . 
we use these condit to deriv a low-ord dirk method with embed error estim . numer test with stiff ode and dae of index 1 and 2 indic that the method is competit with other rk method for low accuraci toler","ordered_present_kp":[0,405,425,29],"keyphrases":["quasi stage order conditions","SDIRK methods","embedded error estimator","numerical tests","diagonally implicit Runge-Kutta method","differential-algebraic systems"],"prmu":["P","P","P","P","R","U"]} {"id":"589","title":"Hierarchical neuro-fuzzy quadtree models","abstract":"Hybrid neuro-fuzzy systems have been in evidence during the past few years, due to its attractive combination of the learning capacity of artificial neural networks with the interpretability of the fuzzy systems. This article proposes a new hybrid neuro-fuzzy model, named hierarchical neuro-fuzzy quadtree (HNFQ), which is based on a recursive partitioning method of the input space named quadtree. The article describes the architecture of this new model, presenting its basic cell and its learning algorithm. The HNFQ system is evaluated in three well known benchmark applications: the sinc(x, y) function approximation, the Mackey Glass chaotic series forecast and the two spirals problem. When compared to other neuro-fuzzy systems, the HNFQ exhibits competing results, with two major advantages it automatically creates its own structure and it is not limited to few input variables","tok_text":"hierarch neuro-fuzzi quadtre model \n hybrid neuro-fuzzi system have been in evid dure the past few year , due to it attract combin of the learn capac of artifici neural network with the interpret of the fuzzi system . thi articl propos a new hybrid neuro-fuzzi model , name hierarch neuro-fuzzi quadtre ( hnfq ) , which is base on a recurs partit method of the input space name quadtre . the articl describ the architectur of thi new model , present it basic cell and it learn algorithm . 
the hnfq system is evalu in three well known benchmark applic : the sinc(x , y ) function approxim , the mackey glass chaotic seri forecast and the two spiral problem . when compar to other neuro-fuzzi system , the hnfq exhibit compet result , with two major advantag it automat creat it own structur and it is not limit to few input variabl","ordered_present_kp":[44,50,0,21,333,471,594],"keyphrases":["hierarchical neuro-fuzzy quadtree","quadtree","neuro-fuzzy systems","fuzzy systems","recursive partitioning","learning algorithm","Mackey Glass chaotic series"],"prmu":["P","P","P","P","P","P","P"]} {"id":"630","title":"Score tests for zero-inflated Poisson models","abstract":"In many situations count data have a large proportion of zeros and the zero-inflated Poisson regression (ZIP) model may be appropriate. A simple score test for zero-inflation, comparing the ZIP model with a constant proportion of excess zeros to a standard Poisson regression model, was given by van den Broek (1995). We extend this test to the more general situation where the zero probability is allowed to depend on covariates. The performance of this test is evaluated using a simulation study. To identify potentially important covariates in the zero-inflation model a composite test is proposed. The use of the general score test and the composite procedure is illustrated on two examples from the literature. The composite score test is found to suggest appropriate models","tok_text":"score test for zero-infl poisson model \n in mani situat count data have a larg proport of zero and the zero-infl poisson regress ( zip ) model may be appropri . a simpl score test for zero-infl , compar the zip model with a constant proport of excess zero to a standard poisson regress model , wa given by van den broek ( 1995 ) . we extend thi test to the more gener situat where the zero probabl is allow to depend on covari . the perform of thi test is evalu use a simul studi . 
to identifi potenti import covari in the zero-infl model a composit test is propos . the use of the gener score test and the composit procedur is illustr on two exampl from the literatur . the composit score test is found to suggest appropri model","ordered_present_kp":[56,0,385,420,244,468,541],"keyphrases":["score tests","count data","excess zeros","zero probability","covariates","simulation","composite test","zero-inflated Poisson regression model"],"prmu":["P","P","P","P","P","P","P","R"]} {"id":"675","title":"Application foundations [application servers]","abstract":"The changing role of application servers means choosing the right platform has become a complex challenge","tok_text":"applic foundat [ applic server ] \n the chang role of applic server mean choos the right platform ha becom a complex challeng","ordered_present_kp":[17],"keyphrases":["application servers","Microsoft .Net","transaction processing","security","availability","load balancing","Java 2 Enterprise Edition"],"prmu":["P","U","U","U","U","U","U"]} {"id":"1220","title":"Modeling discourse in collaborative work support systems: a knowledge representation and configuration perspective","abstract":"Collaborative work processes usually raise a lot of intricate debates and negotiations among participants, whereas conflicts of interest are inevitable and support for achieving consensus and compromise is required. Individual contributions, brought up by parties with different backgrounds and interests, need to be appropriately structured and maintained. This paper presents a model of discourse acts that participants use to communicate their attitudes to each other, or affect the attitudes of others, in such environments. The first part deals with the knowledge representation and communication aspects of the problem, while the second one, in the context of an already implemented system, namely HERMES, with issues related to the configuration of the contributions asserted at each discourse instance. 
The overall work focuses on the machinery needed in a computer-assisted collaborative work environment, the aim being to further enhance the human-computer interaction","tok_text":"model discours in collabor work support system : a knowledg represent and configur perspect \n collabor work process usual rais a lot of intric debat and negoti among particip , wherea conflict of interest are inevit and support for achiev consensu and compromis is requir . individu contribut , brought up by parti with differ background and interest , need to be appropri structur and maintain . thi paper present a model of discours act that particip use to commun their attitud to each other , or affect the attitud of other , in such environ . the first part deal with the knowledg represent and commun aspect of the problem , while the second one , in the context of an alreadi implement system , name herm , with issu relat to the configur of the contribut assert at each discours instanc . the overal work focus on the machineri need in a computer-assist collabor work environ , the aim be to further enhanc the human-comput interact","ordered_present_kp":[18,51,184,239,252,707,919],"keyphrases":["collaborative work support systems","knowledge representation","conflicts of interest","consensus","compromise","HERMES","human-computer interaction","discourse modeling","knowledge communication"],"prmu":["P","P","P","P","P","P","P","R","R"]} {"id":"1265","title":"Optimization of requantization parameter for MPEG transcoding","abstract":"This paper considers transcoding in which an MPEG stream is converted to a low-bit-rate MPEG stream, and proposes a method in which the transcoding error can be reduced by optimally selecting the quantization parameter for each macroblock. In transcoding with a low compression ratio, it is crucial to prohibit transcoding with a requantization parameter which is 1 to 2 times the quantization parameter of the input stream. 
Consequently, as the first step, an optimization method for the requantization parameter is proposed which cares for the error propagation effect by interframe prediction. Then, the proposed optimization method is extended so that the method can also be applied to the case of a high compression ratio in which the rate-distortion curve is approximated for each macroblock in the range of requantization parameters larger than 2 times the quantization parameter. It is verified by a simulation experiment that the PSNR is improved by 0.5 to 0.8 dB compared to the case in which a 6 Mbit\/s MPEG stream is not optimized by twofold recompression","tok_text":"optim of requant paramet for mpeg transcod \n thi paper consid transcod in which an mpeg stream is convert to a low-bit-r mpeg stream , and propos a method in which the transcod error can be reduc by optim select the quantiz paramet for each macroblock . in transcod with a low compress ratio , it is crucial to prohibit transcod with a requant paramet which is 1 to 2 time the quantiz paramet of the input stream . consequ , as the first step , an optim method for the requant paramet is propos which care for the error propag effect by interfram predict . then , the propos optim method is extend so that the method can also be appli to the case of a high compress ratio in which the rate-distort curv is approxim for each macroblock in the rang of requant paramet larger than 2 time the quantiz paramet . 
it is verifi by a simul experi that the psnr is improv by 0.5 to 0.8 db compar to the case in which a 6 mbit \/ s mpeg stream is not optim by twofold recompress","ordered_present_kp":[111,168,241,277,514,537,685,847,825,948,909],"keyphrases":["low-bit-rate MPEG stream","transcoding error","macroblock","compression ratio","error propagation effect","interframe prediction","rate-distortion curve","simulation","PSNR","6 Mbit\/s","twofold recompression","requantization parameter optimization","rate conversion","rate control"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R","U","U"]} {"id":"1298","title":"An analytical model for a composite adaptive rectangular structure using the Heaviside function","abstract":"The objective of this article is to describe a mathematical model, based on the Heaviside function and on the delta -Dirac distribution, for a composite adaptive rectangular structure with embedded and\/or bonded piezoelectric actuators and sensors. In the adopted structure model, the laminae are made up a configuration of rectangular nonpiezoelectric and piezoelectric patches. The laminae do not all have the same area nor do they present the same configuration, such that there are points where there is no material. The equations of motion and the boundary conditions, which describe the electromechanical coupling, are based on the Mindlin displacement field, on the linear theory of piezoelectricity, and on the Hamilton principle","tok_text":"an analyt model for a composit adapt rectangular structur use the heavisid function \n the object of thi articl is to describ a mathemat model , base on the heavisid function and on the delta -dirac distribut , for a composit adapt rectangular structur with embed and\/or bond piezoelectr actuat and sensor . in the adopt structur model , the lamina are made up a configur of rectangular nonpiezoelectr and piezoelectr patch . 
the lamina do not all have the same area nor do they present the same configur , such that there are point where there is no materi . the equat of motion and the boundari condit , which describ the electromechan coupl , are base on the mindlin displac field , on the linear theori of piezoelectr , and on the hamilton principl","ordered_present_kp":[22,127,66,275,405,563,587,623,661,734],"keyphrases":["composite adaptive rectangular structure","Heaviside function","mathematical model","piezoelectric actuators","piezoelectric patches","equations of motion","boundary conditions","electromechanical coupling","Mindlin displacement field","Hamilton principle","delta-Dirac distribution","embedded actuators","embedded sensors","bonded actuators","bonded sensors","piezoelectric sensors","nonpiezoelectric patches","closed-form solution","Lagrangian functions","linear piezoelectricity","constitutive relations","virtual kinetic energy","rectangular composite plate","finite-element method"],"prmu":["P","P","P","P","P","P","P","P","P","P","M","R","R","R","R","R","R","U","M","R","U","U","M","U"]} {"id":"688","title":"Active vibration control of piezolaminated smart beams","abstract":"This paper deals with the active vibration control of beam like structures with distributed piezoelectric sensor and actuator layers bonded on top and bottom surfaces of the beam. A finite element model based on Euler-Bernoulli beam theory has been developed. The contribution of the piezoelectric sensor and actuator layers on the mass and stiffness of the beam is considered. Three types of classical control strategies, namely direct proportional feedback, constant-gain negative velocity feedback and Lyapunov feedback and an optimal control strategy, linear quadratic regulator (LQR) scheme are applied to study their control effectiveness. 
Also, the control performance with different types of loading, such as impulse loading, step loading, harmonic and random loading is studied","tok_text":"activ vibrat control of piezolamin smart beam \n thi paper deal with the activ vibrat control of beam like structur with distribut piezoelectr sensor and actuat layer bond on top and bottom surfac of the beam . a finit element model base on euler-bernoulli beam theori ha been develop . the contribut of the piezoelectr sensor and actuat layer on the mass and stiff of the beam is consid . three type of classic control strategi , name direct proport feedback , constant-gain neg veloc feedback and lyapunov feedback and an optim control strategi , linear quadrat regul ( lqr ) scheme are appli to studi their control effect . also , the control perform with differ type of load , such as impuls load , step load , harmon and random load is studi","ordered_present_kp":[0,24,96,182,212,240,350,359,435,461,498,523,548,609,688,702,725],"keyphrases":["active vibration control","piezolaminated smart beams","beam like structures","bottom surfaces","finite element model","Euler-Bernoulli beam theory","mass","stiffness","direct proportional feedback","constant-gain negative velocity feedback","Lyapunov feedback","optimal control strategy","linear quadratic regulator","control effectiveness","impulse loading","step loading","random loading","distributed piezoelectric sensor layers","distributed piezoelectric actuator layers","top surfaces","harmonic loading"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R","R"]} {"id":"574","title":"A novel approach for the detection of pathlines in X-ray angiograms: the wavefront propagation algorithm","abstract":"Presents a new pathline approach, based on the wavefront propagation principle, and developed in order to reduce the variability in the outcomes of the quantitative coronary artery analysis. 
This novel approach, called wavepath, reduces the influence of the user-defined start- and endpoints of the vessel segment and is therefore more robust and improves the reproducibility of the lesion quantification substantially. The validation study shows that the wavepath method is totally constant in the middle part of the pathline, even when using the method for constructing a bifurcation or sidebranch pathline. Furthermore, the number of corrections needed to guide the wavepath through the correct vessel is decreased from an average of 0.44 corrections per pathline to an average of 0.12 per pathline. Therefore, it can be concluded that the wavepath algorithm improves the overall analysis substantially","tok_text":"a novel approach for the detect of pathlin in x-ray angiogram : the wavefront propag algorithm \n present a new pathlin approach , base on the wavefront propag principl , and develop in order to reduc the variabl in the outcom of the quantit coronari arteri analysi . thi novel approach , call wavepath , reduc the influenc of the user-defin start- and endpoint of the vessel segment and is therefor more robust and improv the reproduc of the lesion quantif substanti . the valid studi show that the wavepath method is total constant in the middl part of the pathlin , even when use the method for construct a bifurc or sidebranch pathlin . furthermor , the number of correct need to guid the wavepath through the correct vessel is decreas from an averag of 0.44 correct per pathlin to an averag of 0.12 per pathlin . 
therefor , it can be conclud that the wavepath algorithm improv the overal analysi substanti","ordered_present_kp":[142,233,368,442,499,609,619,667,713,68,46],"keyphrases":["X-ray angiograms","wavefront propagation algorithm","wavefront propagation principle","quantitative coronary artery analysis","vessel segment","lesion quantification","wavepath method","bifurcation","sidebranch pathline","corrections","correct vessel","user-defined startpoints","user-defined endpoints"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","M","R"]} {"id":"1121","title":"Optimal bandwidth utilization of all-optical ring with a converter of degree 4","abstract":"In many models of all-optical routing, a set of communication paths in a network is given, and a wavelength is to be assigned to each path so that paths sharing an edge receive different wavelengths. The goal is to assign as few wavelengths as possible, in order to use the optical bandwidth efficiently. If a node of a network contains a wavelength converter, any path that passes through this node may change its wavelength. Having converters at some of the nodes can reduce the number of wavelengths required for routing. This paper presents a wavelength converter with degree 4 and gives a routing algorithm which shows that any routing with load L can be realized with L wavelengths when a node of an all-optical ring hosts such a wavelength converter. It is also proved that 4 is the minimum degree of the converter to reach the full utilization of the available wavelengths if only one node of an all-optical ring hosts a converter","tok_text":"optim bandwidth util of all-opt ring with a convert of degre 4 \n in mani model of all-opt rout , a set of commun path in a network is given , and a wavelength is to be assign to each path so that path share an edg receiv differ wavelength . the goal is to assign as few wavelength as possibl , in order to use the optic bandwidth effici . 
if a node of a network contain a wavelength convert , ani path that pass through thi node may chang it wavelength . have convert at some of the node can reduc the number of wavelength requir for rout . thi paper present a wavelength convert with degre 4 and give a rout algorithm which show that ani rout with load l can be realiz with l wavelength when a node of an all-opt ring host such a wavelength convert . it is also prove that 4 is the minimum degre of the convert to reach the full util of the avail wavelength if onli one node of an all-opt ring host a convert","ordered_present_kp":[82,106,372,24],"keyphrases":["all-optical ring","all-optical routing","communication paths","wavelength converter","all-optical network","wavelength assignment","wavelength translation"],"prmu":["P","P","P","P","R","R","M"]} {"id":"1164","title":"Friedberg numberings of families of n-computably enumerable sets","abstract":"We establish a number of results on numberings, in particular, on Friedberg numberings, of families of d.c.e. sets. First, it is proved that there exists a Friedberg numbering of the family of all d.c.e. sets. We also show that this result, patterned on Friedberg's famous theorem for the family of all c.e. sets, holds for the family of all n-c.e. sets for any n > 2. Second, it is stated that there exists an infinite family of d.c.e. sets without a Friedberg numbering. Third, it is shown that there exists an infinite family of c.e. sets (treated as a family of d.c.e. sets) with a numbering which is unique up to equivalence. Fourth, it is proved that there exists a family of d.c.e. sets with a least numbering (under reducibility) which is Friedberg but is not the only numbering (modulo reducibility)","tok_text":"friedberg number of famili of n-comput enumer set \n we establish a number of result on number , in particular , on friedberg number , of famili of d.c.e . set . first , it is prove that there exist a friedberg number of the famili of all d.c.e . set . 
we also show that thi result , pattern on friedberg 's famou theorem for the famili of all c.e . set , hold for the famili of all n-c.e . set for ani n > 2 . second , it is state that there exist an infinit famili of d.c.e . set without a friedberg number . third , it is shown that there exist an infinit famili of c.e . set ( treat as a famili of d.c.e . set ) with a number which is uniqu up to equival . fourth , it is prove that there exist a famili of d.c.e . set with a least number ( under reduc ) which is friedberg but is not the onli number ( modulo reduc )","ordered_present_kp":[0,451,20],"keyphrases":["Friedberg numberings","families of n-computably enumerable sets","infinite family","computability theory"],"prmu":["P","P","P","U"]} {"id":"549","title":"Taking it to the max [ventilation systems]","abstract":"Raising the volumetric air supply rate is one way of increasing the cooling capacity of displacement ventilation systems. David Butler and Michael Swainson explore how different types of diffusers can help make this work","tok_text":"take it to the max [ ventil system ] \n rais the volumetr air suppli rate is one way of increas the cool capac of displac ventil system . david butler and michael swainson explor how differ type of diffus can help make thi work","ordered_present_kp":[48,99,113,197],"keyphrases":["volumetric air supply rate","cooling capacity","displacement ventilation systems","diffusers"],"prmu":["P","P","P","P"]} {"id":"1159","title":"Sigma -admissible families over linear orders","abstract":"Admissible sets of the form HYP(M), where M is a recursively saturated system, are treated. We provide descriptions of subsets M, which are Sigma \/sub *\/-sets in HYP(M), and of families of subsets M, which form Sigma -regular families in HYP(M), in terms of the concept of being fundamental couched in the article. 
Fundamental subsets and families are characterized for models of dense linear orderings","tok_text":"sigma -admiss famili over linear order \n admiss set of the form hyp(m ) , where m is a recurs satur system , are treat . we provid descript of subset m , which are sigma \/sub * \/-set in hyp(m ) , and of famili of subset m , which form sigma -regular famili in hyp(m ) , in term of the concept of be fundament couch in the articl . fundament subset and famili are character for model of dens linear order","ordered_present_kp":[0,26,64,87,331,386],"keyphrases":["Sigma -admissible families","linear orders","HYP(M)","recursively saturated system","fundamental subsets","dense linear orderings"],"prmu":["P","P","P","P","P","P"]} {"id":"120","title":"Self-organized critical traffic in parallel computer networks","abstract":"In a recent paper, we analysed the dynamics of traffic flow in a simple, square lattice architecture. It was shown that a phase transition takes place between a free and a congested phase. The transition point was shown to exhibit optimal information transfer and wide fluctuations in time, with scale-free properties. In this paper, we further extend our analysis by considering a generalization of the previous model in which the rate of packet emission is regulated by the local congestion perceived by each node. As a result of the feedback between traffic congestion and packet release, the system is poised at criticality. Many well-known statistical features displayed by Internet traffic are recovered from our model in a natural way","tok_text":"self-organ critic traffic in parallel comput network \n in a recent paper , we analys the dynam of traffic flow in a simpl , squar lattic architectur . it wa shown that a phase transit take place between a free and a congest phase . the transit point wa shown to exhibit optim inform transfer and wide fluctuat in time , with scale-fre properti . 
in thi paper , we further extend our analysi by consid a gener of the previou model in which the rate of packet emiss is regul by the local congest perceiv by each node . as a result of the feedback between traffic congest and packet releas , the system is pois at critic . mani well-known statist featur display by internet traffic are recov from our model in a natur way","ordered_present_kp":[0,170,216,236,270,296,325,403,451,573,636,662,124,29],"keyphrases":["self-organized critical traffic","parallel computer networks","square lattice architecture","phase transition","congested phase","transition point","optimal information transfer","wide fluctuations","scale-free properties","generalization","packet emission","packet release","statistical features","Internet traffic","traffic flow dynamics","free phase"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R"]} {"id":"963","title":"A computational model of learned avoidance behavior in a one-way avoidance experiment","abstract":"We present a computational model of learned avoidance behavior in a one-way avoidance experiment. Our model employs the reinforcement learning paradigm and a temporal-difference algorithm to implement both classically conditioned and instrumentally conditioned components. The role of the classically conditioned component is to develop an expectation of future benefit that is a function of the learning system's state and action. Competition among the instrumentally conditioned components determines the overt behavior generated by the learning system. Our model displays, in simulation, the reduced latency of the avoidance behavior during learning with continuing trials and the resistance to extinction of the avoidance response. These results are consistent with experimentally observed animal behavior. 
Our model extends the traditional two-process learning mechanism of Mowrer (1947) by explicitly defining the mechanisms of proprioceptive feedback, an internal clock, and generalization over the action space","tok_text":"a comput model of learn avoid behavior in a one-way avoid experi \n we present a comput model of learn avoid behavior in a one-way avoid experi . our model employ the reinforc learn paradigm and a temporal-differ algorithm to implement both classic condit and instrument condit compon . the role of the classic condit compon is to develop an expect of futur benefit that is a function of the learn system 's state and action . competit among the instrument condit compon determin the overt behavior gener by the learn system . our model display , in simul , the reduc latenc of the avoid behavior dure learn with continu trial and the resist to extinct of the avoid respons . these result are consist with experiment observ anim behavior . our model extend the tradit two-process learn mechan of mowrer ( 1947 ) by explicitli defin the mechan of propriocept feedback , an intern clock , and gener over the action space","ordered_present_kp":[2,18,44,166,196,302,259,561,723,760,845,871],"keyphrases":["computational model","learned avoidance behavior","one-way avoidance experiment","reinforcement learning","temporal-difference algorithm","instrumentally conditioned components","classically conditioned components","reduced latency","animal behavior","traditional two-process learning mechanism","proprioceptive feedback","internal clock"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"926","title":"Experimental investigation of active vibration control using neural networks and piezoelectric actuators","abstract":"The use of neural networks for identification and control of smart structures is investigated experimentally. 
Piezoelectric actuators are employed to suppress the vibrations of a cantilevered plate subject to impulse, sine wave and band-limited white noise disturbances. The neural networks used are multilayer perceptrons trained with error backpropagation. Validation studies show that the identifier predicts the system dynamics accurately. The controller is trained adaptively with the help of the neural identifier. Experimental results demonstrate excellent closed-loop performance and robustness of the neurocontroller","tok_text":"experiment investig of activ vibrat control use neural network and piezoelectr actuat \n the use of neural network for identif and control of smart structur is investig experiment . piezoelectr actuat are employ to suppress the vibrat of a cantilev plate subject to impuls , sine wave and band-limit white nois disturb . the neural network use are multilay perceptron train with error backpropag . valid studi show that the identifi predict the system dynam accur . the control is train adapt with the help of the neural identifi . experiment result demonstr excel closed-loop perform and robust of the neurocontrol","ordered_present_kp":[23,48,67,118,36,141,239,299,347,378,564,588,602],"keyphrases":["active vibration control","control","neural networks","piezoelectric actuators","identification","smart structures","cantilevered plate","white noise disturbances","multilayer perceptrons","error backpropagation","closed-loop performance","robustness","neurocontroller","vibration suppression"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"648","title":"Study of ambiguities inherent to the spectral analysis of Voigt profiles-a modified Simplex approach","abstract":"In pulsed spectrometries, temporal transients are often analyzed directly in the temporal domain, assuming they consist only of purely exponentially decaying sinusoids. 
When experimental spectra actually consist of Gaussian or Voigt profiles (Gauss-Lorentz profiles), we show that the direct methods may erroneously interpret such lines as the sum of two or more Lorentzian profiles. Using a Nelder and Mead Simplex method, modified by introducing new means to avoid degeneracies and quenchings in secondary minima, we demonstrate that a large number of different solutions can be obtained with equivalent accuracy over the limited acquisition time interval, with final peak parameters devoid of physical or chemical meaning","tok_text":"studi of ambigu inher to the spectral analysi of voigt profiles-a modifi simplex approach \n in puls spectrometri , tempor transient are often analyz directli in the tempor domain , assum they consist onli of pure exponenti decay sinusoid . when experiment spectra actual consist of gaussian or voigt profil ( gauss-lorentz profil ) , we show that the direct method may erron interpret such line as the sum of two or more lorentzian profil . use a nelder and mead simplex method , modifi by introduc new mean to avoid degeneraci and quench in secondari minima , we demonstr that a larg number of differ solut can be obtain with equival accuraci over the limit acquisit time interv , with final peak paramet devoid of physic or chemic mean","ordered_present_kp":[95,115,29,49,309,447,635,653,687],"keyphrases":["spectral analysis","Voigt profiles","pulsed spectrometries","temporal transients","Gauss-Lorentz profiles","Nelder and Mead Simplex method","accuracy","limited acquisition time interval","final peak parameters","Gaussian profiles"],"prmu":["P","P","P","P","P","P","P","P","P","R"]} {"id":"72","title":"A three-tier technology training strategy in a dynamic business environment","abstract":"As end-user training becomes increasingly important in today's technology-intensive business environment, progressive companies remain alert to find ways to provide their end users with timely training and resources. 
This paper describes an innovative training strategy adopted by one midsize organization to provide its end users with adequate, flexible, and responsive training. The paper then compares the three-tier strategy with other models described in technology training literature. Managers who supervise technology end users in organizations comparable to the one in the study may find the three-tier strategy workable and may want to use it in their own training programs to facilitate training and improve end-user skills. Researchers and scholars may find that the idea of three-tier training generates new opportunities for research","tok_text":"a three-tier technolog train strategi in a dynam busi environ \n as end-us train becom increasingli import in today 's technology-intens busi environ , progress compani remain alert to find way to provid their end user with time train and resourc . thi paper describ an innov train strategi adopt by one midsiz organ to provid it end user with adequ , flexibl , and respons train . the paper then compar the three-tier strategi with other model describ in technolog train literatur . manag who supervis technolog end user in organ compar to the one in the studi may find the three-tier strategi workabl and may want to use it in their own train program to facilit train and improv end-us skill . 
research and scholar may find that the idea of three-tier train gener new opportun for research","ordered_present_kp":[2,43,67,118,160,269,303,310],"keyphrases":["three-tier technology training strategy","dynamic business environment","end-user training","technology-intensive business environment","companies","innovative training strategy","midsize organization","organizations"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"1258","title":"Implementation and performance evaluation of a FIFO queue class library for time warp","abstract":"The authors describe the implementation, use, and performance evaluation of a FIFO queue class library by means of a high-performance, easy-to-use interface employed for queuing simulations in parallel discrete simulations based on the time warp method. Various general-purpose simulation libraries and languages have been proposed, and among these some have the advantage of not requiring users to define anything other than the state vector, and not needing awareness of rollback under a platform which performs state control based on copies. However, because the state vectors must be defined as simple data structures without pointers, dynamic data structures such as a FIFO queue cannot be handled directly. Under the proposed class library, both the platform and the user can handle such structures in the same fashion that embedded data structures are handled. In addition, instead of all stored data, just the operational history can be stored and recovered efficiently at an effectively minimal cost by taking advantage of the first-in-first-out characteristics of the above data structures. 
When the kernel deletes past state histories during a simulation, garbage collection is also performed transparently using the corresponding method","tok_text":"implement and perform evalu of a fifo queue class librari for time warp \n the author describ the implement , use , and perform evalu of a fifo queue class librari by mean of a high-perform , easy-to-us interfac employ for queu simul in parallel discret simul base on the time warp method . variou general-purpos simul librari and languag have been propos , and among these some have the advantag of not requir user to defin anyth other than the state vector , and not need awar of rollback under a platform which perform state control base on copi . howev , becaus the state vector must be defin as simpl data structur without pointer , dynam data structur such as a fifo queue can not be handl directli . under the propos class librari , both the platform and the user can handl such structur in the same fashion that embed data structur are handl . in addit , instead of all store data , just the oper histori can be store and recov effici at an effect minim cost by take advantag of the first-in-first-out characterist of the abov data structur . 
when the kernel delet past state histori dure a simul , garbag collect is also perform transpar use the correspond method","ordered_present_kp":[33,44,14,191,222,236,297,445,637,819,899,990,1106],"keyphrases":["performance evaluation","FIFO queue","class library","easy-to-use interface","queuing simulations","parallel discrete simulations","general-purpose simulation libraries","state vectors","dynamic data structures","embedded data structures","operational history","first-in-first-out characteristics","garbage collection","time warp simulation","simulation languages","object oriented method","state management"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","M","M"]} {"id":"1345","title":"Infrared-image classification using hidden Markov trees","abstract":"An image of a three-dimensional target is generally characterized by the visible target subcomponents, with these dictated by the target-sensor orientation (target pose). An image often changes quickly with variable pose. We define a class as a set of contiguous target-sensor orientations over which the associated target image is relatively stationary with aspect. Each target is in general characterized by multiple classes. A distinct set of Wiener filters are employed for each class of images, to identify the presence of target subcomponents. A Karhunen-Loeve representation is used to minimize the number of filters (templates) associated with a given subcomponent. The statistical relationships between the different target subcomponents are modeled via a hidden Markov tree (HMT). The HMT classifier is discussed and example results are presented for forward-looking-infrared (FLIR) imagery of several vehicles","tok_text":"infrared-imag classif use hidden markov tree \n an imag of a three-dimension target is gener character by the visibl target subcompon , with these dictat by the target-sensor orient ( target pose ) . an imag often chang quickli with variabl pose . 
we defin a class as a set of contigu target-sensor orient over which the associ target imag is rel stationari with aspect . each target is in gener character by multipl class . a distinct set of wiener filter are employ for each class of imag , to identifi the presenc of target subcompon . a karhunen-loev represent is use to minim the number of filter ( templat ) associ with a given subcompon . the statist relationship between the differ target subcompon are model via a hidden markov tree ( hmt ) . the hmt classifi is discuss and exampl result are present for forward-looking-infrar ( flir ) imageri of sever vehicl","ordered_present_kp":[0,26,160,183,276,442,540,574,743,862],"keyphrases":["infrared-image classification","hidden Markov trees","target-sensor orientation","target pose","contiguous target-sensor orientations","Wiener filters","Karhunen-Loeve representation","minimization","HMT","vehicles","IR image classification","3D target image","forward-looking-infrared imagery","FLIR imagery"],"prmu":["P","P","P","P","P","P","P","P","P","P","M","M","R","R"]} {"id":"1300","title":"Will CPXe save the photofinishing market?","abstract":"A consortium of film suppliers and electronics firms has proposed the Common Picture Exchange environment. It will let diverse providers cooperate via the Internet to sell digital-photo prints","tok_text":"will cpxe save the photofinish market ? \n a consortium of film supplier and electron firm ha propos the common pictur exchang environ . 
it will let divers provid cooper via the internet to sell digital-photo print","ordered_present_kp":[5,19,104],"keyphrases":["CPXe","photofinishing market","Common Picture Exchange environment","Kodak","Fujifilm","HP","Web-services standards"],"prmu":["P","P","P","U","U","U","U"]} {"id":"755","title":"Hardware and software platform for real-time processing and visualization of echographic radiofrequency signals","abstract":"In this paper the architecture of a hardware and software platform, for ultrasonic investigation is presented. The platform, used in conjunction with an analog front-end hardware for driving the ultrasonic transducers of any commercial echograph, having the radiofrequency echo signal access, make it possible to dispose of a powerful echographic system for experimenting any processing technique, also in a clinical environment in which real-time operation mode is an essential prerequisite. The platform transforms any echograph into a test-system for evaluating the diagnostic effectiveness of new investigation techniques. A particular user interface was designed in order to allow a real-time and simultaneous visualization of the results produced in the different stages of the chosen processing procedure. This is aimed at obtaining a better optimization of the processing algorithm. The most important platform aspect, which also constitutes the basic differentiation with respect to similar systems, is the direct processing of the radiofrequency echo signal, which is essential for a complete analysis of the particular ultrasound-media interaction phenomenon. The platform completely integrates the architecture of a personal computer (PC) giving rise to several benefits, such as the quick technological evolution in the PC field and an extreme degree of programmability for different applications. 
The PC also constitutes the user interface, as a flexible and intuitive visualization support, and performs some software signal processing, by custom algorithms and commercial libraries. The realized close synergy between hardware and software allows the acquisition and real-time processing of the echographic radiofrequency (RF) signal with fast data representation","tok_text":"hardwar and softwar platform for real-tim process and visual of echograph radiofrequ signal \n in thi paper the architectur of a hardwar and softwar platform , for ultrason investig is present . the platform , use in conjunct with an analog front-end hardwar for drive the ultrason transduc of ani commerci echograph , have the radiofrequ echo signal access , make it possibl to dispos of a power echograph system for experi ani process techniqu , also in a clinic environ in which real-tim oper mode is an essenti prerequisit . the platform transform ani echograph into a test-system for evalu the diagnost effect of new investig techniqu . a particular user interfac wa design in order to allow a real-tim and simultan visual of the result produc in the differ stage of the chosen process procedur . thi is aim at obtain a better optim of the process algorithm . the most import platform aspect , which also constitut the basic differenti with respect to similar system , is the direct process of the radiofrequ echo signal , which is essenti for a complet analysi of the particular ultrasound-media interact phenomenon . the platform complet integr the architectur of a person comput ( pc ) give rise to sever benefit , such as the quick technolog evolut in the pc field and an extrem degre of programm for differ applic . the pc also constitut the user interfac , as a flexibl and intuit visual support , and perform some softwar signal process , by custom algorithm and commerci librari . 
the realiz close synergi between hardwar and softwar allow the acquisit and real-tim process of the echograph radiofrequ ( rf ) signal with fast data represent","ordered_present_kp":[64,33,12,654,1172],"keyphrases":["software platform","real-time processing","echographic radiofrequency signal","user interface","personal computer","data visualization","hardware platform","ultrasonic imaging","clinical diagnosis"],"prmu":["P","P","P","P","P","R","R","M","M"]} {"id":"710","title":"Optimal allocation of runs in a simulation metamodel with several independent variables","abstract":"Cheng and Kleijnen (1999) propose a very general regression metamodel for modelling the output of a queuing system. Its main limitations are that the regression function is based on a polynomial and that it can use only one independent variable. These limitations are removed here. We derive an explicit formula for the optimal way of assigning simulation runs to the different design points","tok_text":"optim alloc of run in a simul metamodel with sever independ variabl \n cheng and kleijnen ( 1999 ) propos a veri gener regress metamodel for model the output of a queu system . it main limit are that the regress function is base on a polynomi and that it can use onli one independ variabl . these limit are remov here . we deriv an explicit formula for the optim way of assign simul run to the differ design point","ordered_present_kp":[24,51,112,162,203],"keyphrases":["simulation metamodel","independent variables","general regression metamodel","queuing system","regression function","optimal runs allocation"],"prmu":["P","P","P","P","P","R"]} {"id":"1044","title":"Analogue realizations of fractional-order controllers","abstract":"An approach to the design of analogue circuits, implementing fractional-order controllers, is presented. 
The suggested approach is based on the use of continued fraction expansions; in the case of negative coefficients in a continued fraction expansion, the use of negative impedance converters is proposed. Several possible methods for obtaining suitable rational approximations and continued fraction expansions are discussed. An example of realization of a fractional-order I\/sup lambda \/ controller is presented and illustrated by obtained measurements. The suggested approach can be used for the control of very fast processes, where the use of digital controllers is difficult or impossible","tok_text":"analogu realiz of fractional-ord control \n an approach to the design of analogu circuit , implement fractional-ord control , is present . the suggest approach is base on the use of continu fraction expans ; in the case of neg coeffici in a continu fraction expans , the use of neg imped convert is propos . sever possibl method for obtain suitabl ration approxim and continu fraction expans are discuss . an exampl of realiz of a fractional-ord i \/ sup lambda \/ control is present and illustr by obtain measur . 
the suggest approach can be use for the control of veri fast process , where the use of digit control is difficult or imposs","ordered_present_kp":[0,18,181,222,189,277,347,568,600],"keyphrases":["analogue realizations","fractional-order controllers","continued fraction expansions","fraction expansion","negative coefficients","negative impedance converters","rational approximations","fast processes","digital controllers","fractional differentiation","fractional integration"],"prmu":["P","P","P","P","P","P","P","P","P","M","M"]} {"id":"1001","title":"A conflict between language and atomistic information","abstract":"Fred Dretske and Jerry Fodor are responsible for popularizing three well-known theses in contemporary philosophy of mind: the thesis of Information-Based Semantics (IBS), the thesis of Content Atomism (Atomism) and the thesis of the Language of Thought (LOT). LOT concerns the semantically relevant structure of representations involved in cognitive states such as beliefs and desires. It maintains that all such representations must have syntactic structures mirroring the structure of their contents. IBS is a thesis about the nature of the relations that connect cognitive representations and their parts to their contents (semantic relations). It holds that these relations supervene solely on relations of the kind that support information content, perhaps with some help from logical principles of combination. Atomism is a thesis about the nature of the content of simple symbols. It holds that each substantive simple symbol possesses its content independently of all other symbols in the representational system. 
I argue that Dretske's and Fodor's theories are false and that their falsehood results from a conflict between IBS and Atomism, on the one hand, and LOT, on the other","tok_text":"a conflict between languag and atomist inform \n fred dretsk and jerri fodor are respons for popular three well-known these s in contemporari philosophi of mind : the thesi of information-bas semant ( ib ) , the thesi of content atom ( atom ) and the thesi of the languag of thought ( lot ) . lot concern the semant relev structur of represent involv in cognit state such as belief and desir . it maintain that all such represent must have syntact structur mirror the structur of their content . ib is a thesi about the natur of the relat that connect cognit represent and their part to their content ( semant relat ) . it hold that these relat superven sole on relat of the kind that support inform content , perhap with some help from logic principl of combin . atom is a thesi about the natur of the content of simpl symbol . it hold that each substant simpl symbol possess it content independ of all other symbol in the represent system . i argu that dretsk 's and fodor 's theori are fals and that their falsehood result from a conflict ib and atom , on the one hand , and lot , on the other","ordered_present_kp":[141,175,220,200,263,284,353,374,385],"keyphrases":["philosophy of mind","Information-Based Semantics","IBS","Content Atomism","Language of Thought","LOT","cognitive states","beliefs","desires"],"prmu":["P","P","P","P","P","P","P","P","P"]} {"id":"883","title":"On conflict-free executions of elementary nets","abstract":"Deals with analysis of elementary Petri nets with respect to possibilities of avoiding conflicts during their executions. There are two main aims of the paper. The first is to find a method of checking if a net is conflict-avoidable (i.e., if it possesses a conflict-free fair run). 
The second is to find a method of rebuilding any net to a totally conflict-avoidable net (i.e., a net possessing a conflict-free fair run in every one process) with the same behaviour. The main results are the following: 1. The proof of decidability, for elementary nets, of the problem of existence of a conflict-avoidable fair process (and an algorithm producing all fair runs). 2. Construction, for an arbitrary given elementary net, of a totally conflict-avoidable net with the same behaviour. The net, completed this way, has the same behaviour as the original one. Moreover, it is totally conflict-avoidable, and its execution may be supervised (in order to ensure conflict-freeness) by the reduced case graph built by the algorithm of the former section","tok_text":"on conflict-fre execut of elementari net \n deal with analysi of elementari petri net with respect to possibl of avoid conflict dure their execut . there are two main aim of the paper . the first is to find a method of check if a net is conflict-avoid ( i.e. , if it possess a conflict-fre fair run ) . the second is to find a method of rebuild ani net to a total conflict-avoid net ( i.e. , a net possess a conflict-fre fair run in everi one process ) with the same behaviour . the main result are the follow : 1 . the proof of decid , for elementari net , of the problem of exist of a conflict-avoid fair process ( and an algorithm produc all fair run ) . 2 . construct , for an arbitrari given elementari net , of a total conflict-avoid net with the same behaviour . the net , complet thi way , ha the same behaviour as the origin one . 
moreov , it is total conflict-avoid , and it execut may be supervis ( in order to ensur conflict-fre ) by the reduc case graph built by the algorithm of the former section","ordered_present_kp":[3,64,276,357,528,949],"keyphrases":["conflict-free executions","elementary Petri nets","conflict-free fair run","totally conflict-avoidable net","decidability","reduced case graph"],"prmu":["P","P","P","P","P","P"]} {"id":"13","title":"Stability analysis of the characteristic polynomials whose coefficients are polynomials of interval parameters using monotonicity","abstract":"We analyze the stability of the characteristic polynomials whose coefficients are polynomials of interval parameters via monotonicity methods. Our stability conditions are based on Frazer-Duncan's theorem and all conditions can be checked using only endpoint values of interval parameters. These stability conditions are necessary and sufficient under the monotonicity assumptions. When the monotonicity conditions do not hold on the whole parameter region, we present an interval division method and a transformation algorithm in order to apply the monotonicity conditions. Then, our stability analysis methods can be applied to all characteristic polynomials whose coefficients are polynomials of interval parameters","tok_text":"stabil analysi of the characterist polynomi whose coeffici are polynomi of interv paramet use monoton \n we analyz the stabil of the characterist polynomi whose coeffici are polynomi of interv paramet via monoton method . our stabil condit are base on frazer-duncan 's theorem and all condit can be check use onli endpoint valu of interv paramet . these stabil condit are necessari and suffici under the monoton assumpt . when the monoton condit do not hold on the whole paramet region , we present an interv divis method and a transform algorithm in order to appli the monoton condit . 
then , our stabil analysi method can be appli to all characterist polynomi whose coeffici are polynomi of interv paramet","ordered_present_kp":[0,22,75,94,313,501,527],"keyphrases":["stability analysis","characteristic polynomials","interval parameters","monotonicity","endpoint values","interval division method","transformation algorithm","Frazer-Duncan theorem","necessary and sufficient conditions"],"prmu":["P","P","P","P","P","P","P","R","R"]} {"id":"56","title":"New thinking on rendering","abstract":"Looks at how graphics hardware solves a range of rendering problems","tok_text":"new think on render \n look at how graphic hardwar solv a rang of render problem","ordered_present_kp":[13,34],"keyphrases":["rendering","graphics hardware","programmability","Gourand-shaded image","color values"],"prmu":["P","P","U","U","U"]} {"id":"629","title":"Calibrated initials for an EM applied to recursive models of categorical variables","abstract":"The estimates from an EM, when it is applied to a large causal model of 10 or more categorical variables, are often subject to the initial values for the estimates. This phenomenon becomes more serious as the model structure becomes more complicated involving more variables. As a measure of compensation for this, it has been recommended in literature that EMs are implemented several times with different sets of initial values to obtain more appropriate estimates. We propose an improved approach for initial values. The main idea is that we use initials that are calibrated to data. A simulation result strongly indicates that the calibrated initials give rise to the estimates that are far closer to the true values than the initials that are not calibrated","tok_text":"calibr initi for an em appli to recurs model of categor variabl \n the estim from an em , when it is appli to a larg causal model of 10 or more categor variabl , are often subject to the initi valu for the estim . 
thi phenomenon becom more seriou as the model structur becom more complic involv more variabl . as a measur of compens for thi , it ha been recommend in literatur that em are implement sever time with differ set of initi valu to obtain more appropri estim . we propos an improv approach for initi valu . the main idea is that we use initi that are calibr to data . a simul result strongli indic that the calibr initi give rise to the estim that are far closer to the true valu than the initi that are not calibr","ordered_present_kp":[20,32,48,0,111,186,580],"keyphrases":["calibrated initials","EM","recursive models","categorical variables","large causal model","initial values","simulation"],"prmu":["P","P","P","P","P","P","P"]} {"id":"1239","title":"Three-dimensional global MHD simulation code for the Earth's magnetosphere using HPF\/JA","abstract":"We have translated a three-dimensional magnetohydrodynamic (MHD) simulation code of the Earth's magnetosphere from VPP Fortran to HPF\/JA on the Fujitsu VPP5000\/56 vector-parallel supercomputer and the MHD code was fully vectorized and fully parallelized in VPP Fortran. The entire performance and capability of the HPF MHD code could be shown to be almost comparable to that of VPP Fortran. A three-dimensional global MHD simulation of the Earth's magnetosphere was performed at a speed of over 400 Gflops with an efficiency of 76.5% using 56 processing elements of the Fujitsu VPP5000\/56 in vector and parallel computation that permitted comparison with catalog values. 
We have concluded that fluid and MHD codes that are fully vectorized and fully parallelized in VPP Fortran can be translated with relative ease to HPF\/JA, and a code in HPF\/JA may be expected to perform comparably to the same code written in VPP Fortran","tok_text":"three-dimension global mhd simul code for the earth 's magnetospher use hpf \/ ja \n we have translat a three-dimension magnetohydrodynam ( mhd ) simul code of the earth 's magnetospher from vpp fortran to hpf \/ ja on the fujitsu vpp5000\/56 vector-parallel supercomput and the mhd code wa fulli vector and fulli parallel in vpp fortran . the entir perform and capabl of the hpf mhd code could be shown to be almost compar to that of vpp fortran . a three-dimension global mhd simul of the earth 's magnetospher wa perform at a speed of over 400 gflop with an effici of 76.5 % use 56 process element of the fujitsu vpp5000\/56 in vector and parallel comput that permit comparison with catalog valu . we have conclud that fluid and mhd code that are fulli vector and fulli parallel in vpp fortran can be translat with rel eas to hpf \/ ja , and a code in hpf \/ ja may be expect to perform compar to the same code written in vpp fortran","ordered_present_kp":[239,220,372,23,637],"keyphrases":["MHD simulation","Fujitsu VPP5000\/56","vector-parallel supercomputer","HPF MHD code","parallel computation","magnetohydrodynamic simulation"],"prmu":["P","P","P","P","P","R"]} {"id":"1180","title":"Decomposition of additive cellular automata","abstract":"Finite additive cellular automata with fixed and periodic boundary conditions are considered as endomorphisms over pattern spaces. A characterization of the nilpotent and regular parts of these endomorphisms is given in terms of their minimal polynomials. Generalized eigenspace decomposition is determined and relevant cyclic subspaces are described in terms of symmetries. 
As an application, the lengths and frequencies of limit cycles in the transition diagram of the automaton are calculated","tok_text":"decomposit of addit cellular automata \n finit addit cellular automata with fix and period boundari condit are consid as endomorph over pattern space . a character of the nilpot and regular part of these endomorph is given in term of their minim polynomi . gener eigenspac decomposit is determin and relev cyclic subspac are describ in term of symmetri . as an applic , the length and frequenc of limit cycl in the transit diagram of the automaton are calcul","ordered_present_kp":[20,414,120],"keyphrases":["cellular automata","endomorphisms","transition diagram","finite cellular automaton","computational complexity"],"prmu":["P","P","P","R","U"]} {"id":"947","title":"The fully entangled fraction as an inclusive measure of entanglement applications","abstract":"Characterizing entanglement in all but the simplest case of a two qubit pure state is a hard problem, even understanding the relevant experimental quantities that are related to entanglement is difficult. It may not be necessary, however, to quantify the entanglement of a state in order to quantify the quantum information processing significance of a state. It is known that the fully entangled fraction has a direct relationship to the fidelity of teleportation maximized under the actions of local unitary operations. In the case of two qubits we point out that the fully entangled fraction can also be related to the fidelities, maximized under the actions of local unitary operations, of other important quantum information tasks such as dense coding, entanglement swapping and quantum cryptography in such a way as to provide an inclusive measure of these entanglement applications. For two qubit systems the fully entangled fraction has a simple known closed-form expression and we establish lower and upper bounds of this quantity with the concurrence. 
This approach is readily extendable to more complicated systems","tok_text":"the fulli entangl fraction as an inclus measur of entangl applic \n character entangl in all but the simplest case of a two qubit pure state is a hard problem , even understand the relev experiment quantiti that are relat to entangl is difficult . it may not be necessari , howev , to quantifi the entangl of a state in order to quantifi the quantum inform process signific of a state . it is known that the fulli entangl fraction ha a direct relationship to the fidel of teleport maxim under the action of local unitari oper . in the case of two qubit we point out that the fulli entangl fraction can also be relat to the fidel , maxim under the action of local unitari oper , of other import quantum inform task such as dens code , entangl swap and quantum cryptographi in such a way as to provid an inclus measur of these entangl applic . for two qubit system the fulli entangl fraction ha a simpl known closed-form express and we establish lower and upper bound of thi quantiti with the concurr . thi approach is readili extend to more complic system","ordered_present_kp":[10,119,341,4,462,471,733,750],"keyphrases":["fully entangled fraction","entanglement","two qubit pure state","quantum information processing","fidelity","teleportation","entanglement swapping","quantum cryptography"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"141","title":"A high-resolution high-frequency monolithic top-shooting microinjector free of satellite drops - part I: concept, design, and model","abstract":"Introduces an innovative microinjector design, featuring a bubble valve, which entails superior droplet ejection characteristics and monolithic fabrication, which allows handling of a wide range of liquids. This new microinjector uses asymmetric bubbles to reduce crosstalk, increase frequency response and eliminate satellite droplets. 
During a firing, i.e., droplet ejection, the \"virtual valve\" closes, by growing a thermal bubble in the microchannel, to isolate the microchamber from the liquid supply and neighboring chambers. Between firings, however, the virtual valve opens, by collapsing the bubble, to reduce flow restriction for fast refilling of the microchamber. The use of bubble valves brings about fast and reliable device operation without imposing the significant complication fabrication of physical microvalves would call for. In addition, through a special heater configuration and chamber designs, bubbles surrounding the nozzle cut off the tail of the droplets being ejected and completely eliminate satellite droplets. A simple one-dimensional model of the operation of the microinjector is used to estimate the bubble formation and liquid refilling","tok_text":"a high-resolut high-frequ monolith top-shoot microinjector free of satellit drop - part i : concept , design , and model \n introduc an innov microinjector design , featur a bubbl valv , which entail superior droplet eject characterist and monolith fabric , which allow handl of a wide rang of liquid . thi new microinjector use asymmetr bubbl to reduc crosstalk , increas frequenc respons and elimin satellit droplet . dure a fire , i.e. , droplet eject , the \" virtual valv \" close , by grow a thermal bubbl in the microchannel , to isol the microchamb from the liquid suppli and neighbor chamber . between fire , howev , the virtual valv open , by collaps the bubbl , to reduc flow restrict for fast refil of the microchamb . the use of bubbl valv bring about fast and reliabl devic oper without impos the signific complic fabric of physic microvalv would call for . in addit , through a special heater configur and chamber design , bubbl surround the nozzl cut off the tail of the droplet be eject and complet elimin satellit droplet . 
a simpl one-dimension model of the oper of the microinjector is use to estim the bubbl format and liquid refil","ordered_present_kp":[26,173,208,328,352,1137,372,400,462,679,918],"keyphrases":["monolithic top-shooting microinjector","bubble valve","droplet ejection characteristics","asymmetric bubbles","crosstalk","frequency response","satellite droplets","virtual valve","flow restriction","chamber designs","liquid refilling","inkjet printing","thermal bubble jet"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","U","M"]} {"id":"902","title":"TCP explicit congestion notification over ATM-UBR: a simulation study","abstract":"The enhancement of transmission control protocol's (TCP's) congestion control mechanisms using explicit congestion notification (ECN) over asynchronous transfer mode (ATM) networks is overviewed. TCP's congestion control is enhanced so that congestion is indicated by not only packet losses as is currently the case but an agent implemented at the ATM network's edge as well. The novel idea uses EFCI (explicit forward congestion indication) bits (available in every ATM cell header) to generalize the ECN response to the UBR (unspecified bit rate) service, notify congestion, and adjust the credit-based window size of the TCR. The authors' simulation experiments show that TCP ECN achieves significantly lower cell loss, packet retransmissions, and buffer utilization, and exhibits better throughput than (non-ECN) TCP Reno","tok_text":"tcp explicit congest notif over atm-ubr : a simul studi \n the enhanc of transmiss control protocol 's ( tcp 's ) congest control mechan use explicit congest notif ( ecn ) over asynchron transfer mode ( atm ) network is overview . tcp 's congest control is enhanc so that congest is indic by not onli packet loss as is current the case but an agent implement at the atm network 's edg as well . 
the novel idea use efci ( explicit forward congest indic ) bit ( avail in everi atm cell header ) to gener the ecn respons to the ubr ( unspecifi bit rate ) servic , notifi congest , and adjust the credit-bas window size of the tcr . the author ' simul experi show that tcp ecn achiev significantli lower cell loss , packet retransmiss , and buffer util , and exhibit better throughput than ( non-ecn ) tcp reno","ordered_present_kp":[0,32,44,113,365,300,342,592,699,711,736,769],"keyphrases":["TCP explicit congestion notification","ATM-UBR","simulation","congestion control mechanisms","packet losses","agent","ATM networks","credit-based window size","cell loss","packet retransmissions","buffer utilization","throughput","explicit forward congestion indication bits","unspecified bit rate service"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R","R"]} {"id":"590","title":"Universal approximation by hierarchical fuzzy system with constraints on the fuzzy rule","abstract":"This paper presents a special hierarchical fuzzy system where the outputs of the previous layer are not used in the IF-parts, but used only in the THEN-parts of the fuzzy rules of the current layer. The proposed scheme can be shown to be a universal approximator to any continuous function on a compact set if complete fuzzy sets are used in the IF-parts of the fuzzy rules with singleton fuzzifier and center average defuzzifier. From the simulation of ball and beam control system, it is demonstrated that the proposed scheme approximates with good accuracy the model nonlinear controller with fewer fuzzy rules than the centralized fuzzy system and its control performance is comparable to that of the nonlinear controller","tok_text":"univers approxim by hierarch fuzzi system with constraint on the fuzzi rule \n thi paper present a special hierarch fuzzi system where the output of the previou layer are not use in the if-part , but use onli in the then-part of the fuzzi rule of the current layer . 
the propos scheme can be shown to be a univers approxim to ani continu function on a compact set if complet fuzzi set are use in the if-part of the fuzzi rule with singleton fuzzifi and center averag defuzzifi . from the simul of ball and beam control system , it is demonstr that the propos scheme approxim with good accuraci the model nonlinear control with fewer fuzzi rule than the central fuzzi system and it control perform is compar to that of the nonlinear control","ordered_present_kp":[20,65,0,329,496],"keyphrases":["universal approximator","hierarchical fuzzy system","fuzzy rules","continuous function","ball and beam control system","hierarchical fuzzy logic","Stone-Weierstrass theorem"],"prmu":["P","P","P","P","P","M","U"]} {"id":"1138","title":"Approximating martingales for variance reduction in Markov process simulation","abstract":"\"Knowledge of either analytical or numerical approximations should enable more efficient simulation estimators to be constructed.\" This principle seems intuitively plausible and certainly attractive, yet no completely satisfactory general methodology has been developed to exploit it. The authors present a new approach for obtaining variance reduction in Markov process simulation that is applicable to a vast array of different performance measures. The approach relies on the construction of a martingale that is then used as an internal control variate","tok_text":"approxim martingal for varianc reduct in markov process simul \n \" knowledg of either analyt or numer approxim should enabl more effici simul estim to be construct . \" thi principl seem intuit plausibl and certainli attract , yet no complet satisfactori gener methodolog ha been develop to exploit it . the author present a new approach for obtain varianc reduct in markov process simul that is applic to a vast array of differ perform measur . 
the approach reli on the construct of a martingal that is then use as an intern control variat","ordered_present_kp":[41,23,9,427,517],"keyphrases":["martingales","variance reduction","Markov process simulation","performance measures","internal control variate","approximating martingale-process method","complex stochastic processes","single-server queue"],"prmu":["P","P","P","P","P","M","M","U"]} {"id":"1281","title":"A notion of non-interference for timed automata","abstract":"The non-interference property of concurrent systems is a security property concerning the flow of information among different levels of security of the system. In this paper we introduce a notion of timed non-interference for real-time systems specified by timed automata. The notion is presented using an automata based approach and then it is characterized also by operations and equivalence between timed languages. The definition is applied to an example of a time-critical system modeling a simplified control of an airplane","tok_text":"a notion of non-interfer for time automata \n the non-interfer properti of concurr system is a secur properti concern the flow of inform among differ level of secur of the system . in thi paper we introduc a notion of time non-interfer for real-tim system specifi by time automata . the notion is present use an automata base approach and then it is character also by oper and equival between time languag . the definit is appli to an exampl of a time-crit system model a simplifi control of an airplan","ordered_present_kp":[29,74,94,239,446],"keyphrases":["timed automata","concurrent systems","security property","real-time systems","time-critical system","noninterference notion"],"prmu":["P","P","P","P","P","M"]} {"id":"691","title":"Robust output-feedback control for linear continuous uncertain state delayed systems with unknown time delay","abstract":"The state-delayed time often is unknown and independent of other variables in most real physical systems. 
A new stability criterion for uncertain systems with a state time-varying delay is proposed. Then, a robust observer-based control law based on this criterion is constructed via the sequential quadratic programming method. We also develop a separation property so that the state feedback control law and observer can be independently designed and maintain closed-loop system stability. An example illustrates the availability of the proposed design method","tok_text":"robust output-feedback control for linear continu uncertain state delay system with unknown time delay \n the state-delay time often is unknown and independ of other variabl in most real physic system . a new stabil criterion for uncertain system with a state time-vari delay is propos . then , a robust observer-bas control law base on thi criterion is construct via the sequenti quadrat program method . we also develop a separ properti so that the state feedback control law and observ can be independ design and maintain closed-loop system stabil . an exampl illustr the avail of the propos design method","ordered_present_kp":[7,229,60,92,253,303,371,450,524],"keyphrases":["output-feedback control","state delayed systems","time delay","uncertain systems","state time-varying delay","observer-based control law","sequential quadratic programming","state feedback control law","closed-loop system stability","robust control","linear continuous systems"],"prmu":["P","P","P","P","P","P","P","P","P","R","R"]} {"id":"1060","title":"Variety identification of wheat using mass spectrometry with neural networks and the influence of mass spectra processing prior to neural network analysis","abstract":"The performance of matrix-assisted laser desorption\/ionisation time-of-flight mass spectrometry with neural networks in wheat variety classification is further evaluated. 
Two principal issues were studied: (a) the number of varieties that could be classified correctly; and (b) various means of preprocessing mass spectrometric data. The number of wheat varieties tested was increased from 10 to 30. The main pre-processing method investigated was based on Gaussian smoothing of the spectra, but other methods based on normalisation procedures and multiplicative scatter correction of data were also used. With the final method, it was possible to classify 30 wheat varieties with 87% correctly classified mass spectra and a correlation coefficient of 0.90","tok_text":"varieti identif of wheat use mass spectrometri with neural network and the influenc of mass spectra process prior to neural network analysi \n the perform of matrix-assist laser desorpt \/ ionis time-of-flight mass spectrometri with neural network in wheat varieti classif is further evalu . two princip issu were studi : ( a ) the number of varieti that could be classifi correctli ; and ( b ) variou mean of preprocess mass spectrometr data . the number of wheat varieti test wa increas from 10 to 30 . the main pre-process method investig wa base on gaussian smooth of the spectra , but other method base on normalis procedur and multipl scatter correct of data were also use . 
with the final method , it wa possibl to classifi 30 wheat varieti with 87 % correctli classifi mass spectra and a correl coeffici of 0.90","ordered_present_kp":[157,249,419,551,609,631,756,794,0,87,117],"keyphrases":["variety identification","mass spectra processing","neural network analysis","matrix-assisted laser desorption\/ionisation time-of-flight mass spectrometry","wheat variety classification","mass spectrometric data","Gaussian smoothing","normalisation procedures","multiplicative scatter correction","correctly classified mass spectra","correlation coefficient","pre-processing- method"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","M"]} {"id":"1025","title":"Watermarking techniques for electronic delivery of remote sensing images","abstract":"Earth observation missions have recently attracted a growing interest, mainly due to the large number of possible applications capable of exploiting remotely sensed data and images. Along with the increase of market potential, the need arises for the protection of the image products. Such a need is a very crucial one, because the Internet and other public\/private networks have become preferred means of data exchange. A critical issue arising when dealing with digital image distribution is copyright protection. Such a problem has been largely addressed by resorting to watermarking technology. A question that obviously arises is whether the requirements imposed by remote sensing imagery are compatible with existing watermarking techniques. On the basis of these motivations, the contribution of this work is twofold: assessment of the requirements imposed by remote sensing applications on watermark-based copyright protection, and modification of two well-established digital watermarking techniques to meet such constraints. More specifically, the concept of near-lossless watermarking is introduced and two possible algorithms matching such a requirement are presented. 
Experimental results are shown to measure the impact of watermark introduction on a typical remote sensing application, i.e., unsupervised image classification","tok_text":"watermark techniqu for electron deliveri of remot sens imag \n earth observ mission have recent attract a grow interest , mainli due to the larg number of possibl applic capabl of exploit remot sens data and imag . along with the increas of market potenti , the need aris for the protect of the imag product . such a need is a veri crucial one , becaus the internet and other public \/ privat network have becom prefer mean of data exchang . a critic issu aris when deal with digit imag distribut is copyright protect . such a problem ha been larg address by resort to watermark technolog . a question that obvious aris is whether the requir impos by remot sens imageri are compat with exist watermark techniqu . on the basi of these motiv , the contribut of thi work is twofold : assess of the requir impos by remot sens applic on watermark-bas copyright protect , and modif of two well-establish digit watermark techniqu to meet such constraint . more specif , the concept of near-lossless watermark is introduc and two possibl algorithm match such a requir are present . experiment result are shown to measur the impact of watermark introduct on a typic remot sens applic , i.e. , unsupervis imag classif","ordered_present_kp":[44,23,0,62,498,896,976,474,1182],"keyphrases":["watermarking techniques","electronic delivery","remote sensing images","Earth observation missions","digital image distribution","copyright protection","digital watermarking","near-lossless watermarking","unsupervised image classification"],"prmu":["P","P","P","P","P","P","P","P","P"]} {"id":"1361","title":"Adaptive scheduling of batch servers in flow shops","abstract":"Batch servicing is a common way of benefiting from economies of scale in manufacturing operations. 
Good examples of production systems that allow for batch processing are ovens found in the aircraft industry and in semiconductor manufacturing. In this paper we study the issue of dynamic scheduling of such systems within the context of multi-stage flow shops. So far, a great deal of research has concentrated on the development of control strategies, which only address the batch stage. This paper proposes an integral scheduling approach that also includes succeeding stages. In this way, we aim for shop optimization, instead of optimizing performance for a single stage. Our so-called look-ahead strategy adapts its scheduling decision to shop status, which includes information on a limited number of near-future arrivals. In particular, we study a two-stage flow shop, in which the batch stage is succeeded by a serial stage. The serial stage may be realized by a single machine or by parallel machines. Through an extensive simulation study it is demonstrated how shop performance can be improved by the proposed strategy relative to existing strategies","tok_text":"adapt schedul of batch server in flow shop \n batch servic is a common way of benefit from economi of scale in manufactur oper . good exampl of product system that allow for batch process are oven found in the aircraft industri and in semiconductor manufactur . in thi paper we studi the issu of dynam schedul of such system within the context of multi-stag flow shop . so far , a great deal of research ha concentr on the develop of control strategi , which onli address the batch stage . thi paper propos an integr schedul approach that also includ succeed stage . in thi way , we aim for shop optim , instead of optim perform for a singl stage . our so-cal look-ahead strategi adapt it schedul decis to shop statu , which includ inform on a limit number of near-futur arriv . in particular , we studi a two-stag flow shop , in which the batch stage is succeed by a serial stage . 
the serial stage may be realiz by a singl machin or by parallel machin . through an extens simul studi it is demonstr how shop perform can be improv by the propos strategi rel to exist strategi","ordered_present_kp":[0,17,33,45,110,143,191,209,234,295,346,433,509,590,659,759,805,918,937,973],"keyphrases":["adaptive scheduling","batch servers","flow shops","batch servicing","manufacturing operations","production systems","ovens","aircraft industry","semiconductor manufacturing","dynamic scheduling","multi-stage flow shops","control strategies","integral scheduling approach","shop optimization","look-ahead strategy","near-future arrivals","two-stage flow shop","single machine","parallel machines","simulation study"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1324","title":"A look at MonacoProfiler 4","abstract":"The newest profiling program from Monaco Software adds some valuable features: support for up to 8-color printing, profiling for digital cameras, fine-tuning of black generation and tweaking of profile transforms. We tested its ease of use and a few of the advanced functions. In all, it's pretty good","tok_text":"a look at monacoprofil 4 \n the newest profil program from monaco softwar add some valuabl featur : support for up to 8-color print , profil for digit camera , fine-tun of black gener and tweak of profil transform . we test it eas of use and a few of the advanc function . in all , it 's pretti good","ordered_present_kp":[10],"keyphrases":["MonacoProfiler 4","color-correction","Pantone Hexachrome","commercial printers"],"prmu":["P","U","U","U"]} {"id":"771","title":"Pareto-optimal formulations for cost versus colorimetric accuracy trade-offs in printer color management","abstract":"Color management for the printing of digital images is a challenging task, due primarily to nonlinear ink-mixing behavior and the presence of redundant solutions for print devices with more than three inks. 
Algorithms for the conversion of image data to printer-specific format are typically designed to achieve a single predetermined rendering intent, such as colorimetric accuracy. We present two CIELAB to CMYK color conversion schemes based on a general Pareto-optimal formulation for printer color management. The schemes operate using a 149-color characterization data set selected to efficiently capture the entire CMYK gamut. The first scheme uses artificial neural networks as transfer functions between the CIELAB and CMYK spaces. The second scheme is based on a reformulation of tetrahedral interpolation as an optimization problem. Characterization data are divided into tetrahedra for the interpolation-based approach using the program Qhull, which removes the common restriction that characterization data be well organized. Both schemes offer user control over trade-off problems such as cost versus reproduction accuracy, allowing for user-specified print objectives and the use of constraints such as maximum allowable ink and maximum allowable AE*\/sub ab\/. A formulation for minimization of ink is shown to be particularly favorable, integrating both a clipping and gamut compression features into a single methodology","tok_text":"pareto-optim formul for cost versu colorimetr accuraci trade-off in printer color manag \n color manag for the print of digit imag is a challeng task , due primarili to nonlinear ink-mix behavior and the presenc of redund solut for print devic with more than three ink . algorithm for the convers of imag data to printer-specif format are typic design to achiev a singl predetermin render intent , such as colorimetr accuraci . we present two cielab to cmyk color convers scheme base on a gener pareto-optim formul for printer color manag . the scheme oper use a 149-color character data set select to effici captur the entir cmyk gamut . the first scheme use artifici neural network as transfer function between the cielab and cmyk space . 
the second scheme is base on a reformul of tetrahedr interpol as an optim problem . character data are divid into tetrahedra for the interpolation-bas approach use the program qhull , which remov the common restrict that character data be well organ . both scheme offer user control over trade-off problem such as cost versu reproduct accuraci , allow for user-specifi print object and the use of constraint such as maximum allow ink and maximum allow ae*\/sub ab\/. a formul for minim of ink is shown to be particularli favor , integr both a clip and gamut compress featur into a singl methodolog","ordered_present_kp":[68,168,214,0,24,442,659,686,783,7,854,873,1010,1054,1096,1156,1137,1290,1281,381],"keyphrases":["Pareto-optimal formulations","optimization","cost versus colorimetric accuracy trade-offs","printer color management","nonlinear ink-mixing behavior","redundant solutions","rendering intent","CIELAB to CMYK color conversion schemes","artificial neural networks","transfer functions","tetrahedral interpolation","tetrahedra","interpolation-based approach","user control","cost versus reproduction accuracy","user-specified print objectives","constraints","maximum allowable ink","clipping","gamut compression features","digital image printing","image data conversion","color characterization data set","Qhull program","MacBeth ColorChecker chart","grey component replacement"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R","R","U","U"]} {"id":"734","title":"Web services boost integration","abstract":"Microsoft and IBM have announced products to help their database software co-exist with competitors' offerings. The products use web services technology allowing users to improve integration between databases and application software from rival vendors","tok_text":"web servic boost integr \n microsoft and ibm have announc product to help their databas softwar co-exist with competitor ' offer . 
the product use web servic technolog allow user to improv integr between databas and applic softwar from rival vendor","ordered_present_kp":[146,26,40,79],"keyphrases":["Microsoft","IBM","database software","web services technology"],"prmu":["P","P","P","P"]} {"id":"1409","title":"North American carrier survey: simply the best","abstract":"Network Magazine carried out a North American carrier survey. Thousands of network engineers gave information on providers' strengths and weaknesses across seven services: private lines, frame relay, ATM, VPNs, dedicated Internet access, Ethernet services, and Web hosting. Respondents also ranked providers on their ability to perform in up to eight categories including customer service, reliability, and price. Users rated more than a dozen providers for each survey. Carriers needed to receive at least 30 votes for inclusion in the survey. Readers were asked to rate carriers on up to nine categories using a scale of 1 (unacceptable) to 5 (excellent). Not all categories are equally important. To try and get at these differences, Network Magazine asked readers to assign a weight to each category. The big winners were VPNs","tok_text":"north american carrier survey : simpli the best \n network magazin carri out a north american carrier survey . thousand of network engin gave inform on provid ' strength and weak across seven servic : privat line , frame relay , atm , vpn , dedic internet access , ethernet servic , and web host . respond also rank provid on their abil to perform in up to eight categori includ custom servic , reliabl , and price . user rate more than a dozen provid for each survey . carrier need to receiv at least 30 vote for inclus in the survey . reader were ask to rate carrier on up to nine categori use a scale of 1 ( unaccept ) to 5 ( excel ) . not all categori are equal import . to tri and get at these differ , network magazin ask reader to assign a weight to each categori . 
the big winner were vpn","ordered_present_kp":[0,200,214,228,234,240,264,286,378,394,408],"keyphrases":["North American carrier survey","private lines","frame relay","ATM","VPNs","dedicated Internet access","Ethernet services","Web hosting","customer service","reliability","price","service providers"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"1319","title":"Routing security in wireless ad hoc networks","abstract":"A mobile ad hoc network consists of a collection of wireless mobile nodes that are capable of communicating with each other without the use of a network infrastructure or any centralized administration. MANET is an emerging research area with practical applications. However, wireless MANET is particularly vulnerable due to its fundamental characteristics, such as open medium, dynamic topology, distributed cooperation, and constrained capability. Routing plays an important role in the security of the entire network. In general, routing security in wireless MANETs appears to be a problem that is not trivial to solve. In this article we study the routing security issues of MANETs, and analyze in detail one type of attack-the \"black hole\" problem-that can easily be employed against the MANETs. We also propose a solution for the black hole problem for ad hoc on-demand distance vector routing protocol","tok_text":"rout secur in wireless ad hoc network \n a mobil ad hoc network consist of a collect of wireless mobil node that are capabl of commun with each other without the use of a network infrastructur or ani central administr . manet is an emerg research area with practic applic . howev , wireless manet is particularli vulner due to it fundament characterist , such as open medium , dynam topolog , distribut cooper , and constrain capabl . rout play an import role in the secur of the entir network . in gener , rout secur in wireless manet appear to be a problem that is not trivial to solv . 
in thi articl we studi the rout secur issu of manet , and analyz in detail one type of attack-th \" black hole \" problem-that can easili be employ against the manet . we also propos a solut for the black hole problem for ad hoc on-demand distanc vector rout protocol","ordered_present_kp":[0,14,42,87,281,362,376,392,815],"keyphrases":["routing security","wireless ad hoc networks","mobile ad hoc network","wireless mobile nodes","wireless MANET","open medium","dynamic topology","distributed cooperation","on-demand distance vector routing protocol","satellite transmission","home wireless personal area networks"],"prmu":["P","P","P","P","P","P","P","P","P","U","M"]} {"id":"709","title":"Cooperative mutation based evolutionary programming for continuous function optimization","abstract":"An evolutionary programming (EP) algorithm adapting a new mutation operator is presented. Unlike most previous EPs, in which each individual is mutated on its own, each individual in the proposed algorithm is mutated in cooperation with the other individuals. This not only enhances convergence speed but also gives more chance to escape from local minima","tok_text":"cooper mutat base evolutionari program for continu function optim \n an evolutionari program ( ep ) algorithm adapt a new mutat oper is present . unlik most previou ep , in which each individu is mutat on it own , each individu in the propos algorithm is mutat in cooper with the other individu . thi not onli enhanc converg speed but also give more chanc to escap from local minima","ordered_present_kp":[0,43,316,369],"keyphrases":["cooperative mutation based evolutionary programming","continuous function optimization","convergence speed","local minima"],"prmu":["P","P","P","P"]} {"id":"822","title":"Reinventing broadband","abstract":"Many believe that broadband providers need to change their whole approach. The future, then, is in reinventing broadband. 
That means tiered pricing to make broadband more competitive with dial-up access and livelier, more distinct content: video on demand, MP3, and other features exclusive to the fat-pipe superhighway","tok_text":"reinvent broadband \n mani believ that broadband provid need to chang their whole approach . the futur , then , is in reinvent broadband . that mean tier price to make broadband more competit with dial-up access and liveli , more distinct content : video on demand , mp3 , and other featur exclus to the fat-pip superhighway","ordered_present_kp":[266,248,148,9],"keyphrases":["broadband","tiered pricing","video on demand","MP3","business plans"],"prmu":["P","P","P","P","U"]} {"id":"867","title":"Tracking control of the flexible slider-crank mechanism system under impact","abstract":"The variable structure control (VSC) and the stabilizer design by using the pole placement technique are applied to the tracking control of the flexible slider-crank mechanism under impact. The VSC strategy is employed to track the crank angular position and speed, while the stabilizer design is involved to suppress the flexible vibrations simultaneously. From the theoretical impact consideration, three approaches including the generalized momentum balance (GMB), the continuous force model (CFM), and the CFM associated with the effective mass compensation EMC are adopted, and are derived on the basis of the energy and impulse-momentum conservations. Simulation results are provided to demonstrate the performance of the motor-controller flexible slider-crank mechanism not only accomplishing good tracking trajectory of the crank angle, but also eliminating vibrations of the flexible connecting rod","tok_text":"track control of the flexibl slider-crank mechan system under impact \n the variabl structur control ( vsc ) and the stabil design by use the pole placement techniqu are appli to the track control of the flexibl slider-crank mechan under impact . 
the vsc strategi is employ to track the crank angular posit and speed , while the stabil design is involv to suppress the flexibl vibrat simultan . from the theoret impact consider , three approach includ the gener momentum balanc ( gmb ) , the continu forc model ( cfm ) , and the cfm associ with the effect mass compens emc are adopt , and are deriv on the basi of the energi and impulse-momentum conserv . simul result are provid to demonstr the perform of the motor-control flexibl slider-crank mechan not onli accomplish good track trajectori of the crank angl , but also elimin vibrat of the flexibl connect rod","ordered_present_kp":[0,21,62,75,116,286,368,455,491,548,777,844,141],"keyphrases":["tracking control","flexible slider-crank mechanism system","impact","variable structure control","stabilizer design","pole placement technique","crank angular position","flexible vibrations","generalized momentum balance","continuous force model","effective mass compensation","tracking trajectory","flexible connecting rod","conservation laws","multibody dynamics"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","M","U"]} {"id":"1434","title":"A simple etalon-stabilized visible laser diode","abstract":"Visible laser diodes (LDs) are inexpensively available with single-transverse-mode, single-longitudinal-mode operation with a coherence length in the metre range. With constant current bias and constant operating temperature, the optical output power and operating wavelength are stable. A simple and inexpensive way is developed to maintain a constant LD temperature as the temperature of the local environment varies, by monitoring the initially changing wavelength with an external etalon and using this information to apply a heating correction to the monitor photodiode commonly integral to the LD package. 
The fractional wavelength stability achieved is limited by the solid etalon to 7*10^-6 degrees C^-1","tok_text":"a simpl etalon-stabil visibl laser diod \n visibl laser diod ( ld ) are inexpens avail with single-transverse-mod , single-longitudinal-mod oper with a coher length in the metr rang . with constant current bia and constant oper temperatur , the optic output power and oper wavelength are stabl . a simpl and inexpens way is develop to maintain a constant ld temperatur as the temperatur of the local environ vari , by monitor the initi chang wavelength with an extern etalon and use thi inform to appli a heat correct to the monitor photodiod commonli integr to the ld packag . the fraction wavelength stabil achiev is limit by the solid etalon to 7 * 10 \/ sup -6\/ degre c \/ sup -1\/","ordered_present_kp":[22,188,213,504,524,581,91,115],"keyphrases":["visible laser diode","single-transverse-mode","single-longitudinal-mode","constant current bias","constant operating temperature","heating correction","monitor photodiode","fractional wavelength stability","etalon-stabilized laser diode","index-guided multi-quantum-well","closed-loop operation","feedback loop"],"prmu":["P","P","P","P","P","P","P","P","R","U","M","U"]} {"id":"1018","title":"Fabrication of polymeric microlens of hemispherical shape using micromolding","abstract":"Polymeric microlenses play an important role in reducing the size, weight, and cost of optical data storage and optical communication systems. We fabricate polymeric microlenses using the microcompression molding process. The design and fabrication procedures for mold insertion is simplified using silicon instead of metal. PMMA powder is used as the molding material. Governed by process parameters such as temperature and pressure histories, the micromolding process is controlled to minimize various defects that develop during the molding process. 
The radius of curvature and magnification ratio of fabricated microlens are measured as 150 mu m and over 3.0, respectively","tok_text":"fabric of polymer microlen of hemispher shape use micromold \n polymer microlens play an import role in reduc the size , weight , and cost of optic data storag and optic commun system . we fabric polymer microlens use the microcompress mold process . the design and fabric procedur for mold insert is simplifi use silicon instead of metal . pmma powder is use as the mold materi . govern by process paramet such as temperatur and pressur histori , the micromold process is control to minim variou defect that develop dure the mold process . the radiu of curvatur and magnif ratio of fabric microlen are measur as 150 mu m and over 3.0 , respect","ordered_present_kp":[50,113,120,133,141,163,62,221,265,285,313,340,366,390,414,429,451,566],"keyphrases":["micromolding","polymeric microlenses","size","weight","cost","optical data storage","optical communication systems","microcompression molding process","fabrication procedures","mold insertion","silicon","PMMA powder","molding material","process parameters","temperature","pressure","micromolding process","magnification ratio","polymeric microlens fabrication","hemispherical shape microlens","design procedures","300 micron"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R","U"]} {"id":"987","title":"Proof that the election problem belongs to NF-completeness problems in asynchronous distributed systems","abstract":"This paper is about the hardness of the election problem in asynchronous distributed systems in which processes can crash but links are reliable. The hardness of the problem is defined with respect to the difficulty to solve it despite failures. It is shown that problems encountered in the system are classified as three classes of problems: F (fault-tolerant), NF (Not fault-tolerant) and NFC (NF-completeness). 
Among those, the class NFC are the hardest problems to solve. In this paper, we prove that the Election problem is the most difficult problem which belongs to the class NFC","tok_text":"proof that the elect problem belong to nf-complet problem in asynchron distribut system \n thi paper is about the hard of the elect problem in asynchron distribut system in which process can crash but link are reliabl . the hard of the problem is defin with respect to the difficulti to solv it despit failur . it is shown that problem encount in the system are classifi as three class of problem : f ( fault-toler ) , nf ( not fault-toler ) and nfc ( nf-complet ) . among those , the class nfc are the hardest problem to solv . in thi paper , we prove that the elect problem is the most difficult problem which belong to the class nfc","ordered_present_kp":[15,39,61],"keyphrases":["election problem","NF-completeness problems","asynchronous distributed systems","distributed computing","leader election","failure detectors","fault-tolerant problems","not-fault-tolerant problems"],"prmu":["P","P","P","M","M","M","R","M"]} {"id":"550","title":"Market watch - air conditioning","abstract":"After a boom period in the late nineties, the air conditioning market finds itself in something of a lull at present, but manufacturers aren't panicking","tok_text":"market watch - air condit \n after a boom period in the late nineti , the air condit market find itself in someth of a lull at present , but manufactur are n't panick","ordered_present_kp":[15,0],"keyphrases":["market","air conditioning"],"prmu":["P","P"]} {"id":"1105","title":"Fuzzy business [Halden Reactor Project]","abstract":"The Halden Reactor Project has developed two systems to investigate how signal validation and thermal performance monitoring techniques can be improved. PEANO is an online calibration monitoring system that makes use of artificial intelligence techniques. 
The system has been tested in cooperation with EPRI and Edan Engineering, using real data from a US PWR plant. These tests showed that PEANO could reliably assess the performance of the process instrumentation at different plant conditions. Real cases of zero and span drifts were successfully detected by the system. TEMPO is a system for thermal performance monitoring and optimisation, which relies on plant-wide first principle models. The system has been installed on a Swedish BWR plant. Results obtained show an overall rms deviation from measured values of a few tenths of a percent, and giving goodness-of-fits in the order of 95%. The high accuracy demonstrated is a good basis for detecting possible faults and efficiency losses in steam turbine cycles","tok_text":"fuzzi busi [ halden reactor project ] \n the halden reactor project ha develop two system to investig how signal valid and thermal perform monitor techniqu can be improv . peano is an onlin calibr monitor system that make use of artifici intellig techniqu . the system ha been test in cooper with epri and edan engin , use real data from a us pwr plant . these test show that peano could reliabl assess the perform of the process instrument at differ plant condit . real case of zero and span drift were success detect by the system . tempo is a system for thermal perform monitor and optimis , which reli on plant-wid first principl model . the system ha been instal on a swedish bwr plant . result obtain show an overal rm deviat from measur valu of a few tenth of a percent , and give goodness-of-fit in the order of 95 % . 
the high accuraci demonstr is a good basi for detect possibl fault and effici loss in steam turbin cycl","ordered_present_kp":[13,171,189,228,342,534,122,680,912],"keyphrases":["Halden Reactor Project","thermal performance monitoring","PEANO","calibration","artificial intelligence","PWR","TEMPO","BWR","steam turbine cycles","fuzzy logic","steam generators","feedwater flow"],"prmu":["P","P","P","P","P","P","P","P","P","M","M","U"]} {"id":"1140","title":"Computer aided classification of masses in ultrasonic mammography","abstract":"Frequency compounding was recently investigated for computer aided classification of masses in ultrasonic B-mode images as benign or malignant. The classification was performed using the normalized parameters of the Nakagami distribution at a single region of interest at the site of the mass. A combination of normalized Nakagami parameters from two different images of a mass was undertaken to improve the performance of classification. Receiver operating characteristic (ROC) analysis showed that such an approach resulted in an area of 0.83 under the ROC curve. The aim of the work described in this paper is to see whether a feature describing the characteristic of the boundary can be extracted and combined with the Nakagami parameter to further improve the performance of classification. The combination of the features has been performed using a weighted summation. Results indicate a 10% improvement in specificity at a sensitivity of 96% after combining the information at the site and at the boundary. Moreover, the technique requires minimal clinical intervention and has a performance that reaches that of the trained radiologist. It is hence suggested that this technique may be utilized in practice to characterize breast masses","tok_text":"comput aid classif of mass in ultrason mammographi \n frequenc compound wa recent investig for comput aid classif of mass in ultrason b-mode imag as benign or malign . 
the classif wa perform use the normal paramet of the nakagami distribut at a singl region of interest at the site of the mass . a combin of normal nakagami paramet from two differ imag of a mass wa undertaken to improv the perform of classif . receiv oper characterist ( roc ) analysi show that such an approach result in an area of 0.83 under the roc curv . the aim of the work describ in thi paper is to see whether a featur describ the characterist of the boundari can be extract and combin with the nakagami paramet to further improv the perform of classif . the combin of the featur ha been perform use a weight summat . result indic a 10 % improv in specif at a sensit of 96 % after combin the inform at the site and at the boundari . moreov , the techniqu requir minim clinic intervent and ha a perform that reach that of the train radiologist . it is henc suggest that thi techniqu may be util in practic to character breast mass","ordered_present_kp":[30,1093,0,53,124,148,158,198,220,244,307,411,515,777,823,835,937],"keyphrases":["computer aided classification","ultrasonic mammography","frequency compounding","ultrasonic B-mode images","benign","malignant","normalized parameters","Nakagami distribution","single region of interest","normalized Nakagami parameters","receiver operating characteristic","ROC curve","weighted summation","specificity","sensitivity","minimal clinical intervention","breast masses"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"96","title":"OMS battle heating up as Chicago Equity ousts LongView for Macgregor","abstract":"Chicago Equity Partners LLC has gone into full production with Macgregor's Financial Trading Platform. This marks a concentrated effort to achieve straight-through processing","tok_text":"om battl heat up as chicago equiti oust longview for macgregor \n chicago equiti partner llc ha gone into full product with macgregor 's financi trade platform . 
thi mark a concentr effort to achiev straight-through process","ordered_present_kp":[65,53,136,198,40],"keyphrases":["LongView","Macgregor","Chicago Equity Partners","Financial Trading Platform","straight-through processing"],"prmu":["P","P","P","P","P"]} {"id":"614","title":"An on-line distributed intelligent fault section estimation system for large-scale power networks","abstract":"In this paper, a novel distributed intelligent system is suggested for on-line fault section estimation (FSE) of large-scale power networks. As the first step, a multi-way graph partitioning method based on weighted minimum degree reordering is proposed for effectively partitioning the original large-scale power network into the desired number of connected sub-networks with quasi-balanced FSE burdens and minimum frontier elements. After partitioning, a distributed intelligent system based on radial basis function neural network (RBF NN) and companion fuzzy system is suggested for FSE. The relevant theoretical analysis and procedure are presented in the paper. The proposed distributed intelligent FSE method has been implemented with sparse storage technique and tested on the IEEE 14, 30 and 118-bus systems, respectively. Computer simulation results show that the proposed FSE method works successfully for large-scale power networks","tok_text":"an on-lin distribut intellig fault section estim system for large-scal power network \n in thi paper , a novel distribut intellig system is suggest for on-lin fault section estim ( fse ) of large-scal power network . as the first step , a multi-way graph partit method base on weight minimum degre reorder is propos for effect partit the origin large-scal power network into the desir number of connect sub-network with quasi-balanc fse burden and minimum frontier element . after partit , a distribut intellig system base on radial basi function neural network ( rbf nn ) and companion fuzzi system is suggest for fse . 
the relev theoret analysi and procedur are present in the paper . the propos distribut intellig fse method ha been implement with spars storag techniqu and test on the ieee 14 , 30 and 118-bu system , respect . comput simul result show that the propos fse method work success for large-scal power network","ordered_present_kp":[3,60,151,238,276,394,419,447,110,525,586,750,831],"keyphrases":["on-line distributed intelligent fault section estimation system","large-scale power networks","distributed intelligent system","on-line fault section estimation","multi-way graph partitioning method based","weighted minimum degree reordering","connected sub-networks","quasi-balanced FSE burdens","minimum frontier elements","radial basis function neural network","fuzzy system","sparse storage technique","computer simulation","IEEE 14-bus systems","IEEE 30-bus systems","IEEE 118-bus systems"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","M","M","R"]} {"id":"651","title":"Application-layer multicasting with Delaunay triangulation overlays","abstract":"Application-layer multicast supports group applications without the need for a network-layer multicast protocol. Here, applications arrange themselves in a logical overlay network and transfer data within the overlay. We present an application-layer multicast solution that uses a Delaunay triangulation as an overlay network topology. An advantage of using a Delaunay triangulation is that it allows each application to locally derive next-hop routing information without requiring a routing protocol in the overlay. A disadvantage of using a Delaunay triangulation is that the mapping of the overlay to the network topology at the network and data link layer may be suboptimal. We present a protocol, called Delaunay triangulation (DT protocol), which constructs Delaunay triangulation overlay networks. 
We present measurement experiments of the DT protocol for overlay networks with up to 10 000 members, that are running on a local PC cluster with 100 Linux PCs. The results show that the protocol stabilizes quickly, e.g., an overlay network with 10 000 nodes can be built in just over 30 s. The traffic measurements indicate that the average overhead of a node is only a few kilobits per second if the overlay network is in a steady state. Results of throughput experiments of multicast transmissions (using TCP unicast connections between neighbors in the overlay network) show an achievable throughput of approximately 15 Mb\/s in an overlay with 100 nodes and 2 Mb\/s in an overlay with 1000 nodes","tok_text":"application-lay multicast with delaunay triangul overlay \n application-lay multicast support group applic without the need for a network-lay multicast protocol . here , applic arrang themselv in a logic overlay network and transfer data within the overlay . we present an application-lay multicast solut that use a delaunay triangul as an overlay network topolog . an advantag of use a delaunay triangul is that it allow each applic to local deriv next-hop rout inform without requir a rout protocol in the overlay . a disadvantag of use a delaunay triangul is that the map of the overlay to the network topolog at the network and data link layer may be suboptim . we present a protocol , call delaunay triangul ( dt protocol ) , which construct delaunay triangul overlay network . we present measur experi of the dt protocol for overlay network with up to 10 000 member , that are run on a local pc cluster with 100 linux pc . the result show that the protocol stabil quickli , e.g. , an overlay network with 10 000 node can be built in just over 30 s. the traffic measur indic that the averag overhead of a node is onli a few kilobit per second if the overlay network is in a steadi state . 
result of throughput experi of multicast transmiss ( use tcp unicast connect between neighbor in the overlay network ) show an achiev throughput of approxim 15 mb \/ s in an overlay with 100 node and 2 mb \/ s in an overlay with 1000 node","ordered_present_kp":[0,31,93,129,197,793,203,891,917,339,448,631,714,1058,1088,1203,1224,1250],"keyphrases":["application-layer multicasting","Delaunay triangulation overlays","group applications","network-layer multicast protocol","logical overlay network","overlay networks","overlay network topology","next-hop routing information","data link layer","DT protocol","measurement experiments","local PC cluster","Linux PC","traffic measurements","average overhead","throughput experiments","multicast transmissions","TCP unicast connections","data transfer","Delaunay triangulation protocol","network nodes","15 Mbit\/s","2 Mbit\/s"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R","M","M"]} {"id":"1204","title":"Design and prototype of a performance tool interface for OpenMP","abstract":"This paper proposes a performance tools interface for OpenMP, similar in spirit to the MPI profiling interface in its intent to define a clear and portable API that makes OpenMP execution events visible to runtime performance tools. We present our design using a source-level instrumentation approach based on OpenMP directive rewriting. Rules to instrument each directive and their combination are applied to generate calls to the interface consistent with directive semantics and to pass context information (e.g., source code locations) in a portable and efficient way. Our proposed OpenMP performance API further allows user functions and arbitrary code regions to be marked and performance measurement to be controlled using new OpenMP directives. 
To prototype the proposed OpenMP performance interface, we have developed compatible performance libraries for the EXPERT automatic event trace analyzer [17, 18] and the TAU performance analysis framework [13]. The directive instrumentation transformations we define are implemented in a source-to-source translation tool called OPARI. Application examples are presented for both EXPERT and TAU to show the OpenMP performance interface and OPARI instrumentation tool in operation. When used together with the MPI profiling interface (as the examples also demonstrate), our proposed approach provides a portable and robust solution to performance analysis of OpenMP and mixed-mode (OpenMP + MPI) applications","tok_text":"design and prototyp of a perform tool interfac for openmp \n thi paper propos a perform tool interfac for openmp , similar in spirit to the mpi profil interfac in it intent to defin a clear and portabl api that make openmp execut event visibl to runtim perform tool . we present our design use a source-level instrument approach base on openmp direct rewrit . rule to instrument each direct and their combin are appli to gener call to the interfac consist with direct semant and to pass context inform ( e.g. , sourc code locat ) in a portabl and effici way . our propos openmp perform api further allow user function and arbitrari code region to be mark and perform measur to be control use new openmp direct . to prototyp the propos openmp perform interfac , we have develop compat perform librari for the expert automat event trace analyz [ 17 , 18 ] and the tau perform analysi framework [ 13 ] . the direct instrument transform we defin are implement in a source-to-sourc translat tool call opari . applic exampl are present for both expert and tau to show the openmp perform interfac and opari instrument tool in oper . 
when use togeth with the mpi profil interfac ( as the exampl also demonstr ) , our propos approach provid a portabl and robust solut to perform analysi of openmp and mixed-mod ( openmp + mpi ) applic","ordered_present_kp":[25,139,201,295,336,460,621,783,807,861,960,995],"keyphrases":["performance tool interface","MPI profiling interface","API","source-level instrumentation approach","OpenMP directive rewriting","directive semantics","arbitrary code regions","performance libraries","EXPERT automatic event trace analyzer","TAU performance analysis framework","source-to-source translation tool","OPARI","parallel programming"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","U"]} {"id":"1241","title":"Code generator for the HPF Library and Fortran 95 transformational functions","abstract":"One of the language features of the core language of HPF 2.0 (High Performance Fortran) is the HPF Library. The HPF Library consists of 55 generic functions. The implementation of this library presents the challenge that all data types, data kinds, array ranks and input distributions need to be supported. For instance, more than 2 billion separate functions are required to support COPY-SCATTER fully. The efficient support of these billions of specific functions is one of the outstanding problems of HPF. We have solved this problem by developing a library generator which utilizes the mechanism of parameterized templates. This mechanism allows the procedures to be instantiated at compile time for arguments with a specific type, kind, rank and distribution over a specific processor array. We describe the algorithms used in the different library functions. The implementation gives the ease of generating a large number of library routines from a single template. The templates can be extended with special code for specific combinations of the input arguments. 
We describe in detail the implementation and performance of the matrix multiplication template for the Fujitsu VPP5000 platform","tok_text":"code gener for the hpf librari and fortran 95 transform function \n one of the languag featur of the core languag of hpf 2.0 ( high perform fortran ) is the hpf librari . the hpf librari consist of 55 gener function . the implement of thi librari present the challeng that all data type , data kind , array rank and input distribut need to be support . for instanc , more than 2 billion separ function are requir to support copy-scatt fulli . the effici support of these billion of specif function is one of the outstand problem of hpf . we have solv thi problem by develop a librari gener which util the mechan of parameter templat . thi mechan allow the procedur to be instanti at compil time for argument with a specif type , kind , rank and distribut over a specif processor array . we describ the algorithm use in the differ librari function . the implement give the eas of gener a larg number of librari routin from a singl templat . the templat can be extend with special code for specif combin of the input argument . 
we describ in detail the implement and perform of the matrix multipl templat for the fujitsu vpp5000 platform","ordered_present_kp":[19,126,19,200,276,575,829,0,614,1079],"keyphrases":["code generation","HPF","HPF Library","High Performance Fortran","generic functions","data types","library generator","parameterized templates","library functions","matrix multiplication","parallel computing","parallel languages"],"prmu":["P","P","P","P","P","P","P","P","P","P","U","M"]} {"id":"139","title":"Equilibrium swelling and kinetics of pH-responsive hydrogels: models, experiments, and simulations","abstract":"The widespread application of ionic hydrogels in a number of applications like control of microfluidic flow, development of muscle-like actuators, filtration\/separation and drug delivery makes it important to properly understand these materials. Understanding hydrogel properties is also important from the standpoint of their similarity to many biological tissues. Typically, gel size is sensitive to outer solution pH and salt concentration. In this paper, we develop models to predict the swelling\/deswelling of hydrogels in buffered pH solutions. An equilibrium model has been developed to predict the degree of swelling of the hydrogel at a given pH and salt concentration in the solution. A kinetic model has been developed to predict the rate of swelling of the hydrogel when the solution pH is changed. Experiments are performed to characterize the mechanical properties of the hydrogel in different pH solutions. The degree of swelling as well as the rate of swelling of the hydrogel are also studied through experiments. 
The simulations are compared with experimental results and the models are found to predict the swelling\/deswelling processes accurately","tok_text":"equilibrium swell and kinet of ph-respons hydrogel : model , experi , and simul \n the widespread applic of ionic hydrogel in a number of applic like control of microfluid flow , develop of muscle-lik actuat , filtrat \/ separ and drug deliveri make it import to properli understand these materi . understand hydrogel properti is also import from the standpoint of their similar to mani biolog tissu . typic , gel size is sensit to outer solut ph and salt concentr . in thi paper , we develop model to predict the swell \/ deswel of hydrogel in buffer ph solut . an equilibrium model ha been develop to predict the degre of swell of the hydrogel at a given ph and salt concentr in the solut . a kinet model ha been develop to predict the rate of swell of the hydrogel when the solut ph is chang . experi are perform to character the mechan properti of the hydrogel in differ ph solut . the degre of swell as well as the rate of swell of the hydrogel are also studi through experi . the simul are compar with experiment result and the model are found to predict the swell \/ deswel process accur","ordered_present_kp":[31,107,160,189,209,229,408,512,542,563,830],"keyphrases":["pH-responsive hydrogels","ionic hydrogels","microfluidic flow","muscle-like actuators","filtration\/separation","drug delivery","gel size","swelling\/deswelling","buffered pH solutions","equilibrium model","mechanical properties"],"prmu":["P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1365","title":"Deadlock-free scheduling in flexible manufacturing systems using Petri nets","abstract":"This paper addresses the deadlock-free scheduling problem in Flexible Manufacturing Systems. An efficient deadlock-free scheduling algorithm was developed, using timed Petri nets, for a class of FMSs called Systems of Sequential Systems with Shared Resources (S\/sup 4\/ R). 
The algorithm generates a partial reachability graph to find the optimal or near-optimal deadlock-free schedule in terms of the firing sequence of the transitions of the Petri net model. The objective is to minimize the mean flow time (MFT). An efficient truncation technique, based on the siphon concept, has been developed and used to generate the minimum necessary portion of the reachability graph to be searched. It has been shown experimentally that the developed siphon truncation technique enhances the ability to develop deadlock-free schedules of systems with a high number of deadlocks, which cannot be achieved using standard Petri net scheduling approaches. It may be necessary, in some cases, to relax the optimality condition for large FMSs in order to make the search effort reasonable. Hence, a User Control Factor (UCF) was defined and used in the scheduling algorithm. The objective of using the UCF is to achieve an acceptable trade-off between the solution quality and the search effort. Its effect on the MFT and the CPU time has been investigated. Randomly generated examples are used for illustration and comparison. Although the effect of UCF did not affect the mean flow time, it was shown that increasing it reduces the search effort (CPU time) significantly","tok_text":"deadlock-fre schedul in flexibl manufactur system use petri net \n thi paper address the deadlock-fre schedul problem in flexibl manufactur system . an effici deadlock-fre schedul algorithm wa develop , use time petri net , for a class of fmss call system of sequenti system with share resourc ( s \/ sup 4\/ r ) . the algorithm gener a partial reachabl graph to find the optim or near-optim deadlock-fre schedul in term of the fire sequenc of the transit of the petri net model . the object is to minim the mean flow time ( mft ) . an effici truncat techniqu , base on the siphon concept , ha been develop and use to gener the minimum necessari portion of the reachabl graph to be search . 
it ha been shown experiment that the develop siphon truncat techniqu enhanc the abil to develop deadlock-fre schedul of system with a high number of deadlock , which can not be achiev use standard petri net schedul approach . it may be necessari , in some case , to relax the optim condit for larg fmss in order to make the search effort reason . henc , a user control factor ( ucf ) wa defin and use in the schedul algorithm . the object of use the ucf is to achiev an accept trade-off between the solut qualiti and the search effort . it effect on the mft and the cpu time ha been investig . randomli gener exampl are use for illustr and comparison . although the effect of ucf did not affect the mean flow time , it wa shown that increas it reduc the search effort ( cpu time ) significantli","ordered_present_kp":[24,0,54,248,334,378,733,1044,1254,1282],"keyphrases":["deadlock-free scheduling","flexible manufacturing systems","Petri nets","systems of sequential systems with shared resources","partial reachability graph","near-optimal deadlock-free schedule","siphon truncation technique","user control factor","CPU time","randomly generated examples","optimal deadlock-free schedule","Petri net model transitions firing sequence","mean flow time minimization","optimality condition relaxation"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","R","R","R"]} {"id":"1320","title":"Securing the Internet routing infrastructure","abstract":"The unprecedented growth of the Internet over the last years, and the expectation of an even faster increase in the numbers of users and networked systems, resulted in the Internet assuming its position as a mass communication medium. At the same time, the emergence of an increasingly large number of application areas and the evolution of the networking technology suggest that in the near future the Internet may become the single integrated communication infrastructure. 
However, as the dependence on the networking infrastructure grows, its security becomes a major concern, in light of the increased attempt to compromise the infrastructure. In particular, the routing operation is a highly visible target that must be shielded against a wide range of attacks. The injection of false routing information can easily degrade network performance, or even cause denial of service for a large number of hosts and networks over a long period of time. Different approaches have been proposed to secure the routing protocols, with a variety of countermeasures, which, nonetheless, have not eradicated the vulnerability of the routing infrastructure. In this article, we survey the up-to-date secure routing schemes. that appeared over the last few years. Our critical point of view and thorough review of the literature are an attempt to identify directions for future research on an indeed difficult and still largely open problem","tok_text":"secur the internet rout infrastructur \n the unpreced growth of the internet over the last year , and the expect of an even faster increas in the number of user and network system , result in the internet assum it posit as a mass commun medium . at the same time , the emerg of an increasingli larg number of applic area and the evolut of the network technolog suggest that in the near futur the internet may becom the singl integr commun infrastructur . howev , as the depend on the network infrastructur grow , it secur becom a major concern , in light of the increas attempt to compromis the infrastructur . in particular , the rout oper is a highli visibl target that must be shield against a wide rang of attack . the inject of fals rout inform can easili degrad network perform , or even caus denial of servic for a larg number of host and network over a long period of time . 
differ approach have been propos to secur the rout protocol , with a varieti of countermeasur , which , nonetheless , have not erad the vulner of the rout infrastructur . in thi articl , we survey the up-to-d secur rout scheme . that appear over the last few year . our critic point of view and thorough review of the literatur are an attempt to identifi direct for futur research on an inde difficult and still larg open problem","ordered_present_kp":[164,342,424,483,732,767,928,962,19,1091,1254],"keyphrases":["routing infrastructure","networked systems","networking technology","integrated communication infrastructure","networking infrastructure","false routing information","network performance","routing protocols","countermeasures","secure routing schemes","research","Internet routing infrastructure security","preventive security mechanisms","link state protocols"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R","M","M"]} {"id":"775","title":"Disability-related special libraries","abstract":"One of the ways that the federal government works to improve services to people with disabilities is to fund disability-related information centers and clearinghouses that provide information resources and referrals to disabled individuals, their family members, service providers, and the general public. The Teaching Research Division of Western Oregon University operates two federally funded information centers for people with disabilities: OBIRN (the Oregon Brain Injury Resource Network) and DB-LINK (the National Information Clearinghouse on Children who are Deaf-Blind). Both have developed in-depth library collections and services in addition to typical clearinghouse services. The authors describe how OBIRN and DB-LINK were designed and developed, and how they are currently structured and maintained. 
Both information centers use many of the same strategies and tools in day-to-day operations, but differ in a number of ways, including materials and clientele","tok_text":"disability-rel special librari \n one of the way that the feder govern work to improv servic to peopl with disabl is to fund disability-rel inform center and clearinghous that provid inform resourc and referr to disabl individu , their famili member , servic provid , and the gener public . the teach research divis of western oregon univers oper two feder fund inform center for peopl with disabl : obirn ( the oregon brain injuri resourc network ) and db-link ( the nation inform clearinghous on children who are deaf-blind ) . both have develop in-depth librari collect and servic in addit to typic clearinghous servic . the author describ how obirn and db-link were design and develop , and how they are current structur and maintain . both inform center use mani of the same strategi and tool in day-to-day oper , but differ in a number of way , includ materi and clientel","ordered_present_kp":[0,57,124,182,318,399,411,453,467,556],"keyphrases":["disability-related special libraries","federal government","disability-related information centers","information resources","Western Oregon University","OBIRN","Oregon Brain Injury Resource Network","DB-LINK","National Information Clearinghouse on Children who are Deaf-Blind","library collections","disability-related clearinghouses","information referrals"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","R"]} {"id":"730","title":"Multi-hour design of survivable classical IP networks","abstract":"Most of Internet intra-domain routing protocols (OSPF, RIP, and IS-IS) are based on shortest path routing. The path length is defined as the sum of metrics associated with the path links. These metrics are often managed by the network administrator. 
In this context, the design of an Internet backbone network consists in dimensioning the network (routers and transmission links) and establishing the metric. Many requirements have to be satisfied. First, Internet traffic is not static as significant variations can be observed during the day. Second, many failures can occur (cable cuts, hardware failures, software failures, etc.). We present algorithms (meta-heuristics and greedy heuristic) to design Internet backbone networks, taking into account the multi-hour behaviour of traffic and some survivability requirements. Many multi-hour and protection strategies are studied and numerically compared. Our algorithms can be extended to integrate other quality of service constraints","tok_text":"multi-hour design of surviv classic ip network \n most of internet intra-domain rout protocol ( ospf , rip , and is-i ) are base on shortest path rout . the path length is defin as the sum of metric associ with the path link . these metric are often manag by the network administr . in thi context , the design of an internet backbon network consist in dimens the network ( router and transmiss link ) and establish the metric . mani requir have to be satisfi . first , internet traffic is not static as signific variat can be observ dure the day . second , mani failur can occur ( cabl cut , hardwar failur , softwar failur , etc . ) . we present algorithm ( meta-heurist and greedi heurist ) to design internet backbon network , take into account the multi-hour behaviour of traffic and some surviv requir . mani multi-hour and protect strategi are studi and numer compar . 
our algorithm can be extend to integr other qualiti of servic constraint","ordered_present_kp":[0,21,57,95,102,112,131,156,214,262,316,469,384,793,919],"keyphrases":["multi-hour design","survivable classical IP networks","Internet intra-domain routing protocols","OSPF","RIP","IS-IS","shortest path routing","path length","path links","network administrator","Internet backbone network","transmission links","Internet traffic","survivability requirements","quality of service constraints","network dimensioning","network routers","network failures","meta-heuristics algorithm","greedy heuristic algorithm","network protection","QoS constraints"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R","R","R","R","M"]} {"id":"1448","title":"Implementing equals for mixed-type comparison","abstract":"The idea of comparing objects of different types is not entirely off base, in particular for classes from the same class hierarchy. After all, objects from the same class hierarchy (and by class hierarchy we mean all classes derived from a common superclass other than Object) have something in common, namely at least the superclass part. As we demonstrated in a previous paper (2002), providing a correct implementation of a mixed-type comparison is a non-trivial task. In this article, we will show one way of implementing a mixed-type comparison of objects from the same class hierarchy that meets the requirements of the equals contract","tok_text":"implement equal for mixed-typ comparison \n the idea of compar object of differ type is not entir off base , in particular for class from the same class hierarchi . after all , object from the same class hierarchi ( and by class hierarchi we mean all class deriv from a common superclass other than object ) have someth in common , name at least the superclass part . as we demonstr in a previou paper ( 2002 ) , provid a correct implement of a mixed-typ comparison is a non-trivi task . 
in thi articl , we will show one way of implement a mixed-typ comparison of object from the same class hierarchi that meet the requir of the equal contract","ordered_present_kp":[628,20,276],"keyphrases":["mixed-type comparison","superclass","equals contract","Java","transitivity requirement"],"prmu":["P","P","P","U","M"]} {"id":"1099","title":"WebCAD: A computer aided design tool constrained with explicit 'design for manufacturability' rules for computer numerical control milling","abstract":"A key element in the overall efficiency of a manufacturing enterprise is the compatibility between the features that have been created in a newly designed part, and the capabilities of the downstream manufacturing processes. With this in mind, a process-aware computer aided design (CAD) system called WebCAD has been developed. The system restricts the freedom of the designer in such a way that the designed parts can be manufactured on a three-axis computer numerical control milling machine. This paper discusses the vision of WebCAD and explains the rationale for its development in comparison with commercial CAD\/CAM (computer aided design\/manufacture) systems. The paper then goes on to describe the implementation issues that enforce the manufacturability rules. Finally, certain design tools are described that aid a user during the design process. Some examples are given of the parts designed and manufactured with WebCAD","tok_text":"webcad : a comput aid design tool constrain with explicit ' design for manufactur ' rule for comput numer control mill \n a key element in the overal effici of a manufactur enterpris is the compat between the featur that have been creat in a newli design part , and the capabl of the downstream manufactur process . with thi in mind , a process-awar comput aid design ( cad ) system call webcad ha been develop . 
the system restrict the freedom of the design in such a way that the design part can be manufactur on a three-axi comput numer control mill machin . thi paper discuss the vision of webcad and explain the rational for it develop in comparison with commerci cad \/ cam ( comput aid design \/ manufactur ) system . the paper then goe on to describ the implement issu that enforc the manufactur rule . final , certain design tool are describ that aid a user dure the design process . some exampl are given of the part design and manufactur with webcad","ordered_present_kp":[0,11,93,790,22],"keyphrases":["WebCAD","computer aided design tool","design tools","computer numerical control milling","manufacturability rules","design for manufacturability rules","manufacturing enterprise efficiency","process-aware CAD system","three-axis CNC milling machine","CAD\/CAM systems","Internet-based CAD\/CAM"],"prmu":["P","P","P","P","P","R","R","R","M","R","M"]} {"id":"1064","title":"Quantum-controlled measurement device for quantum-state discrimination","abstract":"We propose a \"programmable\" quantum device that is able to perform a specific generalized measurement from a certain set of measurements depending on a quantum state of a \"program register.\" In particular, we study a situation when the programmable measurement device serves for the unambiguous discrimination between nonorthogonal states. The particular pair of states that can be unambiguously discriminated is specified by the state of a program qubit. The probability of successful discrimination is not optimal for all admissible pairs. However, for some subsets it can be very close to the optimal value","tok_text":"quantum-control measur devic for quantum-st discrimin \n we propos a \" programm \" quantum devic that is abl to perform a specif gener measur from a certain set of measur depend on a quantum state of a \" program regist . 
\" in particular , we studi a situat when the programm measur devic serv for the unambigu discrimin between nonorthogon state . the particular pair of state that can be unambigu discrimin is specifi by the state of a program qubit . the probabl of success discrimin is not optim for all admiss pair . howev , for some subset it can be veri close to the optim valu","ordered_present_kp":[0,33,181,202,326,435],"keyphrases":["quantum-controlled measurement device","quantum-state discrimination","quantum state","program register","nonorthogonal states","program qubit","programmable quantum device"],"prmu":["P","P","P","P","P","P","R"]} {"id":"1021","title":"Error-probability analysis of MIL-STD-1773 optical fiber data buses","abstract":"We have analyzed the error probabilities of MIL-STD-1773 optical fiber data buses with three modulation schemes, namely, original Manchester II bi-phase coding, PTMBC, and EMBC-BSF. Using these derived expressions of error probabilities, we can also compare the receiver sensitivities of such optical fiber data buses","tok_text":"error-prob analysi of mil-std-1773 optic fiber data buse \n we have analyz the error probabl of mil-std-1773 optic fiber data buse with three modul scheme , name , origin manchest ii bi-phas code , ptmbc , and embc-bsf . use these deriv express of error probabl , we can also compar the receiv sensit of such optic fiber data buse","ordered_present_kp":[78,35,141,286],"keyphrases":["optical fiber data buses","error probabilities","modulation schemes","receiver sensitivities","Manchester bi-phase coding"],"prmu":["P","P","P","P","R"]} {"id":"788","title":"Rise of the supercompany [CRM]","abstract":"All the thoughts, conversations and notes of employees help the firm create a wider picture of business. Customer relationship management (CRM) feeds on data, and it is hungry","tok_text":"rise of the supercompani [ crm ] \n all the thought , convers and note of employe help the firm creat a wider pictur of busi . 
custom relationship manag ( crm ) feed on data , and it is hungri","ordered_present_kp":[126],"keyphrases":["customer relationship management","central data repository","database","staff trained"],"prmu":["P","M","U","U"]} {"id":"1398","title":"Swamped by data [storage]","abstract":"While the cost of storage has plummeted, the demand continued to climb and there are plenty of players out there offering solutions to a company's burgeoning storage needs","tok_text":"swamp by data [ storag ] \n while the cost of storag ha plummet , the demand continu to climb and there are plenti of player out there offer solut to a compani 's burgeon storag need","ordered_present_kp":[37],"keyphrases":["cost of storage","IT personnel","resource management","disk capacity management","disk optimisation","file system automation","storage virtualisation","storage area networks","network attached storage"],"prmu":["P","U","U","U","U","U","M","M","M"]} {"id":"9","title":"Achieving competitive capabilities in e-services","abstract":"What implications does the Internet have for service operations strategy? How can business performance of e-service companies be improved in today's knowledge-based economy? These research questions are the subject of the paper. We propose a model that links the e-service company's knowledge-based competencies with their competitive capabilities. Drawing from the current literature, our analysis suggests that services that strategically build a portfolio of knowledge-based competencies, namely human capital, structural capital, and absorptive capacity have more operations-based options, than their counterparts who are less apt to invest. We assume that the combinative capabilities of service quality, delivery, flexibility, and cost are determined by the investment in intellectual capital. 
Arguably, with the advent of the Internet, different operating models (e.g., bricks-and-mortar, clicks-and-mortar, or pure dot-com) have different strategic imperatives in terms of knowledge-based competencies. Thus, the new e-operations paradigm can be viewed as a configuration of knowledge-based competencies and capabilities","tok_text":"achiev competit capabl in e-servic \n what implic doe the internet have for servic oper strategi ? how can busi perform of e-servic compani be improv in today 's knowledge-bas economi ? these research question are the subject of the paper . we propos a model that link the e-servic compani 's knowledge-bas compet with their competit capabl . draw from the current literatur , our analysi suggest that servic that strateg build a portfolio of knowledge-bas compet , name human capit , structur capit , and absorpt capac have more operations-bas option , than their counterpart who are less apt to invest . we assum that the combin capabl of servic qualiti , deliveri , flexibl , and cost are determin by the invest in intellectu capit . arguabl , with the advent of the internet , differ oper model ( e.g. , bricks-and-mortar , clicks-and-mortar , or pure dot-com ) have differ strateg imper in term of knowledge-bas compet . 
thu , the new e-oper paradigm can be view as a configur of knowledge-bas compet and capabl","ordered_present_kp":[7,26,57,75,106,161,292,470,484,505,529,623,640,657,668,682,596,717,807,827,855,877],"keyphrases":["competitive capabilities","e-services","Internet","service operations strategy","business performance","knowledge-based economy","knowledge-based competencies","human capital","structural capital","absorptive capacity","operations-based options","investment","combinative capabilities","service quality","delivery","flexibility","cost","intellectual capital","bricks-and-mortar","clicks-and-mortar","dot-com","strategic imperatives"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"569","title":"Application of an internally consistent material model to determine the effect of tool edge geometry in orthogonal machining","abstract":"It is well known that the edge geometry of a cutting tool affects the forces measured in metal cutting. Two experimental methods have been suggested in the past to extract the ploughing (non-cutting) component from the total measured force: (1) the extrapolation approach, and (2) the dwell force technique. This study reports the behavior of zinc during orthogonal machining using tools of controlled edge radius. Applications of both the extrapolation and dwell approaches show that neither produces an analysis that yields a material response consistent with the known behavior of zinc. Further analysis shows that the edge geometry modifies the shear zone of the material and thereby modifies the forces. When analyzed this way, the measured force data yield the expected material response without requiring recourse to an additional ploughing component","tok_text":"applic of an intern consist materi model to determin the effect of tool edg geometri in orthogon machin \n it is well known that the edg geometri of a cut tool affect the forc measur in metal cut . 
two experiment method have been suggest in the past to extract the plough ( non-cut ) compon from the total measur forc : ( 1 ) the extrapol approach , and ( 2 ) the dwell forc techniqu . thi studi report the behavior of zinc dure orthogon machin use tool of control edg radiu . applic of both the extrapol and dwell approach show that neither produc an analysi that yield a materi respons consist with the known behavior of zinc . further analysi show that the edg geometri modifi the shear zone of the materi and therebi modifi the forc . when analyz thi way , the measur forc data yield the expect materi respons without requir recours to an addit plough compon","ordered_present_kp":[67,150,185,848,329,363,418,72,88],"keyphrases":["tool edge geometry","edge geometry","orthogonal machining","cutting tool","metal cutting","extrapolation","dwell force","zinc","ploughing component"],"prmu":["P","P","P","P","P","P","P","P","P"]} {"id":"1179","title":"Evolution complexity of the elementary cellular automaton rule 18","abstract":"Cellular automata are classes of mathematical systems characterized by discreteness (in space, time, and state values), determinism, and local interaction. Using symbolic dynamical theory, we coarse-grain the temporal evolution orbits of cellular automata. By means of formal languages and automata theory, we study the evolution complexity of the elementary cellular automaton with local rule number 18 and prove that its width 1-evolution language is regular, but for every n >or= 2 its width n-evolution language is not context free but context sensitive","tok_text":"evolut complex of the elementari cellular automaton rule 18 \n cellular automata are class of mathemat system character by discret ( in space , time , and state valu ) , determin , and local interact . use symbol dynam theori , we coarse-grain the tempor evolut orbit of cellular automata . 
by mean of formal languag and automata theori , we studi the evolut complex of the elementari cellular automaton with local rule number 18 and prove that it width 1-evolut languag is regular , but for everi n > or= 2 it width n-evolut languag is not context free but context sensit","ordered_present_kp":[62,205,301,7,0,22],"keyphrases":["evolution complexity","complexity","elementary cellular automaton","cellular automata","symbolic dynamical theory","formal languages"],"prmu":["P","P","P","P","P","P"]} {"id":"1285","title":"On fractal dimension in information systems. Toward exact sets in infinite information systems","abstract":"The notions of an exact as well as a rough set are well-grounded as basic notions in rough set theory. They are however defined in the setting of a finite information system i.e. an information system having finite numbers of objects as well as attributes. In theoretical studies e.g. of topological properties of rough sets, one has to trespass this limitation and to consider information systems with potentially unbound number of attributes. In such setting, the notions of rough and exact sets may be defined in terms of topological operators of interior and closure with respect to an appropriate topology following the ideas from the finite case, where it is noticed that in the finite case rough-set-theoretic operators of lower and upper approximation are identical with the interior, respectively, closure operators in topology induced by equivalence classes of the indiscernibility relation. Extensions of finite information systems are also desirable from application point of view in the area of knowledge discovery and data mining, when demands of e.g. mass collaboration and\/or huge experimental data call for need of working with large data tables so the sound theoretical generalization of these cases is an information system with the number of attributes not bound in advance by a fixed integer i.e. 
an information system with countably but infinitely many attributes. In large information systems, a need arises for qualitative measures of complexity of concepts involved free of parameters, cf. e.g. applications for the Vapnik-Czervonenkis dimension. We study here in the theoretical setting of infinite information system a proposal to apply fractal dimensions suitably modified as measures of concept complexity","tok_text":"on fractal dimens in inform system . toward exact set in infinit inform system \n the notion of an exact as well as a rough set are well-ground as basic notion in rough set theori . they are howev defin in the set of a finit inform system i.e. an inform system have finit number of object as well as attribut . in theoret studi e.g. of topolog properti of rough set , one ha to trespass thi limit and to consid inform system with potenti unbound number of attribut . in such set , the notion of rough and exact set may be defin in term of topolog oper of interior and closur with respect to an appropri topolog follow the idea from the finit case , where it is notic that in the finit case rough-set-theoret oper of lower and upper approxim are ident with the interior , respect , closur oper in topolog induc by equival class of the indiscern relat . extens of finit inform system are also desir from applic point of view in the area of knowledg discoveri and data mine , when demand of e.g. mass collabor and\/or huge experiment data call for need of work with larg data tabl so the sound theoret gener of these case is an inform system with the number of attribut not bound in advanc by a fix integ i.e. an inform system with countabl but infinit mani attribut , in larg inform system , a need aris for qualit measur of complex of concept involv free of paramet , cf . e.g. applic for the vapnik-czervonenki dimens .
we studi here in the theoret set of infinit inform system a propos to appli fractal dimens suitabl modifi as measur of concept complex","ordered_present_kp":[3,21,44,57,117,335,780,812,937,960,1304,1321],"keyphrases":["fractal dimension","information systems","exact sets","infinite information systems","rough set","topological properties","closure operators","equivalence classes","knowledge discovery","data mining","qualitative measures","complexity"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"695","title":"Design of high-performance wavelets for image coding using a perceptual time domain criterion","abstract":"This paper presents a new biorthogonal linear-phase wavelet design for image compression. Instead of calculating the prototype filters as spectral factors of a half-band filter, the design is based on the direct optimization of the low pass analysis filter using an objective function directly related to a perceptual criterion for image compression. This function is defined as the product of the theoretical coding gain and an index called the peak-to-peak ratio, which was shown to have high correlation with perceptual quality. A distinctive feature of the proposed technique is a procedure by which, given a \"good\" starting filter, \"good\" filters of longer lengths are generated. The results are excellent, showing a clear improvement in perceptual image quality. Also, we devised a criterion for constraining the coefficients of the filters in order to design wavelets with minimum ringing","tok_text":"design of high-perform wavelet for imag code use a perceptu time domain criterion \n thi paper present a new biorthogon linear-phas wavelet design for imag compress . instead of calcul the prototyp filter as spectral factor of a half-band filter , the design is base on the direct optim of the low pass analysi filter use an object function directli relat to a perceptu criterion for imag compress . 
thi function is defin as the product of the theoret code gain and an index call the peak-to-peak ratio , which wa shown to have high correl with perceptu qualiti . a distinct featur of the propos techniqu is a procedur by which , given a \" good \" start filter , \" good \" filter of longer length are gener . the result are excel , show a clear improv in perceptu imag qualiti . also , we devis a criterion for constrain the coeffici of the filter in order to design wavelet with minimum ring","ordered_present_kp":[10,35,51,108,150,188,228,302,324,451,483,752],"keyphrases":["high-performance wavelets","image coding","perceptual time domain criterion","biorthogonal linear-phase wavelet design","image compression","prototype filters","half-band filter","analysis filter","objective function","coding gain","peak-to-peak ratio","perceptual image quality","low pass filter","filter banks"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R","M"]} {"id":"1278","title":"Verification of timed automata based on similarity","abstract":"The paper presents a modification of the standard partitioning technique to generate abstract state spaces preserving similarity for Timed Automata. Since this relation is weaker than bisimilarity, most of the obtained models (state spaces) are smaller than bisimilar ones, but still preserve the universal fragments of branching time temporal logics. The theoretical results are exemplified for strong, delay, and observational simulation relations","tok_text":"verif of time automata base on similar \n the paper present a modif of the standard partit techniqu to gener abstract state space preserv similar for time automata . sinc thi relat is weaker than bisimilar , most of the obtain model ( state space ) are smaller than bisimilar one , but still preserv the univers fragment of branch time tempor logic . 
the theoret result are exemplifi for strong , delay , and observ simul relat","ordered_present_kp":[83,108,195,303,323,408],"keyphrases":["partitioning technique","abstract state spaces","bisimilarity","universal fragments","branching time temporal logics","observational simulation relations","timed automata verification"],"prmu":["P","P","P","P","P","P","R"]} {"id":"1184","title":"Measuring return: revealing ROI","abstract":"The most critical part of the return-on-investment odyssey is to develop metrics that matter to the business and to measure systems in terms of their ability to help achieve those business goals. Everything must flow from those key metrics. And don't forget to revisit those every now and then, too. Since all systems wind down over time, it's important to keep tabs on how well your automation investment is meeting the metrics established by your company. Manufacturers are clamoring for a tool to help quantify returns and analyze the results","tok_text":"measur return : reveal roi \n the most critic part of the return-on-invest odyssey is to develop metric that matter to the busi and to measur system in term of their abil to help achiev those busi goal . everyth must flow from those key metric . and do n't forget to revisit those everi now and then , too . sinc all system wind down over time , it 's import to keep tab on how well your autom invest is meet the metric establish by your compani . 
manufactur are clamor for a tool to help quantifi return and analyz the result","ordered_present_kp":[57,23,232,387],"keyphrases":["ROI","return-on-investment","key metrics","automation investment","technology purchases"],"prmu":["P","P","P","P","U"]} {"id":"100","title":"Separate accounts go mainstream [investment]","abstract":"New entrants are shaking up the separate-account industry by supplying Web-based platforms that give advisers the tools to pick independent money managers","tok_text":"separ account go mainstream [ invest ] \n new entrant are shake up the separate-account industri by suppli web-bas platform that give advis the tool to pick independ money manag","ordered_present_kp":[70,106,156,30],"keyphrases":["investment","separate-account industry","Web-based platforms","independent money managers","financial advisors"],"prmu":["P","P","P","P","U"]} {"id":"943","title":"Implementation of universal quantum gates based on nonadiabatic geometric phases","abstract":"We propose an experimentally feasible scheme to achieve quantum computation based on nonadiabatic geometric phase shifts, in which a cyclic geometric phase is used to realize a set of universal quantum gates. Physical implementation of this set of gates is designed for Josephson junctions and for NMR systems. Interestingly, we find that the nonadiabatic phase shift may be independent of the operation time under appropriate controllable conditions. A remarkable feature of the present nonadiabatic geometric gates is that there is no intrinsic limitation on the operation time","tok_text":"implement of univers quantum gate base on nonadiabat geometr phase \n we propos an experiment feasibl scheme to achiev quantum comput base on nonadiabat geometr phase shift , in which a cyclic geometr phase is use to realiz a set of univers quantum gate . physic implement of thi set of gate is design for josephson junction and for nmr system . 
interestingli , we find that the nonadiabat phase shift may be independ of the oper time under appropri control condit . a remark featur of the present nonadiabat geometr gate is that there is no intrins limit on the oper time","ordered_present_kp":[118,141,185,13,305,332,378,424,497],"keyphrases":["universal quantum gates","quantum computation","nonadiabatic geometric phase shifts","cyclic geometric phase","Josephson junctions","NMR systems","nonadiabatic phase shift","operation time","nonadiabatic geometric gates"],"prmu":["P","P","P","P","P","P","P","P","P"]} {"id":"145","title":"If the RedBoot fits [open-source ROM monitor]","abstract":"Many embedded developers today use a ROM- or flash-resident software program that provides functionality such as loading and running application software, scripting, read\/write access to processor registers, and memory dumps. A ROM monitor, as it is often called, can be a useful and far less expensive debugging tool than an in-circuit emulator. This article describes the RedBoot ROM monitor. It takes a look at the features offered by the RedBoot ROM monitor and sees how it can be configured. It also walks through the steps of rebuilding and installing a new RedBoot image on a target platform. Finally, it looks at future enhancements that are coming in new releases and how to get support and additional information when using RedBoot. Although RedBoot uses software modules from the eCos real-time operating system (RTOS) and is often used in systems running embedded Linux, it is completely independent of both operating systems. RedBoot can be used with any operating system or RTOS, or even without one","tok_text":"if the redboot fit [ open-sourc rom monitor ] \n mani embed develop today use a rom- or flash-resid softwar program that provid function such as load and run applic softwar , script , read \/ write access to processor regist , and memori dump . 
a rom monitor , as it is often call , can be a use and far less expens debug tool than an in-circuit emul . thi articl describ the redboot rom monitor . it take a look at the featur offer by the redboot rom monitor and see how it can be configur . it also walk through the step of rebuild and instal a new redboot imag on a target platform . final , it look at futur enhanc that are come in new releas and how to get support and addit inform when use redboot . although redboot use softwar modul from the eco real-tim oper system ( rto ) and is often use in system run embed linux , it is complet independ of both oper system . redboot can be use with ani oper system or rto , or even without one","ordered_present_kp":[7,21,87,174,229,314,748,752,812],"keyphrases":["RedBoot","open-source ROM monitor","flash-resident software program","scripting","memory dumps","debugging tool","eCos","real-time operating system","embedded Linux","embedded systems","processor register access","bootstrapping"],"prmu":["P","P","P","P","P","P","P","P","P","R","R","U"]} {"id":"906","title":"High-performance servo systems based on multirate sampling control","abstract":"In this paper, novel multirate two-degree-of-freedom controllers are proposed for digital control systems, in which the sampling period of plant output is restricted to be relatively longer than the control period of plant input. The proposed feedforward controller assures perfect tracking at M inter-sampling points. On the other hand, the proposed feedback controller assures perfect disturbance rejection at M inter-sample points in the steady state. 
Illustrative examples of position control for hard disk drive are presented, and advantages of these approaches are demonstrated","tok_text":"high-perform servo system base on multir sampl control \n in thi paper , novel multir two-degree-of-freedom control are propos for digit control system , in which the sampl period of plant output is restrict to be rel longer than the control period of plant input . the propos feedforward control assur perfect track at m inter-sampl point . on the other hand , the propos feedback control assur perfect disturb reject at m inter-sampl point in the steadi state . illustr exampl of posit control for hard disk drive are present , and advantag of these approach are demonstr","ordered_present_kp":[34,13,130,310,372,276,481,499,403],"keyphrases":["servo system","multirate sampling control","digital control systems","feedforward","tracking","feedback","disturbance rejection","position control","hard disk drive"],"prmu":["P","P","P","P","P","P","P","P","P"]} {"id":"594","title":"Improved analysis for the nonlinear performance of CMOS current mirrors with device mismatch","abstract":"The nonlinear performance of the simple and complementary MOSFET current mirrors are analyzed. Closed-form expressions are obtained for the harmonic and intermodulation components resulting from a multisinusoidal input current. These expressions can be used for predicting the limiting values of the input current under prespecified conditions of threshold-voltage mismatches and\/or transconductance mismatches. The case of a single input sinusoid is discussed in detail and the results are compared with SPICE simulations","tok_text":"improv analysi for the nonlinear perform of cmo current mirror with devic mismatch \n the nonlinear perform of the simpl and complementari mosfet current mirror are analyz . closed-form express are obtain for the harmon and intermodul compon result from a multisinusoid input current . 
these express can be use for predict the limit valu of the input current under prespecifi condit of threshold-voltag mismatch and\/or transconduct mismatch . the case of a singl input sinusoid is discuss in detail and the result are compar with spice simul","ordered_present_kp":[23,44,68,124,173,223,255,269,385,529,418],"keyphrases":["nonlinear performance","CMOS current mirrors","device mismatch","complementary MOSFET current mirrors","closed-form expressions","intermodulation components","multisinusoidal input current","input current","threshold-voltage mismatch","transconductance mismatch","SPICE simulations","harmonic components","simulation results"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R","R"]} {"id":"610","title":"AGC for autonomous power system using combined intelligent techniques","abstract":"In the present work two intelligent load frequency controllers have been developed to regulate the power output and system frequency by controlling the speed of the generator with the help of fuel rack position control. The first controller is obtained using fuzzy logic (FL) only, whereas the second one by using a combination of FL, genetic algorithms and neural networks. The aim of the proposed controller(s) is to restore in a very smooth way the frequency to its nominal value in the shortest time possible whenever there is any change in the load demand etc. The action of these controller(s) provides a satisfactory balance between frequency overshoot and transient oscillations with zero steady-state error. The design and performance evaluation of the proposed controller(s) structure are illustrated with the help of case studies applied (without loss of generality) to a typical single-area power system. 
It is found that the proposed controllers exhibit satisfactory overall dynamic performance and overcome the possible drawbacks associated with other competing techniques","tok_text":"agc for autonom power system use combin intellig techniqu \n in the present work two intellig load frequenc control have been develop to regul the power output and system frequenc by control the speed of the gener with the help of fuel rack posit control . the first control is obtain use fuzzi logic ( fl ) onli , wherea the second one by use a combin of fl , genet algorithm and neural network . the aim of the propos controller( ) is to restor in a veri smooth way the frequenc to it nomin valu in the shortest time possibl whenev there is ani chang in the load demand etc . the action of these controller( ) provid a satisfactori balanc between frequenc overshoot and transient oscil with zero steady-st error . the design and perform evalu of the propos controller( ) structur are illustr with the help of case studi appli ( without loss of gener ) to a typic single-area power system . 
it is found that the propos control exhibit satisfactori overal dynam perform and overcom the possibl drawback associ with other compet techniqu","ordered_present_kp":[8,33,230,288,360,380,559,648,671,692,730,864,948,1020,98],"keyphrases":["autonomous power system","combined intelligent techniques","frequency control","fuel rack position control","fuzzy logic","genetic algorithms","neural networks","load demand","frequency overshoot","transient oscillations","zero steady-state error","performance evaluation","single-area power system","overall dynamic performance","competing techniques","power output regulation","generator speed control","controller design"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R"]} {"id":"655","title":"Mapping CCF to MARC21: an experimental approach","abstract":"The purpose of this article is to raise and address a number of issues pertaining to the conversion of Common Communication Format (CCF) into MARC21. In this era of global resource sharing, exchange of bibliographic records from one system to another is imperative in today's library communities. Instead of using a single standard to create machine-readable catalogue records, more than 20 standards have emerged and are being used by different institutions. Because of these variations in standards, sharing of resources and transfer of data from one system to another among the institutions locally and globally has become a significant problem. Addressing this problem requires keeping in mind that countries such as India and others in southeast Asia are using the CCF as a standard for creating bibliographic cataloguing records. This paper describes a way to map the bibliographic catalogue records from CCF to MARC21, although 100% mapping is not possible. 
In addition, the paper describes an experimental approach that enumerates problems that may occur during the mapping of records\/exchanging of records and how these problems can be overcome","tok_text":"map ccf to marc21 : an experiment approach \n the purpos of thi articl is to rais and address a number of issu pertain to the convers of common commun format ( ccf ) into marc21 . in thi era of global resourc share , exchang of bibliograph record from one system to anoth is imper in today 's librari commun . instead of use a singl standard to creat machine-read catalogu record , more than 20 standard have emerg and are be use by differ institut . becaus of these variat in standard , share of resourc and transfer of data from one system to anoth among the institut local and global ha becom a signific problem . address thi problem requir keep in mind that countri such as india and other in southeast asia are use the ccf as a standard for creat bibliograph catalogu record . thi paper describ a way to map the bibliograph catalogu record from ccf to marc21 , although 100 % map is not possibl . in addit , the paper describ an experiment approach that enumer problem that may occur dure the map of record \/ exchang of record and how these problem can be overcom","ordered_present_kp":[11,193,292,350,332,677,696],"keyphrases":["MARC21","global resource sharing","library communities","standards","machine-readable catalogue records","India","southeast Asia","Common Communication Format conversion","bibliographic records exchange","data transfer","CCF to MARC21 mapping"],"prmu":["P","P","P","P","P","P","P","R","R","R","R"]} {"id":"1200","title":"From continuous recovery to discrete filtering in numerical approximations of conservation laws","abstract":"Modern numerical approximations of conservation laws rely on numerical dissipation as a means of stabilization. The older, alternative approach is the use of central differencing with a dose of artificial dissipation. 
In this paper we review the successful class of weighted essentially non-oscillatory finite volume schemes which comprise sophisticated methods of the first kind. New developments in image processing have made new devices possible which can serve as highly nonlinear artificial dissipation terms. We view artificial dissipation as discrete filter operation and introduce several new algorithms inspired by image processing","tok_text":"from continu recoveri to discret filter in numer approxim of conserv law \n modern numer approxim of conserv law reli on numer dissip as a mean of stabil . the older , altern approach is the use of central differenc with a dose of artifici dissip . in thi paper we review the success class of weight essenti non-oscillatori finit volum scheme which compris sophist method of the first kind . new develop in imag process have made new devic possibl which can serv as highli nonlinear artifici dissip term . we view artifici dissip as discret filter oper and introduc sever new algorithm inspir by imag process","ordered_present_kp":[5,25,43,61,120,197,230,323,406,465,532],"keyphrases":["continuous recovery","discrete filtering","numerical approximations","conservation laws","numerical dissipation","central differencing","artificial dissipation","finite volume schemes","image processing","highly nonlinear artificial dissipation terms","discrete filter operation"],"prmu":["P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1245","title":"A brief guide to competitive intelligence: how to gather and use information on competitors","abstract":"The author outlines the processes involved in competitive intelligence, and discusses what it is, how to do it and gives examples of what happens when companies fail to monitor their competitive environment effectively. The author presents a case study, showing how the company that produced the pre-cursor to the Barbie doll failed to look at their business environment and how this led to the firm's failure. 
The author discusses what competitive intelligence is, and what it is not, and why it is important for businesses, and presents three models used to describe the competitive intelligence process, going through the various steps involved in defining intelligence requirements and collecting, analyzing, communicating and utilizing competitive intelligence","tok_text":"a brief guid to competit intellig : how to gather and use inform on competitor \n the author outlin the process involv in competit intellig , and discuss what it is , how to do it and give exampl of what happen when compani fail to monitor their competit environ effect . the author present a case studi , show how the compani that produc the pre-cursor to the barbi doll fail to look at their busi environ and how thi led to the firm 's failur . the author discuss what competit intellig is , and what it is not , and whi it is import for busi , and present three model use to describ the competit intellig process , go through the variou step involv in defin intellig requir and collect , analyz , commun and util competit intellig","ordered_present_kp":[16,360,393],"keyphrases":["competitive intelligence","Barbie doll","business environment","competitor information","intelligence collection","intelligence analysis","intelligence communication","intelligence utilization"],"prmu":["P","P","P","R","R","M","R","R"]} {"id":"983","title":"Limitations of delayed state feedback: a numerical study","abstract":"Stabilization of a class of linear time-delay systems can be achieved by a numerical procedure, called the continuous pole placement method [Michiels et al., 2000]. This method can be seen as an extension of the classical pole placement algorithm for ordinary differential equations to a class of delay differential equations. In [Michiels et al., 2000] it was applied to the stabilization of a linear time-invariant system with an input delay using static state feedback. 
In this paper we study the limitations of such delayed state feedback laws. More precisely we completely characterize the class of stabilizable plants in the 2D-case. For that purpose we make use of numerical continuation techniques. The use of delayed state feedback in various control applications and the effect of its limitations are briefly discussed","tok_text":"limit of delay state feedback : a numer studi \n stabil of a class of linear time-delay system can be achiev by a numer procedur , call the continu pole placement method [ michiel et al . , 2000 ] . thi method can be seen as an extens of the classic pole placement algorithm for ordinari differenti equat to a class of delay differenti equat . in [ michiel et al . , 2000 ] it wa appli to the stabil of a linear time-invari system with an input delay use static state feedback . in thi paper we studi the limit of such delay state feedback law . more precis we complet character the class of stabiliz plant in the 2d-case . for that purpos we make use of numer continu techniqu . the use of delay state feedback in variou control applic and the effect of it limit are briefli discuss","ordered_present_kp":[69,139,318,454,9,654],"keyphrases":["delayed state feedback","linear time-delay systems","continuous pole placement method","delay differential equations","static state feedback","numerical continuation"],"prmu":["P","P","P","P","P","P"]} {"id":"554","title":"A scalable and lightweight QoS monitoring technique combining passive and active approaches: on the mathematical formulation of CoMPACT monitor","abstract":"To make a scalable and lightweight QoS monitoring system, we (2002) have proposed a new QoS monitoring technique, called the change-of-measure based passive\/active monitoring (CoMPACT Monitor), which is based on the change-of-measure framework and is an active measurement transformed by using passively monitored data. 
This technique enables us to measure detailed QoS information for individual users, applications and organizations, in a scalable and lightweight manner. In this paper, we present the mathematical foundation of CoMPACT Monitor. In addition, we show its characteristics through simulations in terms of typical implementation issues for inferring the delay distributions. The results show that CoMPACT Monitor gives accurate QoS estimations with only a small amount of extra traffic for active measurement","tok_text":"a scalabl and lightweight qo monitor techniqu combin passiv and activ approach : on the mathemat formul of compact monitor \n to make a scalabl and lightweight qo monitor system , we ( 2002 ) have propos a new qo monitor techniqu , call the change-of-measur base passiv \/ activ monitor ( compact monitor ) , which is base on the change-of-measur framework and is an activ measur transform by use passiv monitor data . thi techniqu enabl us to measur detail qo inform for individu user , applic and organ , in a scalabl and lightweight manner . in thi paper , we present the mathemat foundat of compact monitor . in addit , we show it characterist through simul in term of typic implement issu for infer the delay distribut . the result show that compact monitor give accur qo estim with onli a small amount of extra traffic for activ measur","ordered_present_kp":[240,395,271,107,706,26],"keyphrases":["QoS monitoring","CoMPACT Monitor","change-of-measure","active monitoring","passive monitoring","delay distributions","quality of service","Internet","network performance"],"prmu":["P","P","P","P","P","P","M","U","U"]} {"id":"1101","title":"Evaluation of existing and new feature recognition algorithms. 1. Theory and implementation","abstract":"This is the first of two papers evaluating the performance of general-purpose feature detection techniques for geometric models. 
In this paper, six different methods are described to identify sets of faces that bound depression and protrusion faces. Each algorithm has been implemented and tested on eight components from the National Design Repository. The algorithms studied include previously published general-purpose feature detection algorithms such as the single-face inner-loop and concavity techniques. Others are improvements to existing algorithms such as extensions of the two-dimensional convex hull method to handle curved faces as well as protrusions. Lastly, new algorithms based on the three-dimensional convex hull, minimum concave, visible and multiple-face inner-loop face sets are described","tok_text":"evalu of exist and new featur recognit algorithm . 1 . theori and implement \n thi is the first of two paper evalu the perform of general-purpos featur detect techniqu for geometr model . in thi paper , six differ method are describ to identifi set of face that bound depress and protrus face . each algorithm ha been implement and test on eight compon from the nation design repositori . the algorithm studi includ previous publish general-purpos featur detect algorithm such as the single-fac inner-loop and concav techniqu . other are improv to exist algorithm such as extens of the two-dimension convex hull method to handl curv face as well as protrus . 
lastli , new algorithm base on the three-dimension convex hull , minimum concav , visibl and multiple-fac inner-loop face set are describ","ordered_present_kp":[23,171,129,244,279,361,509,585,627,693,723,751],"keyphrases":["feature recognition algorithms","general-purpose feature detection techniques","geometric models","sets of faces","protrusion faces","National Design Repository","concavity technique","two-dimensional convex hull method","curved faces","three-dimensional convex hull","minimum concave","multiple-face inner-loop face sets","depression faces","single-face inner-loop technique","CAD\/CAM software","geometric reasoning algorithms","visible inner-loop face sets"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R","R","U","M","R"]} {"id":"1144","title":"Simultaneous iterative reconstruction of emission and attenuation images in positron emission tomography from emission data only","abstract":"For quantitative image reconstruction in positron emission tomography attenuation correction is mandatory. In case that no data are available for the calculation of the attenuation correction factors one can try to determine them from the emission data alone. However, it is not clear if the information content is sufficient to yield an adequate attenuation correction together with a satisfactory activity distribution. Therefore, we determined the log likelihood distribution for a thorax phantom depending on the choice of attenuation and activity pixel values to measure the crosstalk between both. In addition an iterative image reconstruction (one-dimensional Newton-type algorithm with a maximum likelihood estimator), which simultaneously reconstructs the images of the activity distribution and the attenuation coefficients is used to demonstrate the problems and possibilities of such a reconstruction. 
As result we show that for a change of the log likelihood in the range of statistical noise, the associated change in the activity value of a structure is between 6% and 263%. In addition, we show that it is not possible to choose the best maximum on the basis of the log likelihood when a regularization is used, because the coupling between different structures mediated by the (smoothing) regularization prevents an adequate solution due to crosstalk. We conclude that taking into account the attenuation information in the emission data improves the performance of image reconstruction with respect to the bias of the activities, however, the reconstruction still is not quantitative","tok_text":"simultan iter reconstruct of emiss and attenu imag in positron emiss tomographi from emiss data onli \n for quantit imag reconstruct in positron emiss tomographi attenu correct is mandatori . in case that no data are avail for the calcul of the attenu correct factor one can tri to determin them from the emiss data alon . howev , it is not clear if the inform content is suffici to yield an adequ attenu correct togeth with a satisfactori activ distribut . therefor , we determin the log likelihood distribut for a thorax phantom depend on the choic of attenu and activ pixel valu to measur the crosstalk between both . in addit an iter imag reconstruct ( one-dimension newton-typ algorithm with a maximum likelihood estim ) , which simultan reconstruct the imag of the activ distribut and the attenu coeffici is use to demonstr the problem and possibl of such a reconstruct . as result we show that for a chang of the log likelihood in the rang of statist nois , the associ chang in the activ valu of a structur is between 6 % and 263 % . in addit , we show that it is not possibl to choos the best maximum on the basi of the log likelihood when a regular is use , becaus the coupl between differ structur mediat by the ( smooth ) regular prevent an adequ solut due to crosstalk . 
we conclud that take into account the attenu inform in the emiss data improv the perform of imag reconstruct with respect to the bia of the activ , howev , the reconstruct still is not quantit","ordered_present_kp":[115,135,244,484,515,564,595,632,656,698,439,794,949,1223,1320],"keyphrases":["image reconstruction","positron emission tomography attenuation correction","attenuation correction factors","activity distribution","log likelihood distribution","thorax phantom","activity pixel values","crosstalk","iterative image reconstruction","one-dimensional Newton-type algorithm","maximum likelihood estimator","attenuation coefficients","statistical noise","smoothing","attenuation information"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"92","title":"Wireless-retail financial services: adoption can't justify the cost","abstract":"Slow adoption by retail investors, costly services and bankrupt vendors has prompted banks and brokerage firms to turn off their wireless applications","tok_text":"wireless-retail financi servic : adopt ca n't justifi the cost \n slow adopt by retail investor , costli servic and bankrupt vendor ha prompt bank and brokerag firm to turn off their wireless applic","ordered_present_kp":[115,150,182],"keyphrases":["banks","brokerage firms","wireless applications"],"prmu":["P","P","P"]} {"id":"1059","title":"Mustering motivation to enact decisions: how decision process characteristics influence goal realization","abstract":"Decision scientists tend to focus mainly on decision antecedents, studying how people make decisions. Action psychologists, in contrast, study post-decision issues, investigating how decisions, once formed, are maintained, protected, and enacted. Through the research presented here, we seek to bridge these two disciplines, proposing that the process by which decisions are reached motivates subsequent pursuit and benefits eventual realization. 
We identify three characteristics of the decision process (DP) as having motivation-mustering potential: DP effort investment, DP importance, and DP confidence. Through two field studies tracking participants' decision processes, pursuit and realization, we find that after controlling for the influence of the motivational mechanisms of goal intention and implementation intention, the three decision process characteristics significantly influence the successful enactment of the chosen decision directly. The theoretical and practical implications of these findings are considered and future research opportunities are identified","tok_text":"muster motiv to enact decis : how decis process characterist influenc goal realiz \n decis scientist tend to focu mainli on decis anteced , studi how peopl make decis . action psychologist , in contrast , studi post-decis issu , investig how decis , onc form , are maintain , protect , and enact . through the research present here , we seek to bridg these two disciplin , propos that the process by which decis are reach motiv subsequ pursuit and benefit eventu realiz . we identifi three characterist of the decis process ( dp ) as have motivation-must potenti : dp effort invest , dp import , and dp confid . through two field studi track particip ' decis process , pursuit and realiz , we find that after control for the influenc of the motiv mechan of goal intent and implement intent , the three decis process characterist significantli influenc the success enact of the chosen decis directli . 
the theoret and practic implic of these find are consid and futur research opportun are identifi","ordered_present_kp":[7,70,34,168,210,538,756,966,84],"keyphrases":["motivation","decision process characteristics","goal realization","decision scientists","action psychologists","post-decision issues","motivation-mustering potential","goal intention","research opportunities","decision enactment","decision process investment","decision process importance","decision process confidence"],"prmu":["P","P","P","P","P","P","P","P","P","R","R","R","R"]} {"id":"1358","title":"Analysis of the surface roughness and dimensional accuracy capability of fused deposition modelling processes","abstract":"Building up materials in layers poses significant challenges from the viewpoint of material science, heat transfer and applied mechanics. However, numerous aspects of the use of these technologies have yet to be studied. One of these aspects is the characterization of the surface roughness and dimensional precision obtainable in layered manufacturing processes. In this paper, a study of roughness parameters obtained through the use of these manufacturing processes was made. Prototype parts were manufactured using FDM techniques and an experimental analysis of the resulting roughness average (R\/sub a\/) and rms roughness (R\/sub q\/) obtained through the use of these manufacturing processes was carried out. Dimensional parameters were also studied in order to determine the capability of the Fused Deposition Modelling process for manufacturing parts","tok_text":"analysi of the surfac rough and dimension accuraci capabl of fuse deposit model process \n build up materi in layer pose signific challeng from the viewpoint of materi scienc , heat transfer and appli mechan . howev , numer aspect of the use of these technolog have yet to be studi . one of these aspect is the character of the surfac rough and dimension precis obtain in layer manufactur process . 
in thi paper , a studi of rough paramet obtain through the use of these manufactur process wa made . prototyp part were manufactur use fdm techniqu and an experiment analysi of the result rough averag ( r \/ sub a\/ ) and rm rough ( r \/ sub q\/ ) obtain through the use of these manufactur process wa carri out . dimension paramet were also studi in order to determin the capabl of the fuse deposit model process for manufactur part","ordered_present_kp":[61,15,32,344,371,499,586,618],"keyphrases":["surface roughness","dimensional accuracy capability","fused deposition modelling processes","dimensional precision","layered manufacturing processes","prototype parts","roughness average","rms roughness","rapid prototyping","three-dimensional solid objects","CAD model","CNC-controlled robot","extrusion head"],"prmu":["P","P","P","P","P","P","P","P","M","U","M","U","U"]} {"id":"748","title":"Simulation study of the cardiovascular functional status in hypertensive situation","abstract":"An extended cardiovascular model was established based on our previous work to study the consequences of physiological or pathological changes to the homeostatic functions of the cardiovascular system. To study hemodynamic changes in hypertensive situations, the impacts of cardiovascular parameter variations (peripheral vascular resistance, arterial vessel wall stiffness and baroreflex gain) upon hemodynamics and the short-term regulation of the cardiovascular system were investigated. For the purpose of analyzing baroregulation function, the short-term regulation of arterial pressure in response to moderate dynamic exercise for normotensive and hypertensive cases was studied through computer simulation and clinical experiments. The simulation results agree well with clinical data. 
The results of this work suggest that the model presented in this paper provides a useful tool to investigate the functional status of the cardiovascular system in normal or pathological conditions","tok_text":"simul studi of the cardiovascular function statu in hypertens situat \n an extend cardiovascular model wa establish base on our previou work to studi the consequ of physiolog or patholog chang to the homeostat function of the cardiovascular system . to studi hemodynam chang in hypertens situat , the impact of cardiovascular paramet variat ( peripher vascular resist , arteri vessel wall stiff and baroreflex gain ) upon hemodynam and the short-term regul of the cardiovascular system were investig . for the purpos of analyz baroregul function , the short-term regul of arteri pressur in respons to moder dynam exercis for normotens and hypertens case wa studi through comput simul and clinic experi . the simul result agre well with clinic data . the result of thi work suggest that the model present in thi paper provid a use tool to investig the function statu of the cardiovascular system in normal or patholog condit","ordered_present_kp":[74,177,199,19,52,310,342,369,398,258,439,571,600,670,687],"keyphrases":["cardiovascular functional status","hypertensive situation","extended cardiovascular model","pathological changes","homeostatic functions","hemodynamics","cardiovascular parameter variations","peripheral vascular resistance","arterial vessel wall stiffness","baroreflex gain","short-term regulation","arterial pressure","moderate dynamic exercise","computer simulation","clinical experiments","physiological changes","normotensive cases"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R"]} {"id":"826","title":"A round of cash, a pound of flesh [telecom]","abstract":"Despite the upheaval across telecom, venture capital firms are still investing in start-ups. 
But while a promising idea and a catchy name were enough to guarantee millions in funding at the peak of the dotcom frenzy, now start-ups must prove their long-term viability, and be willing to concede control of their business to their VC suitors","tok_text":"a round of cash , a pound of flesh [ telecom ] \n despit the upheav across telecom , ventur capit firm are still invest in start-up . but while a promis idea and a catchi name were enough to guarante million in fund at the peak of the dotcom frenzi , now start-up must prove-their long-term viabil , and be will to conced control of their busi to their vc suitor","ordered_present_kp":[37,84,290],"keyphrases":["telecom","venture capital firms","viability"],"prmu":["P","P","P"]} {"id":"863","title":"A scanline-based algorithm for the 2D free-form bin packing problem","abstract":"This paper describes a heuristic algorithm for the 2D free-form bin packing (2D-FBP) problem. Given a set of 2D free-form bins and a set of 2D free-form items, the 2D-FBP problem is to lay out items inside one or more bins in such a way that the number of bins used is minimized, and for each bin, the yield is maximized. The proposed algorithm handles the problem as a variant of the 1D problem; i.e., items and bins are approximated as sets of scanlines, and scanlines are packed. The details of the algorithm are given, and its application to a nesting problem in a shipbuilding company is reported. The proposed algorithm consists of the basic and the group placement algorithms. The basic placement algorithm is a variant of the first-fit decreasing algorithm which is simply extended from the 1D case to the 2D case by a novel scanline approximation. A numerical study with real instances shows that the basic placement algorithm has sufficient performance for most of the instances, however, the group placement algorithm is required when items must be aligned in columns.
The qualities of the resulting layouts are good enough for practical use, and the processing times are good","tok_text":"a scanline-bas algorithm for the 2d free-form bin pack problem \n thi paper describ a heurist algorithm for the 2d free-form bin pack ( 2d-fbp ) problem . given a set of 2d free-form bin and a set of 2d free-form item , the 2d-fbp problem is to lay out item insid one or more bin in such a way that the number of bin use is minim , and for each bin , the yield is maxim . the propos algorithm handl the problem as a variant of the 1d problem ; i.e. , item and bin are approxim as set of scanlin , and scanlin are pack . the detail of the algorithm are given , and it applic to a nest problem in a shipbuild compani is report . the propos algorithm consist of the basic and the group placement algorithm . the basic placement algorithm is a variant of the first-fit decreas algorithm which is simpli extend from the 1d case to the 2d case by a novel scanlin approxim . a numer studi with real instanc show that the basic placement algorithm ha suffici perform for most of the instanc , howev , the group placement algorithm is requir when item must be align in column . the qualiti of the result layout are good enough for practic use , and the process time are good","ordered_present_kp":[2,33,85,223,578,323,596,676,754],"keyphrases":["scanline-based algorithm","2D free-form bin packing problem","heuristic algorithm","2D-FBP problem","minimization","nesting problem","shipbuilding company","group placement algorithm","first-fit decreasing algorithm","irregular cutting","irregular packing","yield maximization"],"prmu":["P","P","P","P","P","P","P","P","P","U","M","R"]} {"id":"1430","title":"The free lunch is over: online content subscriptions on the rise","abstract":"High need, rather than high use, may be what really determines a user's willingness to pay. 
Retooling and targeting content may be a sharper strategy than trying to re-educate users that it is time to pay up for material that has been free. Waiting for a paradigm shift in general user attitudes about paying for online content could be a fool's errand","tok_text":"the free lunch is over : onlin content subscript on the rise \n high need , rather than high use , may be what realli determin a user 's willing to pay . retool and target content may be a sharper strategi than tri to re-educ user that it is time to pay up for materi that ha been free . wait for a paradigm shift in gener user attitud about pay for onlin content could be a fool 's errand","ordered_present_kp":[25],"keyphrases":["online content subscriptions","content retooling","content targeting","pay-to-play business models"],"prmu":["P","R","R","U"]} {"id":"570","title":"Prediction and compensation of dynamic errors for coordinate measuring machines","abstract":"Coordinate measuring machines (CMMs) are already widely utilized as measuring tools in the modern manufacturing industry. Rapidly approaching now is the trend for next-generation CMMs. However, the increases in measuring velocity of CMM applications are limited by dynamic errors that occur in CMMs. In this paper a systematic approach for modeling the dynamic errors of a touch-trigger probe CMM is developed through theoretical analysis and experimental study. An overall analysis of the dynamic errors of CMMs is conducted, with weak components of the CMM identified by a laser interferometer. The probing process, as conducted with a touch-trigger probe, is analyzed. The dynamic errors are measured, modeled, and predicted using neural networks. The results indicate that, using this mode, it is possible to compensate for the dynamic errors of CMMs","tok_text":"predict and compens of dynam error for coordin measur machin \n coordin measur machin ( cmm ) are alreadi wide util as measur tool in the modem manufactur industri . 
rapidli approach now is the trend for next-gener cmm . howev , the increas in measur veloc of cmm applic are limit by dynam error that occur in cmm . in thi paper a systemat approach for model the dynam error of a touch-trigg probe cmm is develop through theoret analysi and experiment studi . an overal analysi of the dynam error of cmm is conduct , with weak compon of the cmm identifi by a laser interferomet . the probe process , as conduct with a touch-trigg probe , is analyz . the dynam error are measur , model , and predict use neural network . the result indic that , use thi mode , it is possibl to compens for the dynam error of cmm","ordered_present_kp":[39,23,379,558,702,143,12],"keyphrases":["compensation","dynamic errors","coordinate measuring machines","manufacturing industry","touch-trigger probe","laser interferometer","neural networks","inertial forces"],"prmu":["P","P","P","P","P","P","P","U"]} {"id":"1125","title":"Structure of weakly invertible semi-input-memory finite automata with delay 1","abstract":"Semi-input-memory finite automata, a kind of finite automata introduced by the first author of this paper for studying error propagation, are a generalization of input memory finite automata by appending an autonomous finite automaton component. In this paper, we give a characterization of the structure of weakly invertible semi-input-memory finite automata with delay 1, in which the state graph of each autonomous finite automaton is a cycle. From a result on mutual invertibility of finite automata obtained by the authors recently, it leads to a characterization of the structure of feedforward inverse finite automata with delay 1","tok_text":"structur of weakli invert semi-input-memori finit automata with delay 1 \n semi-input-memori finit automata , a kind of finit automata introduc by the first author of thi paper for studi error propag , are a gener of input memori finit automata by append an autonom finit automaton compon . 
in thi paper , we give a character of the structur of weakli invert semi-input-memori finit automata with delay 1 , in which the state graph of each autonom finit automaton is a cycl . from a result on mutual invert of finit automata obtain by the author recent , it lead to a character of the structur of feedforward invers finit automata with delay 1","ordered_present_kp":[44,26,19,26,12,64,419,596],"keyphrases":["weakly invertible","invertibility","semi-input-memory","semi-input-memory finite automata","finite automata","delay 1","state graph","feedforward inverse finite automata"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"1160","title":"Monoids all polygons over which are omega -stable: proof of the Mustafin-Poizat conjecture","abstract":"A monoid S is called an omega -stabilizer (superstabilizer, or stabilizer) if every S-polygon has an omega -stable (superstable, or stable) theory. It is proved that every omega -stabilizer is a regular monoid. This confirms the Mustafin-Poizat conjecture and allows us to end up the description of omega -stabilizers","tok_text":"monoid all polygon over which are omega -stabl : proof of the mustafin-poizat conjectur \n a monoid s is call an omega -stabil ( superstabil , or stabil ) if everi s-polygon ha an omega -stabl ( superst , or stabl ) theori . it is prove that everi omega -stabil is a regular monoid . thi confirm the mustafin-poizat conjectur and allow us to end up the descript of omega -stabil","ordered_present_kp":[0,112,163,266,62],"keyphrases":["monoids all polygons","Mustafin-Poizat conjecture","omega -stabilizer","S-polygon","regular monoid"],"prmu":["P","P","P","P","P"]} {"id":"119","title":"JPEG2000: standard for interactive imaging","abstract":"JPEG2000 is the latest image compression standard to emerge from the Joint Photographic Experts Group (JPEG) working under the auspices of the International Standards Organization. 
Although the new standard does offer superior compression performance to JPEG, JPEG2000 provides a whole new way of interacting with compressed imagery in a scalable and interoperable fashion. This paper provides a tutorial-style review of the new standard, explaining the technology on which it is based and drawing comparisons with JPEG and other compression standards. The paper also describes new work, exploiting the capabilities of JPEG2000 in client-server systems for efficient interactive browsing of images over the Internet","tok_text":"jpeg2000 : standard for interact imag \n jpeg2000 is the latest imag compress standard to emerg from the joint photograph expert group ( jpeg ) work under the auspic of the intern standard organ . although the new standard doe offer superior compress perform to jpeg , jpeg2000 provid a whole new way of interact with compress imageri in a scalabl and interoper fashion . thi paper provid a tutorial-styl review of the new standard , explain the technolog on which it is base and draw comparison with jpeg and other compress standard . the paper also describ new work , exploit the capabl of jpeg2000 in client-serv system for effici interact brows of imag over the internet","ordered_present_kp":[0,24,63,104,172,404,603],"keyphrases":["JPEG2000","interactive imaging","image compression","Joint Photographic Experts Group","International Standards Organization","review","client-server systems","scalable compression","interoperable compression"],"prmu":["P","P","P","P","P","P","P","R","R"]} {"id":"634","title":"An approximation to the F distribution using the chi-square distribution","abstract":"For the cumulative distribution function (c.d.f.) of the F distribution, F(x; k, n), with associated degrees of freedom, k and n, a shrinking factor approximation (SFA), G( lambda kx; k), is proposed for large n and any fixed k, where G(x; k) is the chi-square c.d.f. with degrees of freedom, k, and lambda = lambda (kx; n) is the shrinking factor. 
Numerical analysis indicates that for n\/k >or= 3, approximation accuracy of the SFA is to the fourth decimal place for most small values of k. This is a substantial improvement on the accuracy that is achievable using the normal, ordinary chi-square, and Scheffe-Tukey approximations. In addition, it is shown that the theoretical approximation error of the SFA, |F(x; k,n)-G( lambda kx; k)|, is O(1\/n\/sup 2\/) uniformly over x","tok_text":"an approxim to the f distribut use the chi-squar distribut \n for the cumul distribut function ( c.d.f . ) of the f distribut , f(x ; k , n ) , with associ degre of freedom , k and n , a shrink factor approxim ( sfa ) , g ( lambda kx ; k ) , is propos for larg n and ani fix k , where g(x ; k ) is the chi-squar c.d.f . with degre of freedom , k , and lambda = lambda ( kx ; n ) is the shrink factor . numer analysi indic that for n \/ k > or= 3 , approxim accuraci of the sfa is to the fourth decim place for most small valu of k. thi is a substanti improv on the accuraci that is achiev use the normal , ordinari chi-squar , and scheffe-tukey approxim . in addit , it is shown that the theoret approxim error of the sfa , |f(x ; k , n)-g ( lambda kx ; k)| , is o(1 \/ n \/ sup 2\/ ) uniformli over x","ordered_present_kp":[19,69,155,186,39,401],"keyphrases":["F distribution","chi-square distribution","cumulative distribution function","degrees of freedom","shrinking factor approximation","numerical analysis"],"prmu":["P","P","P","P","P","P"]} {"id":"671","title":"Expert advice - how can my organisation take advantage of reverse auctions without jeopardising existing supplier relationships?","abstract":"In a recent survey, AMR Research found that companies that use reverse auctions to negotiate prices with suppliers typically achieve savings of between 10% and 15% on direct goods and between 20% and 25% on indirect goods, and can slash sourcing cycle times from months to weeks. Suppliers, however, are less enthusiastic. 
They believe that these savings are achieved only by stripping the human element out of negotiations and evaluating bids on price alone, which drives down their profit margins. As a result, reverse auctions carry the risk of jeopardising long-term and trusted relationships. Suppliers that have not been involved in a reverse auction before typically fear the bidding event itself - arguably the most theatrical and, therefore, most hyped-up part of the process. Although it may only last one hour, weeks of preparation go into setting up a successful bidding event","tok_text":"expert advic - how can my organis take advantag of revers auction without jeopardis exist supplier relationship ? \n in a recent survey , amr research found that compani that use revers auction to negoti price with supplier typic achiev save of between 10 % and 15 % on direct good and between 20 % and 25 % on indirect good , and can slash sourc cycl time from month to week . supplier , howev , are less enthusiast . they believ that these save are achiev onli by strip the human element out of negoti and evalu bid on price alon , which drive down their profit margin . as a result , revers auction carri the risk of jeopardis long-term and trust relationship . supplier that have not been involv in a revers auction befor typic fear the bid event itself - arguabl the most theatric and , therefor , most hyped-up part of the process . although it may onli last one hour , week of prepar go into set up a success bid event","ordered_present_kp":[51,90,883],"keyphrases":["reverse auctions","supplier relationships","preparation","Request For Quotation"],"prmu":["P","P","P","U"]} {"id":"1224","title":"Formalization of weighted factors analysis","abstract":"Weighted factors analysis (WeFA) has been proposed as a new approach for elicitation, representation, and manipulation of knowledge about a given problem, generally at a high and strategic level. 
Central to this proposal is that a group of experts in the area of the problem can identify a hierarchy of factors with positive or negative influences on the problem outcome. The tangible output of WeFA is a directed weighted graph called a WeFA graph. This is a set of nodes denoting factors that can directly or indirectly influence an overall aim of the graph. The aim is also represented by a node. Each directed arc is a direct influence of one factor on another. A chain of directed arcs indicates an indirect influence. The influences may be identified as either positive or negative. For example, sales and costs are two factors that influence the aim of profitability in an organization. Sales has a positive influence on profitability and costs has a negative influence on profitability. In addition, the relative significance of each influence is represented by a weight. We develop Binary WeFA which is a variant of WeFA where the factors in the graph are restricted to being either true or false. Imposing this restriction on a WeFA graph allows us to be more precise about the meaning of the graph and of reasoning in it. Binary WeFA is a new proposal that provides a formal yet sufficiently simple language for logic-based argumentation for use by business people in decision-support and knowledge management. Whilst Binary WeFA is expressively simpler than other logic-based argumentation formalisms, it does incorporate a novel formalization of the notion of significance","tok_text":"formal of weight factor analysi \n weight factor analysi ( wefa ) ha been propos as a new approach for elicit , represent , and manipul of knowledg about a given problem , gener at a high and strateg level . central to thi propos is that a group of expert in the area of the problem can identifi a hierarchi of factor with posit or neg influenc on the problem outcom . the tangibl output of wefa is a direct weight graph call a wefa graph . 
thi is a set of node denot factor that can directli or indirectli influenc an overal aim of the graph . the aim is also repres by a node . each direct arc is a direct influenc of one factor on anoth . a chain of direct arc indic an indirect influenc . the influenc may be identifi as either posit or neg . for exampl , sale and cost are two factor that influenc the aim of profit in an organ . sale ha a posit influenc on profit and cost ha a neg influenc on profit . in addit , the rel signific of each influenc is repres by a weight . we develop binari wefa which is a variant of wefa where the factor in the graph are restrict to be either true or fals . impos thi restrict on a wefa graph allow us to be more precis about the mean of the graph and of reason in it . binari wefa is a new propos that provid a formal yet suffici simpl languag for logic-bas argument for use by busi peopl in decision-support and knowledg manag . whilst binari wefa is express simpler than other logic-bas argument formal , it doe incorpor a novel formal of the notion of signific","ordered_present_kp":[10,927,427,584,813,826,988,1195,1289,1333,1354,400],"keyphrases":["weighted factors analysis","directed weighted graph","WeFA graph","directed arc","profitability","organization","significance","Binary WeFA","reasoning","logic-based argumentation","decision-support","knowledge management","knowledge elicitation","knowledge representation","knowledge manipulation"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R","R","R"]} {"id":"1261","title":"Topology-adaptive modeling of objects using surface evolutions based on 3D mathematical morphology","abstract":"Level set methods were proposed mainly by mathematicians for constructing a model of a 3D object of arbitrary topology. However, those methods are computationally inefficient due to repeated distance transformations and increased dimensions. 
In the paper, we propose a new method of modeling fast objects of arbitrary topology by using a surface evolution approach based on mathematical morphology. Given sensor data covering the whole object surface, the method begins with an initial approximation of the object by evolving a closed surface into a model topologically equivalent to the real object. The refined approximation is then performed using energy minimization. The method has been applied in several experiments using range data, and the results are reported in the paper","tok_text":"topology-adapt model of object use surfac evolut base on 3d mathemat morpholog \n level set method were propos mainli by mathematician for construct a model of a 3d object of arbitrari topolog . howev , those method are comput ineffici due to repeat distanc transform and increas dimens . in the paper , we propos a new method of model fast object of arbitrari topolog by use a surfac evolut approach base on mathemat morpholog . given sensor data cover the whole object surfac , the method begin with an initi approxim of the object by evolv a close surfac into a model topolog equival to the real object . the refin approxim is then perform use energi minim . 
the method ha been appli in sever experi use rang data , and the result are report in the paper","ordered_present_kp":[81,0,35,57,161,174,242,504,611,646,706],"keyphrases":["topology-adaptive modeling","surface evolutions","3D mathematical morphology","level set methods","3D object","arbitrary topology","repeated distance transformations","initial approximation","refined approximation","energy minimization","range data","pseudo curvature flow"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","U"]} {"id":"802","title":"A brief history of electronic reserves","abstract":"Electronic reserves has existed as a library service for barely ten years, yet its history, however brief, is important as an indicator of the direction being taken by the profession of Librarianship as a whole. Recent improvements in technology and a desire to provide better service to students and faculty have resulted in the implementation of e-reserves by ever greater numbers of academic libraries. Yet a great deal of confusion still surrounds the issue of copyright compliance. Negotiation, litigation, and legislation in particular have framed the debate over the application of fair use to an e-reserves environment, and the question of whether or not permission fees should be paid to rights holders, but as of yet no definitive answers or standards have emerged","tok_text":"a brief histori of electron reserv \n electron reserv ha exist as a librari servic for bare ten year , yet it histori , howev brief , is import as an indic of the direct be taken by the profess of librarianship as a whole . recent improv in technolog and a desir to provid better servic to student and faculti have result in the implement of e-reserv by ever greater number of academ librari . yet a great deal of confus still surround the issu of copyright complianc . 
negoti , litig , and legisl in particular have frame the debat over the applic of fair use to an e-reserv environ , and the question of whether or not permiss fee should be paid to right holder , but as of yet no definit answer or standard have emerg","ordered_present_kp":[19,67,196,289,301,376,447,469,478,490,566,620],"keyphrases":["electronic reserves","library service","librarianship","students","faculty","academic libraries","copyright compliance","negotiation","litigation","legislation","e-reserves environment","permission fees"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"847","title":"A gendered view of computer professionals: preliminary results of a survey","abstract":"The under-representation of women in the computing profession in many parts the western world has received our attention through numerous publications, the noticeable low representation of women at computer science conferences and in the lecture halls. Over the past two decades, the situation had become worse. This paper seeks to add to the dialogue by presenting preliminary findings from a research project conducted in four countries. The aim of this research was to gain an insight into the perceptions future computer professionals hold on the category of employment loosely defined under the term of \"a computer professional.\" One goal was to get insight into whether or not there is a difference between female and mate students regarding their view of computer professionals. 
Other goals were to determine if there was any difference between female and male students in different parts of the world, as well as who or what most influences the students to undertake their courses in computing","tok_text":"a gender view of comput profession : preliminari result of a survey \n the under-represent of women in the comput profess in mani part the western world ha receiv our attent through numer public , the notic low represent of women at comput scienc confer and in the lectur hall . over the past two decad , the situat had becom wors . thi paper seek to add to the dialogu by present preliminari find from a research project conduct in four countri . the aim of thi research wa to gain an insight into the percept futur comput profession hold on the categori of employ loos defin under the term of \" a comput profession . \" one goal wa to get insight into whether or not there is a differ between femal and mate student regard their view of comput profession . other goal were to determin if there wa ani differ between femal and male student in differ part of the world , as well as who or what most influenc the student to undertak their cours in comput","ordered_present_kp":[17,558,703],"keyphrases":["computing profession","employment","mate students","women under-representation","future computer professional perceptions","female students","computing courses"],"prmu":["P","P","P","R","R","R","R"]} {"id":"1080","title":"Car-caravan snaking. 2 Active caravan braking","abstract":"For part 1, see ibid., p.707-22. Founded on the review and results of Part 1, Part 2 contains a description of the virtual design of an active braking system for caravans or other types of trailer, to suppress snaking vibrations, while being simple from a practical viewpoint. The design process and the design itself are explained. 
The performance is examined by simulations and it is concluded that the system is effective, robust and realizable with modest and available components","tok_text":"car-caravan snake . 2 activ caravan brake \n for part 1 , see ibid . , p.707 - 22 . found on the review and result of part 1 , part 2 contain a descript of the virtual design of an activ brake system for caravan or other type of trailer , to suppress snake vibrat , while be simpl from a practic viewpoint . the design process and the design itself are explain . the perform is examin by simul and it is conclud that the system is effect , robust and realiz with modest and avail compon","ordered_present_kp":[0,22,159,228],"keyphrases":["car-caravan snaking","active caravan braking","virtual design","trailer","snaking vibrations suppression","dynamics"],"prmu":["P","P","P","P","R","U"]} {"id":"1451","title":"From information gateway to digital library management system: a case analysis","abstract":"This paper discusses the design, implementation and evolution of the Cornell University Library Gateway using the case analysis method. It diagnoses the Gateway within the conceptual framework of definitions and best practices associated with information gateways, portals, and emerging digital library management systems, in particular the product ENCompass","tok_text":"from inform gateway to digit librari manag system : a case analysi \n thi paper discuss the design , implement and evolut of the cornel univers librari gateway use the case analysi method . it diagnos the gateway within the conceptu framework of definit and best practic associ with inform gateway , portal , and emerg digit librari manag system , in particular the product encompass","ordered_present_kp":[23,128,5,299,373],"keyphrases":["information gateways","digital library management system","Cornell University Library Gateway","portals","ENCompass","metadata"],"prmu":["P","P","P","P","P","U"]} {"id":"1414","title":"Survey says! 
[online world of polls and surveys]","abstract":"Many content managers miss the fundamental interactivity of the Web by not using polls and surveys. Using interactive features-like a poll or quiz-offers your readers an opportunity to become more engaged in your content. Using a survey to gather feedback about your content provides cost-effective data to help make modifications or plot the appropriate course of action. The Web has allowed us to take traditional market research and turn it on its ear. Surveys and polls can be conducted faster and cheaper than with telephone and mail. But if you are running a Web site, should you care about polls and surveys? Do you know the difference between the two in Web-speak?","tok_text":"survey say ! [ onlin world of poll and survey ] \n mani content manag miss the fundament interact of the web by not use poll and survey . use interact features-lik a poll or quiz-off your reader an opportun to becom more engag in your content . use a survey to gather feedback about your content provid cost-effect data to help make modif or plot the appropri cours of action . the web ha allow us to take tradit market research and turn it on it ear . survey and poll can be conduct faster and cheaper than with telephon and mail . but if you are run a web site , should you care about poll and survey ? do you know the differ between the two in web-speak ?","ordered_present_kp":[30,0,55],"keyphrases":["surveys","polls","content managers","site owners","World Wide Web","site feedback"],"prmu":["P","P","P","M","M","R"]} {"id":"1339","title":"Edge-colorings with no large polychromatic stars","abstract":"Given a graph G and a positive integer r, let f\/sub r\/(G) denote the largest number of colors that can be used in a coloring of E(G) such that each vertex is incident to at most r colors. For all positive integers n and r, we determine f\/sub r\/(K\/sub n,n\/) exactly and f\/sub r\/(K\/sub n\/) within 1. In doing so, we disprove a conjecture by Y. 
Manoussakis et al. (1996)","tok_text":"edge-color with no larg polychromat star \n given a graph g and a posit integ r , let f \/ sub r\/(g ) denot the largest number of color that can be use in a color of e(g ) such that each vertex is incid to at most r color . for all posit integ n and r , we determin f \/ sub r\/(k \/ sub n , n\/ ) exactli and f \/ sub r\/(k \/ sub n\/ ) within 1 . in do so , we disprov a conjectur by y. manoussaki et al . ( 1996 )","ordered_present_kp":[24,65,65],"keyphrases":["polychromatic stars","positive integer","positive integer","edge colorings","positive integers"],"prmu":["P","P","P","M","P"]} {"id":"791","title":"The rise and fall and rise again of customer care","abstract":"Taking care of customers has never gone out of style, but as the recession fades, interest is picking up in a significant retooling of the CRM solutions banks have been using. The goal: usable knowledge to help improve service","tok_text":"the rise and fall and rise again of custom care \n take care of custom ha never gone out of style , but as the recess fade , interest is pick up in a signific retool of the crm solut bank have been use . the goal : usabl knowledg to help improv servic","ordered_present_kp":[214,182],"keyphrases":["banks","usable knowledge","customer relationship management"],"prmu":["P","P","M"]} {"id":"1381","title":"An augmented spatial digital tree algorithm for contact detection in computational mechanics","abstract":"Based on the understanding of existing spatial digital tree-based contact detection approaches, and the alternating digital tree (ADT) algorithm in particular, a more efficient algorithm, termed the augmented spatial digital tree (ASDT) algorithm, is proposed in the present work. The ASDT algorithm adopts a different point representation scheme that uses only the lower comer vertex to represent a (hyper-)rectangle, with the upper comer vertex serving as the augmented information. 
Consequently, the ASDT algorithm can keep the working space the same as the original n-dimensional space and, in general, a much better balanced tree can be expected. This, together with the introduction of an additional bounding subregion for the rectangles associated with each tree node, makes it possible to significantly reduce the number of node visits in the region search, although each node visit may be slightly more expensive. Three examples arising in computational mechanics are presented to provide an assessment of the performance of the ASDT. The numerical results indicate that the ASDT is, at least, over 3.9 times faster than the ADT","tok_text":"an augment spatial digit tree algorithm for contact detect in comput mechan \n base on the understand of exist spatial digit tree-bas contact detect approach , and the altern digit tree ( adt ) algorithm in particular , a more effici algorithm , term the augment spatial digit tree ( asdt ) algorithm , is propos in the present work . the asdt algorithm adopt a differ point represent scheme that use onli the lower comer vertex to repres a ( hyper-)rectangl , with the upper comer vertex serv as the augment inform . consequ , the asdt algorithm can keep the work space the same as the origin n-dimension space and , in gener , a much better balanc tree can be expect . thi , togeth with the introduct of an addit bound subregion for the rectangl associ with each tree node , make it possibl to significantli reduc the number of node visit in the region search , although each node visit may be slightli more expens . three exampl aris in comput mechan are present to provid an assess of the perform of the asdt . 
the numer result indic that the asdt is , at least , over 3.9 time faster than the adt","ordered_present_kp":[3,44,62,469],"keyphrases":["augmented spatial digital tree algorithm","contact detection","computational mechanics","upper comer vertex","alternating digital tree algorithm","augmented data structure","spatial binary tree-based contact detection approaches"],"prmu":["P","P","P","P","R","M","M"]} {"id":"1038","title":"The analysis and control of longitudinal vibrations from wave viewpoint","abstract":"The analysis and control of longitudinal vibrations in a rod from feedback wave viewpoint are synthesized. Both collocated and noncollocated feedback wave control strategies are explored. The control design is based on the local properties of wave transmission and reflection in the vicinity of the control force applied area, hence there is no complex closed form solution involved. The controller is designed to achieve various goals, such as absorbing the incoming vibration energy, creating a vibration free zone and eliminating standing waves in the structure. The findings appear to be very useful in practice due to the simplicity in the implementation of the controllers","tok_text":"the analysi and control of longitudin vibrat from wave viewpoint \n the analysi and control of longitudin vibrat in a rod from feedback wave viewpoint are synthes . both colloc and noncolloc feedback wave control strategi are explor . the control design is base on the local properti of wave transmiss and reflect in the vicin of the control forc appli area , henc there is no complex close form solut involv . the control is design to achiev variou goal , such as absorb the incom vibrat energi , creat a vibrat free zone and elimin stand wave in the structur . 
the find appear to be veri use in practic due to the simplic in the implement of the control","ordered_present_kp":[126,180,238,286,333,376,533,481,505],"keyphrases":["feedback waves","noncollocated feedback wave control","control design","wave transmission","control force","complex closed form solution","vibration energy","vibration free zone","standing waves","longitudinal vibration control","collocated feedback wave control","wave reflection"],"prmu":["P","P","P","P","P","P","P","P","P","R","R","R"]} {"id":"1040","title":"CRONE control: principles and extension to time-variant plants with asymptotically constant coefficients","abstract":"The principles of CRONE control, a frequency-domain robust control design methodology based on fractional differentiation, are presented. Continuous time-variant plants with asymptotically constant coefficients are analysed in the frequency domain, through their representation using time-variant frequency responses. A stability theorem for feedback systems including time-variant plants with asymptotically constant coefficients is proposed. Finally, CRONE control is extended to robust control of these plants","tok_text":"crone control : principl and extens to time-vari plant with asymptot constant coeffici \n the principl of crone control , a frequency-domain robust control design methodolog base on fraction differenti , are present . continu time-vari plant with asymptot constant coeffici are analys in the frequenc domain , through their represent use time-vari frequenc respons . a stabil theorem for feedback system includ time-vari plant with asymptot constant coeffici is propos . 
final , crone control is extend to robust control of these plant","ordered_present_kp":[0,39,60,123,181,337,368,387,140],"keyphrases":["CRONE control","time-variant plants","asymptotically constant coefficients","frequency-domain robust control design","robust control","fractional differentiation","time-variant frequency responses","stability theorem","feedback systems","automatic control"],"prmu":["P","P","P","P","P","P","P","P","P","M"]} {"id":"1005","title":"The average-case identifiability and controllability of large scale systems","abstract":"Needs for increased product quality, reduced pollution, and reduced energy and material consumption are driving enhanced process integration. This increases the number of manipulated and measured variables required by the control system to achieve its objectives. This paper addresses the question of whether processes tend to become increasingly more difficult to identify and control as the process dimension increases. Tools and results of multivariable statistics are used to show that, under a variety of assumed distributions on the elements, square processes of higher dimension tend to be more difficult to identify and control, whereas the expected controllability and identifiability of nonsquare processes depend on the relative numbers of measured and manipulated variables. These results suggest that the procedure of simplifying the control problem so that only a square process is considered is a poor practice for large scale systems","tok_text":"the average-cas identifi and control of larg scale system \n need for increas product qualiti , reduc pollut , and reduc energi and materi consumpt are drive enhanc process integr . thi increas the number of manipul and measur variabl requir by the control system to achiev it object . thi paper address the question of whether process tend to becom increasingli more difficult to identifi and control as the process dimens increas . 
tool and result of multivari statist are use to show that , under a varieti of assum distribut on the element , squar process of higher dimens tend to be more difficult to identifi and control , wherea the expect control and identifi of nonsquar process depend on the rel number of measur and manipul variabl . these result suggest that the procedur of simplifi the control problem so that onli a squar process is consid is a poor practic for larg scale system","ordered_present_kp":[40,4,157,452,670,219,726],"keyphrases":["average-case identifiability","large scale systems","enhanced process integration","measured variables","multivariable statistics","nonsquare processes","manipulated variables","average-case controllability","process control","high dimension square processes","process identification","Monte Carlo simulations","chemical engineering"],"prmu":["P","P","P","P","P","P","P","R","R","M","M","U","U"]} {"id":"887","title":"Towards strong stability of concurrent repetitive processes sharing resources","abstract":"The paper presents a method for design of stability conditions of concurrent, repetitive processes sharing common resources. Steady-state behaviour of the system with m cyclic processes utilising a resource with the mutual exclusion is considered. Based on a recurrent equations framework necessary and sufficient conditions for the existence of maximal performance steady-state are presented. It was shown that if the conditions hold then the m-process system is marginally stable, i.e., a steady-state of the system depends on the perturbations. The problem of finding the relative positions of the processes leading to waiting-free (maximal efficiency) steady-states of the system is formulated as a constraint logic programming problem. An example illustrating the solving of the problem for a 3-process system using object-oriented, constraint logic programming language Oz is presented. 
A condition sufficient for strong stability of the m-process system is given. When the condition holds then for any initial phases of the processes a waiting-free steady-state will be reached","tok_text":"toward strong stabil of concurr repetit process share resourc \n the paper present a method for design of stabil condit of concurr , repetit process share common resourc . steady-st behaviour of the system with m cyclic process utilis a resourc with the mutual exclus is consid . base on a recurr equat framework necessari and suffici condit for the exist of maxim perform steady-st are present . it wa shown that if the condit hold then the m-process system is margin stabl , i.e. , a steady-st of the system depend on the perturb . the problem of find the rel posit of the process lead to waiting-fre ( maxim effici ) steady-st of the system is formul as a constraint logic program problem . an exampl illustr the solv of the problem for a 3-process system use object-ori , constraint logic program languag oz is present . a condit suffici for strong stabil of the m-process system is given . when the condit hold then for ani initi phase of the process a waiting-fre steady-st will be reach","ordered_present_kp":[7,24,154,171,212,253,289,312,358,957,658,741],"keyphrases":["strong stability","concurrent repetitive processes","common resources","steady-state behaviour","cyclic processes","mutual exclusion","recurrent equations framework","necessary and sufficient conditions","maximal performance steady-state","constraint logic programming","3-process system","waiting-free steady-states","Oz language"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"1429","title":"Online coverage of the Olympic Games","abstract":"In 1956 a new medium was evolving which helped shape not only the presentation of the Games to a worldwide audience, but created entirely new avenues for marketing and sponsorship which changed the entire economic relevance of the Games. 
The medium in 1956 was television, and the medium now, of course, is the Internet. Not since 1956 has Olympic coverage been so impacted by the onset of new technology as the current Olympiad has been. But now the IOC finds itself in another set of circumstances not altogether different from 1956","tok_text":"onlin coverag of the olymp game \n in 1956 a new medium wa evolv which help shape not onli the present of the game to a worldwid audienc , but creat entir new avenu for market and sponsorship which chang the entir econom relev of the game . the medium in 1956 wa televis , and the medium now , of cours , is the internet . not sinc 1956 ha olymp coverag been so impact by the onset of new technolog as the current olympiad ha been . but now the ioc find itself in anoth set of circumst not altogeth differ from 1956","ordered_present_kp":[21,0,168,179,213,413,444],"keyphrases":["online coverage","Olympic Games","marketing","sponsorship","economic relevance","Olympiad","IOC","online rights","e-broadcast"],"prmu":["P","P","P","P","P","P","P","M","U"]} {"id":"1341","title":"STEM: Secure Telephony Enabled Middlebox","abstract":"Dynamic applications, including IP telephony, have not seen wide acceptance within enterprises because of problems caused by the existing network infrastructure. Static elements, including firewalls and network address translation devices, are not capable of allowing dynamic applications to operate properly. The Secure Telephony Enabled Middlebox (STEM) architecture is an enhancement of the existing network design to remove the issues surrounding static devices. The architecture incorporates an improved firewall that can interpret and utilize information in the application layer of packets to ensure proper functionality. In addition to allowing dynamic applications to function normally, the STEM architecture also incorporates several detection and response mechanisms for well-known network-based vulnerabilities. 
This article describes the key components of the architecture with respect to the SIP protocol","tok_text":"stem : secur telephoni enabl middlebox \n dynam applic , includ ip telephoni , have not seen wide accept within enterpris becaus of problem caus by the exist network infrastructur . static element , includ firewal and network address translat devic , are not capabl of allow dynam applic to oper properli . the secur telephoni enabl middlebox ( stem ) architectur is an enhanc of the exist network design to remov the issu surround static devic . the architectur incorpor an improv firewal that can interpret and util inform in the applic layer of packet to ensur proper function . in addit to allow dynam applic to function normal , the stem architectur also incorpor sever detect and respons mechan for well-known network-bas vulner . thi articl describ the key compon of the architectur with respect to the sip protocol","ordered_present_kp":[7,0,63,157,205,217,41,637,389,431,531,685,715,809],"keyphrases":["STEM","Secure Telephony Enabled Middlebox","dynamic applications","IP telephony","network infrastructure","firewalls","network address translation devices","network design","static devices","application layer","STEM architecture","response mechanisms","network-based vulnerabilities","SIP protocol","detection mechanisms"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"1304","title":"Center-crossing recurrent neural networks for the evolution of rhythmic behavior","abstract":"A center-crossing recurrent neural network is one in which the null(hyper)surfaces of each neuron intersect at their exact centers of symmetry, ensuring that each neuron's activation function is centered over the range of net inputs that it receives. 
We demonstrate that relative to a random initial population, seeding the initial population of an evolutionary search with center-crossing networks significantly improves both the frequency and the speed with which high-fitness oscillatory circuits evolve on a simple walking task. The improvement is especially striking at low mutation variances. Our results suggest that seeding with center-crossing networks may often be beneficial, since a wider range of dynamics is more likely to be easily accessible from a population of center-crossing networks than from a population of random networks","tok_text":"center-cross recurr neural network for the evolut of rhythmic behavior \n a center-cross recurr neural network is one in which the null(hyper)surfac of each neuron intersect at their exact center of symmetri , ensur that each neuron 's activ function is center over the rang of net input that it receiv . we demonstr that rel to a random initi popul , seed the initi popul of an evolutionari search with center-cross network significantli improv both the frequenc and the speed with which high-fit oscillatori circuit evolv on a simpl walk task . the improv is especi strike at low mutat varianc . our result suggest that seed with center-cross network may often be benefici , sinc a wider rang of dynam is more like to be easili access from a popul of center-cross network than from a popul of random network","ordered_present_kp":[0,198,235,330,378,488,577,794],"keyphrases":["center-crossing recurrent neural networks","symmetry","activation function","random initial population","evolutionary search","high-fitness oscillatory circuits","low mutation variance","random networks","rhythmic behavior evolution","null surfaces","evolutionary algorithm","learning"],"prmu":["P","P","P","P","P","P","P","P","R","U","M","U"]} {"id":"751","title":"A new method of regression on latent variables. 
Application to spectral data","abstract":"Several applications are based on the assessment of a linear model linking a set of variables Y to a set of predictors X. In the presence of strong colinearity among predictors, as in the case with spectral data, several alternative procedures to ordinary least squares (OLS) are proposed. We discuss a new alternative approach which we refer to as regression models through constrained principal components analysis (RM-CPCA). This method basically shares certain common characteristics with PLS regression as the dependent variables play a central role in determining the latent variables to be used as predictors. Unlike PLS, however, the approach discussed leads to straightforward models. This method also bears some similarity to latent root regression analysis (LRR) that was discussed by several authors. 
moreov , a tune paramet that rang between 0 and 1 is introduc and the famili of model thu form includ sever other method as particular case","ordered_present_kp":[27,54,111,160,191,381,524,724,805],"keyphrases":["latent variables","spectral data","linear model","predictors","strong colinearity","regression models through constrained principal components analysis","dependent variables","latent root regression analysis","tuning parameter","near-IR spectroscopy"],"prmu":["P","P","P","P","P","P","P","P","P","U"]} {"id":"714","title":"Embeddings of planar graphs that minimize the number of long-face cycles","abstract":"We consider the problem of finding embeddings of planar graphs that minimize the number of long-face cycles. We prove that for any k >= 4, it is NP-complete to find an embedding that minimizes the number of face cycles of length at least k","tok_text":"embed of planar graph that minim the number of long-fac cycl \n we consid the problem of find embed of planar graph that minim the number of long-fac cycl . we prove that for ani k > or= 4 , it is np-complet to find an embed that minim the number of face cycl of length at least k","ordered_present_kp":[0,9,47],"keyphrases":["embeddings","planar graphs","long-face cycles","NP-complete problem","graph drawing"],"prmu":["P","P","P","R","M"]} {"id":"124","title":"High-speed CMOS circuits with parallel dynamic logic and speed-enhanced skewed static logic","abstract":"In this paper, we describe parallel dynamic logic (PDL) which exhibits high speed without the charge sharing problem. PDL uses only parallel-connected transistors for fast logic evaluation and is a good candidate for high-speed low-voltage operation. It has less back-bias effect compared to other logic styles, which use stacked transistors. Furthermore, PDL needs no signal ordering or tapering. PDL with speed-enhanced skewed static logic renders straightforward logic synthesis without the usual area penalty due to logic duplication. 
Our experimental results on two 32-bit carry lookahead adders using 0.25- mu m CMOS technology show that PDL with speed-enhanced skewed static (SSS) logic reduces the delay over clock-delayed (CD)-domino by 15%-27% and the power-delay product by 20%-37%","tok_text":"high-spe cmo circuit with parallel dynam logic and speed-enhanc skew static logic \n in thi paper , we describ parallel dynam logic ( pdl ) which exhibit high speed without charg share problem . pdl use onli parallel-connect transistor for fast logic evalu and is a good candid for high-spe low-voltag oper . it ha less back-bia effect compar to other logic style , which use stack transistor . furthermor , pdl need no signal order or taper . pdl with speed-enhanc skew static logic render straightforward logic synthesi without the usual area penalti due to logic duplic . our experiment result on two 32-bit carri lookahead adder use 0.25- mu m cmo technolog show that pdl with speed-enhanc skew static ( sss ) look reduc the delay over clock-delayed(cd)-domino by 15%-27 % and the power-delay product by 20%-37 %","ordered_present_kp":[0,26,51,207,290,506,610,728,784,319,375],"keyphrases":["high-speed CMOS circuits","parallel dynamic logic","speed-enhanced skewed static logic","parallel-connected transistors","low-voltage operation","back-bias effect","stacked transistors","logic synthesis","carry lookahead adders","delay","power-delay product","32 bit","0.25 micron"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","U","U"]} {"id":"967","title":"On the relationship between parametric variation and state feedback in chaos control","abstract":"In this Letter, we study the popular parametric variation chaos control and state-feedback methodologies in chaos control, and point out for the first time that they are actually equivalent in the sense that there exist diffeomorphisms that can convert one to the other for most smooth chaotic systems. 
Detailed conversions are worked out for typical discrete chaotic maps (logistic, Henon) and continuous flows (Rossler, Lorenz) for illustration. This unifies the two seemingly different approaches from the physics and the engineering communities on chaos control. This new perspective reveals some new potential applications such as chaos synchronization and normal form analysis from a unified mathematical point of view","tok_text":"on the relationship between parametr variat and state feedback in chao control \n in thi letter , we studi the popular parametr variat chao control and state-feedback methodolog in chao control , and point out for the first time that they are actual equival in the sens that there exist diffeomorph that can convert one to the other for most smooth chaotic system . detail convers are work out for typic discret chaotic map ( logist , henon ) and continu flow ( rossler , lorenz ) for illustr . thi unifi the two seemingli differ approach from the physic and the engin commun on chao control . thi new perspect reveal some new potenti applic such as chao synchron and normal form analysi from a unifi mathemat point of view","ordered_present_kp":[28,66,151,425,446,286],"keyphrases":["parametric variation","chaos control","state-feedback","diffeomorphisms","logistic","continuous flows","Henon map","Rossler system","Lorenz system"],"prmu":["P","P","P","P","P","P","R","R","R"]} {"id":"922","title":"Smart collision information processing sensors for fast moving objects","abstract":"In this technical note we survey the area of smart collision information processing sensors. We review the existing technologies to detect collision or overlap between fast moving physical objects or objects in virtual environments, physical environments or a combination of physical and virtual objects. 
We report developments in the collision detection of fast moving objects at discrete time steps such as two consecutive time frames, as well as continuous time intervals such as in an interframe collision detection system. Our discussion of computational techniques in this paper is limited to convex objects. Techniques exist however to efficiently decompose non-convex objects into convex objects. We also discuss the tracking technologies for objects from the standpoint of collision detection or avoidance","tok_text":"smart collis inform process sensor for fast move object \n in thi technic note we survey the area of smart collis inform process sensor . we review the exist technolog to detect collis or overlap between fast move physic object or object in virtual environ , physic environ or a combin of physic and virtual object . we report develop in the collis detect of fast move object at discret time step such as two consecut time frame , as well as continu time interv such as in an interfram collis detect system . our discuss of comput techniqu in thi paper is limit to convex object . techniqu exist howev to effici decompos non-convex object into convex object . 
we also discuss the track technolog for object from the standpoint of collis detect or avoid","ordered_present_kp":[240,258,341,378,408,441,475,6,39,564,679],"keyphrases":["collision information processing","fast moving objects","virtual environments","physical environments","collision detection","discrete time steps","consecutive time frames","continuous time intervals","interframe collision detection","convex objects","tracking","nonconvex objects","air traffic control","smart sensors","military training","high speed machining"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","M","U","R","U","U"]} {"id":"76","title":"Reaching strong consensus in a general network","abstract":"The strong consensus (SC) problem is a variant of the conventional distributed consensus problem (also known as the Byzantine agreement problem). The SC problem requires that the agreed value among fault-free processors be one of the fault-free processor's initial values. Originally, the problem was studied in a fully connected network with malicious faulty processors. In this paper, the SC problem is re-examined in a general network, in which the components (processors and communication links) may be subjected to different faulty types simultaneously (also called the hybrid fault model or mixed faulty types) and the network topology does not have to be fully connected. The proposed protocol can tolerate the maximum number of tolerable faulty components such that each fault-free processor obtains a common value for the SC problem in a general network","tok_text":"reach strong consensu in a gener network \n the strong consensu ( sc ) problem is a variant of the convent distribut consensu problem ( also known as the byzantin agreement problem ) . the sc problem requir that the agre valu among fault-fre processor be one of the fault-fre processor 's initi valu . origin , the problem wa studi in a fulli connect network with malici faulti processor . 
in thi paper , the sc problem is re-examin in a gener network , in which the compon ( processor and commun link ) may be subject to differ faulti type simultan ( also call the hybrid fault model or mix faulti type ) and the network topolog doe not have to be fulli connect . the propos protocol can toler the maximum number of toler faulti compon such that each fault-fre processor obtain a common valu for the sc problem in a gener network","ordered_present_kp":[106,153,231,336,565,6],"keyphrases":["strong consensus","distributed consensus problem","Byzantine agreement","fault-free processors","fully connected network","hybrid fault model","strong consensus problem","fault-tolerant distributed system"],"prmu":["P","P","P","P","P","P","R","M"]} {"id":"609","title":"Chemical production in the superlative [formaldehyde plant process control system and remote I\/O system]","abstract":"BASF commissioned the largest formaldehyde production plant in the world, in December 2000, with an annual capacity of 180000 t. The new plant, built to meet the growing demand for formaldehyde, sets new standards. Its size, technology and above all its cost-effectiveness give it a leading position internationally. To maintain such high standards by the automation technology, in addition to the trail-blazing Simatic PCS 7 process control system from Siemens, BASF selected the innovative remote I\/O system I.S.1 from R. STAHL Schaltgerate GmbH to record and to output field signals in hazardous areas Zone 1 and 2. This combination completely satisfied all technical requirements and also had the best price-performance ratio of all the solutions. 25 remote I\/O field stations were designed and matched to the needs of the formaldehyde plant","tok_text":"chemic product in the superl [ formaldehyd plant process control system and remot i \/ o system ] \n basf commiss the largest formaldehyd product plant in the world , in decemb 2000 , with an annual capac of 180000 t. 
the new plant , built to meet the grow demand for formaldehyd , set new standard . it size , technolog and abov all it cost-effect give it a lead posit intern . to maintain such high standard by the autom technolog , in addit to the trail-blaz simat pc 7 process control system from siemen , basf select the innov remot i \/ o system i.s.1 from r. stahl schaltger gmbh to record and to output field signal in hazard area zone 1 and 2 . thi combin complet satisfi all technic requir and also had the best price-perform ratio of all the solut . 25 remot i \/ o field station were design and match to the need of the formaldehyd plant","ordered_present_kp":[0,22,99,415,49,449,499,530,560,719],"keyphrases":["chemical production","superlative","process control system","BASF","automation technology","trail-blazing Simatic PCS 7","Siemens","remote I\/O system I.S.1","R. STAHL Schaltgerate GmbH","price-performance ratio","formaldehyde production plant construction","cost-effective plant","signal recording","Zone 1 hazardous area","Zone 2 hazardous area","remote I\/O field station design"],"prmu":["P","P","P","P","P","P","P","P","P","P","M","R","R","R","R","R"]} {"id":"1219","title":"Knowledge organisation of product design blackboard systems via graph decomposition","abstract":"Knowledge organisation plays an important role in building a knowledge-based product design blackboard system. Well-organised knowledge sources will facilitate the effectiveness and efficiency of communication and data exchange in a blackboard system. In a previous investigation, an approach for constructing blackboard systems for product design using a non-directed graph decomposition algorithm was proposed. In this paper, the relationship between graph decomposition and the resultant blackboard system is further studied. 
A case study of a number of hypothetical blackboard systems that comprise different knowledge organisations is provided","tok_text":"knowledg organis of product design blackboard system via graph decomposit \n knowledg organis play an import role in build a knowledge-bas product design blackboard system . well-organis knowledg sourc will facilit the effect and effici of commun and data exchang in a blackboard system . in a previou investig , an approach for construct blackboard system for product design use a non-direct graph decomposit algorithm wa propos . in thi paper , the relationship between graph decomposit and the result blackboard system is further studi . a case studi of a number of hypothet blackboard system that compris differ knowledg organis is provid","ordered_present_kp":[0,20,57,124,250,542],"keyphrases":["knowledge organisation","product design blackboard systems","graph decomposition","knowledge-based product design","data exchange","case study"],"prmu":["P","P","P","P","P","P"]} {"id":"1118","title":"Run-time data-flow analysis","abstract":"Parallelizing compilers have made great progress in recent years. However, there still remains a gap between the current ability of parallelizing compilers and their final goals. In order to achieve the maximum parallelism, run-time techniques were used in parallelizing compilers during the last few years. First, this paper presents a basic run-time privatization method. The definition of run-time dead code is given and its side effect is discussed. To eliminate the imprecision caused by the run-time dead code, backward data-flow information must be used. Proteus Test, which can use backward information in run-time, is then presented to exploit more dynamic parallelism. Also, a variation of Proteus Test, the Advanced Proteus Test, is offered to achieve partial parallelism. Proteus Test was implemented on the parallelizing compiler AFT. 
At the end of this paper, the program fpppp.f of the Spec95fp Benchmark is taken as an example to show the effectiveness of the Proteus Test","tok_text":"run-tim data-flow analysi \n parallel compil have made great progress in recent year . howev , there still remain a gap between the current abil of parallel compil and their final goal . in order to achiev the maximum parallel , run-tim techniqu were use in parallel compil dure last few year . first , thi paper present a basic run-tim privat method . the definit of run-tim dead code is given and it side effect is discuss . to elimin the imprecis caus by the run-tim dead code , backward data-flow inform must be use . proteu test , which can use backward inform in run-tim , is then present to exploit more dynam parallel . also , a variat of proteu test , the advanc proteu test , is offer to achiev partial parallel . proteu test wa implement on the parallel compil aft . in the end of thi paper the program fpppp.f of spec95fp benchmark is taken as an exampl , to show the effect of proteu test","ordered_present_kp":[28,328,367,481,521,610],"keyphrases":["parallelizing compilers","run-time privatization method","run-time dead code","backward data-flow information","Proteus Test","dynamic parallelism","run-time data flow analysis"],"prmu":["P","P","P","P","P","P","M"]} {"id":"1343","title":"Estimating the intrinsic dimension of data with a fractal-based method","abstract":"In this paper, the problem of estimating the intrinsic dimension of a data set is investigated. A fractal-based approach using the Grassberger-Procaccia algorithm is proposed. Since the Grassberger-Procaccia algorithm (1983) performs badly on sets of high dimensionality, an empirical procedure that improves the original algorithm has been developed. 
The procedure has been tested on data sets of known dimensionality and on time series of the Santa Fe competition","tok_text":"estim the intrins dimens of data with a fractal-bas method \n in thi paper , the problem of estim the intrins dimens of a data set is investig . a fractal-bas approach use the grassberger-procaccia algorithm is propos . sinc the grassberger-procaccia algorithm ( 1983 ) perform badli on set of high dimension , an empir procedur that improv the origin algorithm ha been develop . the procedur ha been test on data set of known dimension and on time seri of santa fe competit","ordered_present_kp":[40,443,456],"keyphrases":["fractal-based method","time series","Santa Fe competition","data intrinsic dimension estimation","pattern recognition"],"prmu":["P","P","P","R","U"]} {"id":"1306","title":"Scalable hybrid computation with spikes","abstract":"We outline a hybrid analog-digital scheme for computing with three important features that enable it to scale to systems of large complexity: First, like digital computation, which uses several one-bit precise logical units to collectively compute a precise answer to a computation, the hybrid scheme uses several moderate-precision analog units to collectively compute a precise answer to a computation. Second, frequent discrete signal restoration of the analog information prevents analog noise and offset from degrading the computation. Third, a state machine enables complex computations to be created using a sequence of elementary computations. A natural choice for implementing this hybrid scheme is one based on spikes because spike-count codes are digital, while spike-time codes are analog. We illustrate how spikes afford easy ways to implement all three components of scalable hybrid computation.
First, as an important example of distributed analog computation, we show how spikes can create a distributed modular representation of an analog number by implementing digital carry interactions between spiking analog neurons. Second, we show how signal restoration may be performed by recursive spike-count quantization of spike-time codes. Third, we use spikes from an analog dynamical system to trigger state transitions in a digital dynamical system, which reconfigures the analog dynamical system using a binary control vector; such feedback interactions between analog and digital dynamical systems create a hybrid state machine (HSM). The HSM extends and expands the concept of a digital finite-state-machine to the hybrid domain. We present experimental data from a two-neuron HSM on a chip that implements error-correcting analog-to-digital conversion with the concurrent use of spike-time and spike-count codes. We also present experimental data from silicon circuits that implement HSM-based pattern recognition using spike-time synchrony. We outline how HSMs may be used to perform learning, vector quantization, spike pattern recognition and generation, and how they may be reconfigured","tok_text":"scalabl hybrid comput with spike \n we outlin a hybrid analog-digit scheme for comput with three import featur that enabl it to scale to system of larg complex : first , like digit comput , which use sever one-bit precis logic unit to collect comput a precis answer to a comput , the hybrid scheme use sever moderate-precis analog unit to collect comput a precis answer to a comput . second , frequent discret signal restor of the analog inform prevent analog nois and offset from degrad the comput . third , a state machin enabl complex comput to be creat use a sequenc of elementari comput . a natur choic for implement thi hybrid scheme is one base on spike becaus spike-count code are digit , while spike-tim code are analog . 
we illustr how spike afford easi way to implement all three compon of scalabl hybrid comput . first , as an import exampl of distribut analog comput , we show how spike can creat a distribut modular represent of an analog number by implement digit carri interact between spike analog neuron . second , we show how signal restor may be perform by recurs spike-count quantiz of spike-tim code . third , we use spike from an analog dynam system to trigger state transit in a digit dynam system , which reconfigur the analog dynam system use a binari control vector ; such feedback interact between analog and digit dynam system creat a hybrid state machin ( hsm ) . the hsm extend and expand the concept of a digit finite-state-machin to the hybrid domain . we present experiment data from a two-neuron hsm on a chip that implement error-correct analog-to-digit convers with the concurr use of spike-tim and spike-count code . we also present experiment data from silicon circuit that implement hsm-base pattern recognit use spike-tim synchroni . we outlin how hsm may be use to perform learn , vector quantiz , spike pattern recognit and gener , and how they may be reconfigur","ordered_present_kp":[0,27,47,307,392,452,667,1442,855,702,972,1270,1299,1559,1691,1731,1814,1822],"keyphrases":["scalable hybrid computation","spikes","hybrid analog-digital scheme","moderate-precision analog units","frequent discrete signal restoration","analog noise","spike-count codes","spike-time codes","distributed analog computation","digital carry interactions","binary control vector","feedback interactions","finite-state-machine","error-correcting analog-to-digital conversion","silicon circuits","pattern recognition","learning","vector quantization","two neuron hybrid state machine"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","M"]} {"id":"753","title":"In medias res [DVD formats]","abstract":"Four years in the making, the DVD format war rages on, no winner in sight.
Meanwhile, the spoils of war abound, and DVD media manufacturers stand poised to profit","tok_text":"in media re [ dvd format ] \n four year in the make , the dvd format war rage on , no winner insight . meanwhil , the spoil of war abound , and dvd media manufactur stand pois to profit","ordered_present_kp":[143,57],"keyphrases":["DVD format war","DVD media manufacturers","DVD-RAM","DVD+RW","DVD+R","DVD-RW","DVD-R","compatibility","writable DVD"],"prmu":["P","P","U","U","U","U","U","U","M"]} {"id":"716","title":"Algorithmic results for ordered median problems","abstract":"In a series of papers a new type of objective function in location theory, called ordered median function, has been introduced and analyzed. This objective function unifies and generalizes most common objective functions used in location theory. In this paper we identify finite dominating sets for these models and develop polynomial time algorithms together with a detailed complexity analysis","tok_text":"algorithm result for order median problem \n in a seri of paper a new type of object function in locat theori , call order median function , ha been introduc and analyz . thi object function unifi and gener most common object function use in locat theori . in thi paper we identifi finit domin set for these model and develop polynomi time algorithm togeth with a detail complex analysi","ordered_present_kp":[0,21,77,96,116,281,325,363],"keyphrases":["algorithmic results","ordered median problems","objective function","location theory","ordered median function","finite dominating sets","polynomial time algorithms","detailed complexity analysis"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"878","title":"Girls, boys, and computers","abstract":"Today North American girls, boys, teachers, and parents frequently regard computer science and programming as something boys are better at.
The author considers how many of the factors that contribute to the low participation of women in computing occur first, and perhaps most forcefully, in childhood. She presents four recommendations to address the situation","tok_text":"girl , boy , and comput \n today north american girl , boy , teacher , and parent frequent regard comput scienc and program as someth boy are better at . the author consid how mani of the factor that contribut to the low particip of women in comput occur first , and perhap most forc , in childhood . she present four recommend to address the situat","ordered_present_kp":[0,60,97,115,232,288,7],"keyphrases":["girls","boys","teachers","computer science","programming","women","childhood","gender issues"],"prmu":["P","P","P","P","P","P","P","U"]} {"id":"1042","title":"Chaotic phenomena and fractional-order dynamics in the trajectory control of redundant manipulators","abstract":"Redundant manipulators have some advantages when compared with classical arms because they allow the trajectory optimization, both on the free space and on the presence of obstacles, and the resolution of singularities. For this type of arms the proposed kinematic control algorithms adopt generalized inverse matrices but, in general, the corresponding trajectory planning schemes show important limitations. Motivated by these problems this paper studies the chaos revealed by the pseudoinverse-based trajectory planning algorithms, using the theory of fractional calculus","tok_text":"chaotic phenomena and fractional-ord dynam in the trajectori control of redund manipul \n redund manipul have some advantag when compar with classic arm becaus they allow the trajectori optim , both on the free space and on the presenc of obstacl , and the resolut of singular . for thi type of arm the propos kinemat control algorithm adopt gener invers matric but , in gener , the correspond trajectori plan scheme show import limit . 
motiv by these problem thi paper studi the chao reveal by the pseudoinverse-bas trajectori plan algorithm , use the theori of fraction calculu","ordered_present_kp":[0,22,50,72,140,174,309,341,393,562],"keyphrases":["chaotic phenomena","fractional-order dynamics","trajectory control","redundant manipulators","classical arms","trajectory optimization","kinematic control algorithms","generalized inverse matrices","trajectory planning schemes","fractional calculus"],"prmu":["P","P","P","P","P","P","P","P","P","P"]} {"id":"1007","title":"Conditions for decentralized integral controllability","abstract":"The term decentralized integral controllability (DIC) pertains to the existence of stable decentralized controllers with integral action that have closed-loop properties such as stable independent detuning. It is especially useful to select control structures systematically at the early stage of control system design because the only information needed for DIC is the steady-state process gain matrix. Here, a necessary and sufficient condition conjectured in the literature is proved. The real structured singular value which can exploit realness of the controller gain is used to describe computable conditions for DIC. The primary usage of DIC is to eliminate unworkable pairings. For this, two other simple necessary conditions are proposed. Examples are given to illustrate the effectiveness of the proposed conditions for DIC","tok_text":"condit for decentr integr control \n the term decentr integr control ( dic ) pertain to the exist of stabl decentr control with integr action that have closed-loop properti such as stabl independ detun . it is especi use to select control structur systemat at the earli stage of control system design becaus the onli inform need for dic is the steady-st process gain matrix . here , a necessari and suffici condit conjectur in the literatur is prove . 
the real structur singular valu which can exploit real of the control gain is use to describ comput condit for dic . the primari usag of dic is to elimin unwork pair . for thi , two other simpl necessari condit are propos . exampl are given to illustr the effect of the propos condit for dic","ordered_present_kp":[11,100,127,151,180,278,343,455],"keyphrases":["decentralized integral controllability","stable decentralized controllers","integral action","closed-loop properties","stable independent detuning","control system design","steady-state process gain matrix","real structured singular value","necessary sufficient conditions","systematic control structure selection","controller gain realness","unworkable pairing elimination","Schur complement"],"prmu":["P","P","P","P","P","P","P","P","R","R","R","R","U"]} {"id":"885","title":"Assignment of periods and priorities of messages and tasks in distributed control systems","abstract":"Presents a task and message-based scheduling method to guarantee the given end-to-end constraints including precedence constraints, time constraints, and period and priority of task and message. The method is an integrated one considering both tasks executed in each node and messages transmitted via the network and is designed to apply to a general distributed control system that has multiple loops and a single loop has sensor nodes with multiple sensors, actuator nodes with multiple actuators, controller nodes with multiple tasks, and several types of constraints. The assigning method of the optimal period and priority of task and message is proposed, using the presented task and message-based scheduling method","tok_text":"assign of period and prioriti of messag and task in distribut control system \n present a task and message-bas schedul method to guarante the given end-to-end constraint includ preced constraint , time constraint , and period and prioriti of task and messag . 
the method is an integr one consid both task execut in each node and messag transmit via the network and is design to appli to a gener distribut control system that ha multipl loop and a singl loop ha sensor node with multipl sensor , actuat node with multipl actuat , control node with multipl task , and sever type of constraint . the assign method of the optim period and prioriti of task and messag is propos , use the present task and message-bas schedul method","ordered_present_kp":[98,52,147,176,196],"keyphrases":["distributed control systems","message-based scheduling method","end-to-end constraints","precedence constraints","time constraints","periods assignment","priorities assignment","task-based scheduling method"],"prmu":["P","P","P","P","P","R","R","M"]} {"id":"998","title":"Discreteness and relevance: a reply to Roman Poznanski","abstract":"In reply to Poznanski (see ibid., p.435, 2002) on discreteness and relevance, Eliasmith claims that all of the concerns voiced by Poznanski in his reply fail to offer a serious challenge to the idea that continuity is irrelevant to a good understanding of cognitive systems. Eliasmith hopes that it is evident that he does not claim that the process in neural systems is discrete, but rather that a complete characterization of the process can be discrete; these of course are significantly different claims","tok_text":"discret and relev : a repli to roman poznanski \n in repli to poznanski ( see ibid . , p.435 , 2002 ) on discret and relev , eliasmith claim that all of the concern voic by poznanski in hi repli fail to offer a seriou challeng to the idea that continu is irrelev to a good understand of cognit system . 
eliasmith hope that it is evid that he doe not claim that the process in neural system is discret , but rather that a complet character of the process can be discret ; these of cours are significantli differ claim","ordered_present_kp":[0,12,243,286,375],"keyphrases":["discreteness","relevance","continuity","cognitive systems","neural systems"],"prmu":["P","P","P","P","P"]} {"id":"89","title":"A framework for rapid local area modeling for construction automation","abstract":"Rapid 3D positioning and modeling in construction can be used to more effectively plan, visualize, and communicate operations before execution. It can also help to optimize equipment operations, significantly improve safety, and enhance a remote operator's spatial perception of the workspace. A new framework for rapid local area sensing and 3D modeling for better planning and control of construction equipment operation is described and demonstrated. By combining human-assisted graphical workspace modeling with pre-stored Computer-Aided Design (CAD) models and simple sensors (such as single-axis laser rangefinders and remote video cameras), modeling time can be significantly reduced while potentially increasing modeling accuracy","tok_text":"a framework for rapid local area model for construct autom \n rapid 3d posit and model in construct can be use to more effect plan , visual , and commun oper befor execut . it can also help to optim equip oper , significantli improv safeti , and enhanc a remot oper 's spatial percept of the workspac . a new framework for rapid local area sens and 3d model for better plan and control of construct equip oper is describ and demonstr . 
by combin human-assist graphic workspac model with pre-stor computer-aid design ( cad ) model and simpl sensor ( such as single-axi laser rangefind and remot video camera ) , model time can be significantli reduc while potenti increas model accuraci","ordered_present_kp":[16,43,61,198,268,322,348,445,556,587],"keyphrases":["rapid local area modeling","construction automation","rapid 3D positioning","equipment operations","spatial perception","rapid local area sensing","3D modeling","human-assisted graphical workspace modeling","single-axis laser rangefinders","remote video cameras","pre-stored Computer-Aided Design models"],"prmu":["P","P","P","P","P","P","P","P","P","P","R"]} {"id":"74","title":"End-user perspectives on the uptake of computer supported cooperative working","abstract":"Researchers in information systems have produced a rich collection of meta-analyses and models to further understanding of factors influencing the uptake of information technologies. In the domain of CSCW, however, these models have largely been neglected, and while there are many case studies, no systematic account of uptake has been produced. We use findings from information systems research to structure a meta-analysis of uptake issues as reported in CSCW case studies, supplemented by a detailed re-examination of one of our own case studies from this perspective. This shows that while there are some factors which seem to be largely specific to CSCW introductions, many of the case study results are very similar to standard IS findings. We conclude by suggesting how the two communities of researchers might build on each other's work, and finally propose activity theory as a means of integrating the two perspectives","tok_text":"end-us perspect on the uptak of comput support cooper work \n research in inform system have produc a rich collect of meta-analys and model to further understand of factor influenc the uptak of inform technolog . 
in the domain of cscw , howev , these model have larg been neglect , and while there are mani case studi , no systemat account of uptak ha been produc . we use find from inform system research to structur a meta-analysi of uptak issu as report in cscw case studi , supplement by a detail re-examin of one of our own case studi from thi perspect . thi show that while there are some factor which seem to be larg specif to cscw introduct , mani of the case studi result are veri similar to standard is find . we conclud by suggest how the two commun of research might build on each other 's work , and final propos activ theori as a mean of integr the two perspect","ordered_present_kp":[32,0,73,117,193,229,825],"keyphrases":["end-user perspectives","computer supported cooperative work","information systems","meta-analyses","information technology","CSCW","activity theory"],"prmu":["P","P","P","P","P","P","P"]} {"id":"126","title":"A new architecture for implementing pipelined FIR ADF based on classification of coefficients","abstract":"In this paper, we propose a new method for implementing pipelined finite-impulse response (FIR) adaptive digital filter (ADF), with an aim of reducing the maximum delay of the filtering portion of conventional delayed least mean square (DLMS) pipelined ADF. We achieve a filtering section with a maximum delay of one by simplifying a pre-upsampled and a post-downsampled FIR filter using the concept of classification of coefficients. This reduction is independent of the order of the filter, which is an advantage when the order of the filter is very large, and as a result the method can also be applied to infinite impulse response (IIR) filters. Furthermore, when the proposed method is compared with the transpose ADF, which has a filtering section with zero delay, it is realized that it significantly reduces the maximum delay associated with updating the coefficients of FIR ADF. 
The effect of this is that the proposed method exhibits a higher convergence speed in comparison to the transpose FIR ADF","tok_text":"a new architectur for implement pipelin fir adf base on classif of coeffici \n in thi paper , we propos a new method for implement pipelin finite-impuls respons ( fir ) adapt digit filter ( adf ) , with an aim of reduc the maximum delay of the filter portion of convent delay least mean squar ( dlm ) pipelin adf . we achiev a filter section with a maximum delay of one by simplifi a pre-upsampl and a post-downsampl fir filter use the concept of classif of coeffici . thi reduct is independ of the order of the filter , which is an advantag when the order of the filter is veri larg , and as a result the method can also be appli to infinit impuls respons ( iir ) filter . furthermor , when the propos method is compar with the transpos adf , which ha a filter section with zero delay , it is realiz that it significantli reduc the maximum delay associ with updat the coeffici of fir adf . the effect of thi is that , the propos method exhibit a higher converg speed in comparison to the transpos fir adf","ordered_present_kp":[32,168,222,953],"keyphrases":["pipelined FIR ADF","adaptive digital filter","maximum delay","convergence speed","coefficient classification","delayed least mean square filter","pre-upsampled filter","post-downsampled filter"],"prmu":["P","P","P","P","R","R","R","R"]} {"id":"965","title":"Sliding mode control of chaos in the cubic Chua's circuit system","abstract":"In this paper, a sliding mode controller is applied to control the cubic Chua's circuit system. The sliding surface used in this paper is one dimension higher than the traditional surface and guarantees its passage through the initial states of the controlled system.
Therefore, using the characteristic of this sliding mode, we aim to design a controller that can meet the desired specification and use less control energy compared with the results in the existing literature. The results show that the proposed controller can steer Chua's circuit system to the desired state without the chattering phenomenon and abrupt state change","tok_text":"slide mode control of chao in the cubic chua 's circuit system \n in thi paper , a slide mode control is appli to control the cubic chua 's circuit system . the slide surfac of thi paper use is one dimens higher than the tradit surfac and guarante it passag through the initi state of the control system . therefor , use the characterist of thi slide mode we aim to design a control that can meet the desir specif and use less control energi by compar with the result in the current exist literatur . the result show that the propos control can steer chua 's circuit system to the desir state without the chatter phenomenon and abrupt state chang","ordered_present_kp":[0,22,160,604,634],"keyphrases":["sliding mode control","chaos","sliding surface","chattering","state change","cubic Chua circuit system","match disturbance","mismatch disturbance"],"prmu":["P","P","P","P","P","R","U","U"]} {"id":"920","title":"Three-dimensional periodic Voronoi grain models and micromechanical FE-simulations of a two-phase steel","abstract":"A three-dimensional model is proposed for modeling of microstructures. The model is based on the finite element method with periodic boundary conditions. The Voronoi algorithm is used to generate the geometrical model, which has a periodic grain structure that follows the original boundaries of the Voronoi cells. As an application, the model is used to model a two-phase ferrite\/pearlite steel.
It is shown that periodic cells with only five grains generate representative stress-strain curves","tok_text":"three-dimension period voronoi grain model and micromechan fe-simul of a two-phas steel \n a three-dimension model is propos for model of microstructur . the model is base on the finit element method with period boundari condit . the voronoi algorithm is use to gener the geometr model , which ha a period grain structur that follow the origin boundari of the voronoi cell . as an applic , the model is use to model a two-phas ferrit \/ pearlit steel . it is shown that period cell with onli five grain gener repres stress-strain curv","ordered_present_kp":[73,92,16,204,233,271,514],"keyphrases":["periodic Voronoi grain models","two-phase steel","three-dimensional model","periodic boundary conditions","Voronoi algorithm","geometrical model","stress-strain curves","micromechanical FEM simulations","microstructures modeling","ferrite-pearlite steel","Voronoi tessellation","adaptive mesh generator","quadtree\/octree-based algorithm","kinematic constraints","computational time"],"prmu":["P","P","P","P","P","P","P","M","R","M","M","M","M","U","U"]} {"id":"636","title":"FLID-DL: congestion control for layered multicast","abstract":"We describe fair layered increase\/decrease with dynamic layering (FLID-DL): a new multirate congestion control algorithm for layered multicast sessions. FLID-DL generalizes the receiver-driven layered congestion control protocol (RLC) introduced by Vicisano et al. (Proc. IEEE INFOCOM, San Francisco, CA, p.996-1003, Mar. 1998), ameliorating the problems associated with large Internet group management protocol (IGMP) leave latencies and abrupt rate increases. Like RLC, FLID-DL is a scalable, receiver-driven congestion control mechanism in which receivers add layers at sender-initiated synchronization points and leave layers when they experience congestion.
FLID-DL congestion control coexists with transmission control protocol (TCP) flows as well as other FLID-DL sessions and supports general rates on the different multicast layers. We demonstrate via simulations that our congestion control scheme exhibits better fairness properties and provides better throughput than previous methods. A key contribution that enables FLID-DL and may be useful elsewhere is dynamic layering (DL), which mitigates the negative impact of long IGMP leave latencies and eliminates the need for probe intervals present in RLC. We use DL to respond to congestion much faster than IGMP leave operations, which have proven to be a bottleneck in practice for prior work","tok_text":"flid-dl : congest control for layer multicast \n we describ fair layer increas \/ decreas with dynam layer ( flid-dl ): a new multir congest control algorithm for layer multicast session . flid-dl gener the receiver-driven layer congest control protocol ( rlc ) introduc by vicisano et al . ( proc . ieee infocom , san francisco , ca , , p.996 - 1003 , mar. 1998)amelior the problem associ with larg internet group manag protocol ( igmp ) leav latenc and abrupt rate increas . like rlc , flid-dl , is a scalabl , receiver-driven congest control mechan in which receiv add layer at sender-initi synchron point and leav layer when they experi congest . flid-dl congest control coexist with transmiss control protocol ( tcp ) flow as well as other flid-dl session and support gener rate on the differ multicast layer . we demonstr via simul that our congest control scheme exhibit better fair properti and provid better throughput than previou method . a key contribut that enabl flid-dl and may be use elsewher is dynam layer ( dl ) , which mitig the neg impact of long igmp leav latenc and elimin the need for probe interv present in rlc . 
we use dl to respond to congest much faster than igmp leav oper , which have proven to be a bottleneck in practic for prior work","ordered_present_kp":[0,10,59,124,161,205,398,430,686,579,796,830,915,93],"keyphrases":["FLID-DL","congestion control","fair layered increase\/decrease with dynamic layering","dynamic layering","multirate congestion control algorithm","layered multicast sessions","receiver-driven layered congestion control protocol","Internet group management protocol","IGMP","sender-initiated synchronization","transmission control protocol","multicast layers","simulations","throughput","scalable congestion control","Internet protocol multicast","TCP fairness"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R"]} {"id":"673","title":"The Information Age interview - Capital One","abstract":"Credit card company Capital One attributes its rapid customer growth to the innovative use of cutting-edge technology. European CIO Catherine Doran talks about the systems that have fuelled that runaway success","tok_text":"the inform age interview - capit one \n credit card compani capit one attribut it rapid custom growth to the innov use of cutting-edg technolog . european cio catherin doran talk about the system that have fuell that runaway success","ordered_present_kp":[39,27,87,121],"keyphrases":["Capital One","credit card company","customer growth","cutting-edge technology"],"prmu":["P","P","P","P"]} {"id":"1226","title":"Temp IT chief rallies troops [Mori]","abstract":"The appointment of a highly qualified interim IT manager enabled market research company Mori to rapidly restructure its IT department. Now the resulting improvements are allowing it to support an increasing role for technology in the assimilation and analysis of market research","tok_text":"temp it chief ralli troop [ mori ] \n the appoint of a highli qualifi interim it manag enabl market research compani mori to rapidli restructur it it depart . 
now the result improv are allow it to support an increas role for technolog in the assimil and analysi of market research","ordered_present_kp":[92,28,69],"keyphrases":["Mori","interim IT manager","market research company"],"prmu":["P","P","P"]} {"id":"1263","title":"Super high definition image (WHD: Wide\/Double HD) transmission system","abstract":"This paper describes a WHD image transmission system constructed from a display projector, CODECs, and a camera system imaging a super high definition image (WHD: Wide\/Double HD) corresponding to two screen portions of common high-vision images. This system was developed as a transmission system to communicate with or transmit information giving a reality-enhanced feeling to a remote location by using images of super high definition. In addition, the correction processing for the distortions of images occurring due to the structure of the camera system, an outline of the transmission experiments using the proposed system, and subjective evaluation experiments using WHD images are presented","tok_text":"super high definit imag ( whd : wide \/ doubl hd ) transmiss system \n thi paper describ a whd imag transmiss system construct from a display projector , codec , and a camera system imag a super high definit imag ( whd : wide \/ doubl hd ) correspond to two screen portion of common high-vis imag . thi system wa develop as a transmiss system to commun with or transmit inform give a reality-enhanc feel to a remot locat by use imag of super high definit . 
in addit , the correct process for the distort of imag occur due to the structur of the camera system , an outlin of the transmiss experi use the propos system , and subject evalu experi use whd imag are present","ordered_present_kp":[89,152,166,381],"keyphrases":["WHD image transmission system","CODECs","camera system imaging","reality-enhanced feeling","super high definition image transmission system"],"prmu":["P","P","P","P","R"]} {"id":"958","title":"Efficient combinational verification using overlapping local BDDs and a hash table","abstract":"We propose a novel methodology that combines local BDDs (binary decision diagrams) with a hash table for very efficient verification of combinational circuits. The main purpose of this technique is to remove the considerable overhead associated with case-by-case verification of internal node pairs in typical internal correspondence based verification methods. Two heuristics based on the number of structural levels of circuitry looked at and the total number of nodes in the BDD manager are used to control the BDD sizes and introduce new cutsets based on already found equivalent nodes. We verify the ISCAS85 benchmark circuits and demonstrate significant speedup over existing methods. We also verify several hard industrial circuits and show our superiority in extracting internal equivalences","tok_text":"effici combin verif use overlap local bdd and a hash tabl \n we propos a novel methodolog that combin local bdd ( binari decis diagram ) with a hash tabl for veri effici verif of combin circuit . the main purpos of thi techniqu is to remov the consider overhead associ with case-by-cas verif of intern node pair in typic intern correspond base verif method . two heurist base on the number of structur level of circuitri look at and the total number of node in the bdd manag are use to control the bdd size and introduc new cutset base on alreadi found equival node . 
we verifi the iscas85 benchmark circuit and demonstr signific speedup over exist method . we also verifi sever hard industri circuit and show our superior in extract intern equival","ordered_present_kp":[7,24,48,273,294,320,362,392,464,497,523,581,678,733,113],"keyphrases":["combinational verification","overlapping local BDDs","hash table","binary decision diagrams","case-by-case verification","internal node pairs","internal correspondence based verification","heuristics","structural levels","BDD manager","BDD sizes","cutsets","ISCAS85 benchmark circuits","hard industrial circuits","internal equivalences","combinational circuit verification","formal verification","internal correspondence-based verification"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","M","M"]} {"id":"572","title":"Characterization of sheet buckling subjected to controlled boundary constraints","abstract":"A wedge strip test is designed to study the onset and post-buckling behavior of a sheet under various boundary constraints. The device can be easily incorporated into a conventional tensile test machine, and material resistance to buckling is measured as the buckling height versus the in-plane strain state. The design yields different but consistent buckling modes with easy changes of boundary conditions (either clamped or freed) and sample geometry. Experimental results are then used to verify a hybrid approach to buckling prediction, i.e., the combination of the FEM analysis and an energy-based analytical wrinkling criterion. The FEM analysis is used to obtain the stress field and deformed geometry in a complex forming condition, while the analytical solution is to provide the predictions less sensitive to artificial numerical parameters. 
A good agreement between experimental data and numerical predictions is obtained","tok_text":"character of sheet buckl subject to control boundari constraint \n a wedg strip test is design to studi the onset and post-buckl behavior of a sheet under variou boundari constraint . the devic can be easili incorpor into a convent tensil test machin , and materi resist to buckl is measur as the buckl height versu the in-plan strain state . the design yield differ but consist buckl mode with easi chang of boundari condit ( either clamp or freed ) and sampl geometri . experiment result are then use to verifi a hybrid approach to buckl predict , i.e. , the combin of the fem analysi and an energy-bas analyt wrinkl criterion . the fem analysi is use to obtain the stress field and deform geometri in a complex form condit , while the analyt solut is to provid the predict less sensit to artifici numer paramet . a good agreement between experiment data and numer predict is obtain","ordered_present_kp":[68,44,13,231,327,593,667,684],"keyphrases":["sheet buckling","boundary constraints","wedge strip test","tensile test machine","strain state","energy-based analytical wrinkling criterion","stress field","deformed geometry","forming processes","finite element analysis"],"prmu":["P","P","P","P","P","P","P","P","M","M"]} {"id":"1127","title":"Repeated games with lack of information on one side: the dual differential approach","abstract":"We introduce the dual differential game of a repeated game with lack of information on one side as the natural continuous time version of the dual game introduced by De Meyer (1996). A traditional way to study the value of differential games is through discrete time approximations. Here, we follow the opposite approach: We identify the limit value of a repeated game in discrete time as the value of a differential game. 
Namely, we use the recursive structure for the finitely repeated version of the dual game to construct a differential game for which the upper values of the uniform discretization satisfy precisely the same property. The value of the dual differential game exists and is the unique viscosity solution of a first-order derivative equation with a limit condition. We identify the solution by translating viscosity properties in the primal","tok_text":"repeat game with lack of inform on one side : the dual differenti approach \n we introduc the dual differenti game of a repeat game with lack of inform on one side as the natur continu time version of the dual game introduc by de meyer ( 1996 ) . a tradit way to studi the valu of differenti game is through discret time approxim . here , we follow the opposit approach : we identifi the limit valu of a repeat game in discret time as the valu of a differenti game . name , we use the recurs structur for the finit repeat version of the dual game to construct a differenti game for which the upper valu of the uniform discret satisfi precis the same properti . the valu of the dual differenti game exist and is the uniqu viscos solut of a first-ord deriv equat with a limit condit . we identifi the solut by translat viscos properti in the primal","ordered_present_kp":[0,93,0,307,387,307,720,767],"keyphrases":["repeated games","repeated games","dual differential game","discrete time approximations","discrete time","limit value","viscosity solution","limit condition","repeated game"],"prmu":["P","P","P","P","P","P","P","P","P"]} {"id":"1162","title":"Recognition of finite simple groups S\/sub 4\/(q) by their element orders","abstract":"It is proved that among simple groups S\/sub 4\/(q) in the class of finite-groups, only the groups S\/sub 4\/(3\/sup n\/), where n is an odd number greater than unity, are recognizable by a set of their element orders. 
It is also shown that simple groups U\/sub 3\/(9), \/sup 3\/D\/sub 4\/(2), G\/sub 2\/(4), S\/sub 6\/(3), F\/sub 4\/(2), and \/sup 2\/E\/sub 6\/(2) are recognizable, but L\/sub 3\/(3) is not","tok_text":"recognit of finit simpl group s \/ sub 4\/(q ) by their element order \n it is prove that among simpl group s \/ sub 4\/(q ) in the class of finite-group , onli the group s \/ sub 4\/(3 \/ sup n\/ ) , where n is an odd number greater than uniti , are recogniz by a set of their element order . it is also shown that simpl group u \/ sub 3\/(9 ) , \/sup 3 \/ d \/ sub 4\/(2 ) , g \/ sub 2\/(4 ) , s \/ sub 6\/(3 ) , f \/ sub 4\/(2 ) , and \/sup 2 \/ e \/ sub 6\/(2 ) are recogniz , but l \/ sub 3\/(3 ) is not","ordered_present_kp":[54],"keyphrases":["element orders","finite simple groups recognition","divisibility relation"],"prmu":["P","R","U"]} {"id":"793","title":"Advancements during the past quarter century in on-line monitoring of motor and generator winding insulation","abstract":"Electrical insulation plays a critical role in the operation of motor and generator rotor and stator windings. Premature failure of the insulation can cost millions of dollars per day. With advancements in electronics, sensors, computers and software, tremendous progress has been made in the past 25 yr which has transformed on-line insulation monitoring from a rarely used and expensive tool, to the point where 50% of large utility generators in North America are now equipped for such monitoring. This review paper outlines the motivation for online monitoring, discusses the transition to today's technology, and describes the variety of methods now in use for rotor winding and stator winding monitoring","tok_text":"advanc dure the past quarter centuri in on-lin monitor of motor and gener wind insul \n electr insul play a critic role in the oper of motor and gener rotor and stator wind . prematur failur of the insul can cost million of dollar per day . 
with advanc in electron , sensor , comput and softwar , tremend progress ha been made in the past 25 yr which ha transform on-lin insul monitor from a rare use and expens tool , to the point where 50 % of larg util gener in north america are now equip for such monitor . thi review paper outlin the motiv for onlin monitor , discuss the transit to today 's technolog , and describ the varieti of method now in use for rotor wind and stator wind monitor","ordered_present_kp":[68,87,255,266,275,286,658,160],"keyphrases":["generator winding insulation","electrical insulation","stator windings","electronics","sensors","computers","software","rotor windings","motor generator winding insulation","winding insulation on-line monitoring","premature insulation failure","temperature monitoring","condition monitors","tagging compounds","ozone monitoring","PD monitoring","magnetic flux monitoring","partial discharge monitoring","endwinding vibration monitoring"],"prmu":["P","P","P","P","P","P","P","P","R","R","R","M","M","U","M","M","M","M","M"]} {"id":"1383","title":"Semantic data broadcast for a mobile environment based on dynamic and adaptive chunking","abstract":"Database broadcast is an effective and scalable approach to disseminate information of high affinity to a large collection of mobile clients. A common problem of existing broadcast approaches is the lack of knowledge for a client to determine if all data items satisfying its query could be obtained from the broadcast. We therefore propose a semantic-based broadcast approach. A semantic descriptor is attached to each broadcast unit, called a data chunk. This semantic descriptor allows a client to determine if a query can be answered entirely based on broadcast items and, if needed, identify the precise definition of the remaining items in the form of a \"supplementary\" query. Data chunks can be of static or dynamic sizes and organized hierarchically. 
Their boundary can be determined on-the-fly, adaptive to the nature of client queries. We investigate different ways of organizing the data chunks over a broadcast channel to improve access performance. We introduce the data affinity index metric, which more accurately reflects client-perceived performance. A simulation model is built to evaluate our semantic-based broadcast schemes","tok_text":"semant data broadcast for a mobil environ base on dynam and adapt chunk \n databas broadcast is an effect and scalabl approach to dissemin inform of high affin to a larg collect of mobil client . a common problem of exist broadcast approach is the lack of knowledg for a client to determin if all data item satisfi it queri could be obtain from the broadcast . we therefor propos a semantic-bas broadcast approach . a semant descriptor is attach to each broadcast unit , call a data chunk . thi semant descriptor allow a client to determin if a queri can be answer entir base on broadcast item and , if need , identifi the precis definit of the remain item in the form of a \" supplementari \" queri . data chunk can be of static or dynam size and organ hierarch . their boundari can be determin on-the-fli , adapt to the natur of client queri . we investig differ way of organ the data chunk over a broadcast channel to improv access perform . we introduc the data affin index metric , which more accur reflect client-perceiv perform . 
a simul model is built to evalu our semantic-bas broadcast scheme","ordered_present_kp":[0,180,417,477,958,60,557],"keyphrases":["semantic data broadcast","adaptive chunking","mobile clients","semantic descriptor","data chunking","answerability","data affinity index","mobile databases","mobile computing","query processing"],"prmu":["P","P","P","P","P","P","P","R","M","M"]} {"id":"800","title":"A model for choosing an electronic reserves system: a pre-implementation study at the library of Long Island University's Brooklyn campus","abstract":"This study explores the nature of electronic reserves (e-reserves) and investigates the possibilities of implementing the e-reserves at the Long Island University\/Brooklyn Campus Library (LIU\/BCL)","tok_text":"a model for choos an electron reserv system : a pre-implement studi at the librari of long island univers 's brooklyn campu \n thi studi explor the natur of electron reserv ( e-reserv ) and investig the possibl of implement the e-reserv at the long island univers \/ brooklyn campu librari ( liu \/ bcl )","ordered_present_kp":[21],"keyphrases":["electronic reserves system","Long Island University Brooklyn Campus Library"],"prmu":["P","R"]} {"id":"845","title":"Gender, software design, and occupational equity","abstract":"After reviewing the work on gender bias in software design, a model of gender-role influenced achievement choice taken from Eccles (1994) is presented. 
The paper concludes that (1) though laudable, reduction of gender bias in software design is not the most straightforward way to reduce gender inequity in the choice of computing as a career, (2) the model itself makes more clear some of the ethical issues involved in attempting to achieve gender equity on computing, and (3) efforts to reduce gender inequity in the choice of computing as a career need to be evaluated in the light of this model","tok_text":"gender , softwar design , and occup equiti \n after review the work on gender bia in softwar design , a model of gender-rol influenc achiev choic taken from eccl ( 1994 ) is present . the paper conclud that ( 1 ) though laudabl , reduct of gender bia in softwar design is not the most straightforward way to reduc gender inequ in the choic of comput as a career , ( 2 ) the model itself make more clear some of the ethic issu involv in attempt to achiev gender equiti on comput , and ( 3 ) effort to reduc gender inequ in the choic of comput as a career need to be evalu in the light of thi model","ordered_present_kp":[70,9,414,30],"keyphrases":["software design","occupational equity","gender bias","ethical issues","gender-role influenced achievement choice model","computing career"],"prmu":["P","P","P","P","R","R"]} {"id":"1082","title":"Numerical approximation of nonlinear BVPs by means of BVMs","abstract":"Boundary Value Methods (BVMs) would seem to be suitable candidates for the solution of nonlinear Boundary Value Problems (BVPs). They have been successfully used for solving linear BVPs together with a mesh selection strategy based on the conditioning of the linear systems. Our aim is to extend this approach so as to use them for the numerical approximation of nonlinear problems. For this reason, we consider the quasi-linearization technique that is an application of the Newton method to the nonlinear differential equation. Consequently, each iteration requires the solution of a linear BVP. 
In order to guarantee the convergence to the solution of the continuous nonlinear problem, it is necessary to determine how accurately the linear BVPs must be solved. For this goal, suitable stopping criteria on the residual and on the error for each linear BVP are given. Numerical experiments on stiff problems give rather satisfactory results, showing that the experimental code, called TOM, that uses a class of BVMs and the quasi-linearization technique, may be competitive with well known solvers for BVPs","tok_text":"numer approxim of nonlinear bvp by mean of bvm \n boundari valu method ( bvm ) would seem to be suitabl candid for the solut of nonlinear boundari valu problem ( bvp ) . they have been success use for solv linear bvp togeth with a mesh select strategi base on the condit of the linear system . our aim is to extend thi approach so as to use them for the numer approxim of nonlinear problem . for thi reason , we consid the quasi-linear techniqu that is an applic of the newton method to the nonlinear differenti equat . consequ , each iter requir the solut of a linear bvp . in order to guarante the converg to the solut of the continu nonlinear problem , it is necessari to determin how accur the linear bvp must be solv . for thi goal , suitabl stop criteria on the residu and on the error for each linear bvp are given . 
numer experi on stiff problem give rather satisfactori result , show that the experiment code , call tom , that use a class of bvm and the quasi-linear techniqu , may be competit with well known solver for bvp","ordered_present_kp":[0,127,49,230,422,469,490,746,839,43],"keyphrases":["numerical approximation","BVMs","boundary value methods","nonlinear boundary value problems","mesh selection strategy","quasi-linearization technique","Newton method","nonlinear differential equation","stopping criteria","stiff problems"],"prmu":["P","P","P","P","P","P","P","P","P","P"]} {"id":"1453","title":"Mobile banking's tough sell","abstract":"Banks are having to put their mobile-commerce projects on hold because the essential technology to make the services usable, in particular GPRS (general packet radio service) hasn't become widely available. It is estimated that by the end of 2002, only 5 per cent of adults will have GPRS phones. This will have a knock-on effect for other technologies such as clickable icons and multimedia messaging. In fact banking via WAP (wireless application protocol) has proved to be a frustrating and time-consuming process for the customer. Financial firms' hopes for higher mobile usage are stymied by the fact that improvements to the systems won't happen as fast as they want and the inadequacies of the system go beyond immature technology. Financial services institutions should not wait for customers to become au fait with their WAP. Instead they should be the ones \"driving the traffic\"","tok_text":"mobil bank 's tough sell \n bank are have to put their mobile-commerc project on hold becaus the essenti technolog to make the servic usabl , in particular gpr ( gener packet radio servic ) ha n't becom wide avail . it is estim that by the end of 2002 , onli 5 per cent of adult will have gpr phone . thi will have a knock-on effect for other technolog such as clickabl icon and multimedia messag . 
in fact bank via wap ( wireless applic protocol ) ha prove to be a frustrat and time-consum process for the custom . financi firm ' hope for higher mobil usag are stymi by the fact that improv to the system wo n't happen as fast as they want and the inadequaci of the system go beyond immatur technolog . financi servic institut should not wait for custom to becom au fait with their wap . instead they should be the one \" drive the traffic \"","ordered_present_kp":[6,54,155,421],"keyphrases":["banking","mobile-commerce","GPRS","wireless application protocol"],"prmu":["P","P","P","P"]} {"id":"1416","title":"Look into the future of content management","abstract":"Predictions of consolidation in the Content Management (CM) vendor arena have appeared in nearly every major industry prognosis over the past two years. Gartner Group, for example, recently reiterated its prediction that half the CM vendors in existence in mid-2001 would leave the marketplace by the end of 2002. Analysts consistently advise prospective CM buyers to tread carefully because their vendor may not stick around. But fortunately, the story goes, fewer vendor choices will finally bring greater clarity and sharper differentiators to this otherwise very messy product landscape. In fact, the number of CM vendors continues to rise. Industry growth has come through greater demand among CM buyers, but also expanding product functionality as well as successful partnerships. The marketplace certainly cannot sustain its current breadth of vendors in the long run, yet it remains unclear when and how any serious industry consolidation will occur. 
In the meantime, evolving business models and feature sets have created just the kind of clearer segmentation and transparent product differences that were supposed to emerge following an industry contraction","tok_text":"look into the futur of content manag \n predict of consolid in the content manag ( cm ) vendor arena have appear in nearli everi major industri prognosi over the past two year . gartner group , for exampl , recent reiter it predict that half the cm vendor in exist in mid-2001 would leav the marketplac by the end of 2002 . analyst consist advis prospect cm buyer to tread care becaus their vendor may not stick around . but fortun , the stori goe , fewer vendor choic will final bring greater clariti and sharper differenti to thi otherwis veri messi product landscap . in fact , the number of cm vendor continu to rise . industri growth ha come through greater demand among cm buyer , but also expand product function as well as success partnership . the marketplac certainli can not sustain it current breadth of vendor in the long run , yet it remain unclear when and how ani seriou industri consolid will occur . in the meantim , evolv busi model and featur set have creat just the kind of clearer segment and transpar product differ that were suppos to emerg follow an industri contract","ordered_present_kp":[23,702,738,886],"keyphrases":["content management","product functionality","partnerships","industry consolidation","enterprise systems"],"prmu":["P","P","P","P","U"]} {"id":"90","title":"LAN-based building maintenance and surveillance robot","abstract":"The building and construction industry is the major industry of Hong Kong as in many developed countries around the world. 
After the commissioning of a high-rise building or a large estate, substantial manpower, both inside the management centre under a standby manner, as well as surveillance for security purposes around the whole building, is required for daily operation to ensure a quality environment for the occupants. If the surveillance job can be done by robots, the efficiency can be highly enhanced, resulting in a great saving of manpower and the improved safety of the management staff as a by-product. Furthermore, if the robot can retrieve commands from the building management system via a local area network (LAN), further savings in manpower can be achieved in terms of first-line fault attendance by human management staff. This paper describes the development of a robot prototype here in Hong Kong, which can handle some daily routine maintenance works and surveillance responsibilities. The hardware structure of the robot and its on-board devices are described. Real-time images captured by a camera on the robot with pan\/tilt\/zoom functions can be transmitted back to the central management office via a local area network. The interface between the robot and the building automation system (BAS) of the building is discussed. This is the first key achievement of this project with a strong implication on reducing the number of human staff to manage a modern building. Teleoperation of the robot via the Internet or intranet is also possible, which is the second achievement of this project. Finally, the robot can identify its physical position inside the building by a landmark recognition method based on standard CAD drawings, which is the third achievement of this project. The main goal of this paper is not the description of some groundbreaking technology in robotic development. 
It is mainly intended to convince building designers and managers to incorporate robotic systems when they are managing modern buildings to save manpower and improve efficiency","tok_text":"lan-bas build mainten and surveil robot \n the build and construct industri is the major industri of hong kong as in mani develop countri around the world . after the commiss of a high-ris build or a larg estat , substanti manpow , both insid the manag centr under a standbi manner , as well as surveil for secur purpos around the whole build , is requir for daili oper to ensur a qualiti environ for the occup . if the surveil job can be done by robot , the effici can be highli enhanc , result in a great save of manpow and the improv safeti of the manag staff as a by-product . furthermor , if the robot can retriev command from the build manag system via a local area network ( lan ) , further save in manpow can be achiev in term of first-lin fault attend by human manag staff . thi paper describ the develop of a robot prototyp here in hong kong , which can handl some daili routin mainten work and surveil respons . the hardwar structur of the robot and it on-board devic are describ . real-tim imag captur by a camera on the robot with pan \/ tilt \/ zoom function can be transmit back to the central manag offic via a local area network . the interfac between the robot and the build autom system ( ba ) of the build is discuss . thi is the first key achiev of thi project with a strong implic on reduc the number of human staff to manag a modem build . teleoper of the robot via the internet or intranet is also possibl , which is the second achiev of thi project . final , the robot can identifi it physic posit insid the build by a landmark recognit method base on standard cad draw , which is the third achiev of thi project . the main goal of thi paper is not the descript of some groundbreak technolog in robot develop . 
it is mainli intend to convinc build design and manag to incorpor robot system when they are manag modem build to save manpow and improv effici","ordered_present_kp":[0,179,306,635,660,737,926,1043,1360,1541],"keyphrases":["LAN-based building maintenance and surveillance robot","high-rise building","security purposes","building management system","local area network","first-line fault attendance","hardware structure","pan\/tilt\/zoom functions","teleoperation","landmark recognition method"],"prmu":["P","P","P","P","P","P","P","P","P","P"]} {"id":"981","title":"Basin configuration of a six-dimensional model of an electric power system","abstract":"As part of an ongoing project on the stability of massively complex electrical power systems, we discuss the global geometric structure of contacts among the basins of attraction of a six-dimensional dynamical system. This system represents a simple model of an electrical power system involving three machines and an infinite bus. Apart from the possible occurrence of attractors representing pathological states, the contacts between the basins have a practical importance, from the point of view of the operation of a real electrical power system. With the aid of a global map of basins, one could hope to design an intervention strategy to boot the power system back into its normal state. Our method involves taking two-dimensional sections of the six-dimensional state space, and then determining the basins directly by numerical simulation from a dense grid of initial conditions. The relations among all the basins are given for a specific numerical example, that is, choosing particular values for the parameters in our model","tok_text":"basin configur of a six-dimension model of an electr power system \n as part of an ongo project on the stabil of massiv complex electr power system , we discuss the global geometr structur of contact among the basin of attract of a six-dimension dynam system . 
thi system repres a simpl model of an electr power system involv three machin and an infinit bu . apart from the possibl occurr of attractor repres patholog state , the contact between the basin have a practic import , from the point of view of the oper of a real electr power system . with the aid of a global map of basin , one could hope to design an intervent strategi to boot the power system back into it normal state . our method involv take two-dimension section of the six-dimension state space , and then determin the basin directli by numer simul from a dens grid of initi condit . the relat among all the basin are given for a specif numer exampl , that is , choos particular valu for the paramet in our model","ordered_present_kp":[0,20,46,112,164,345,391,408,564,752],"keyphrases":["basin configuration","six-dimensional model","electric power system","massively complex electrical power systems","global geometric structure","infinite bus","attractors","pathological states","global map","state space","power system stability"],"prmu":["P","P","P","P","P","P","P","P","P","P","R"]} {"id":"556","title":"Coarse-grained reduction and analysis of a network model of cortical response: I. Drifting grating stimuli","abstract":"We present a reduction of a large-scale network model of visual cortex developed by McLaughlin, Shapley, Shelley, and Wielaard. The reduction is from many integrate-and-fire neurons to a spatially coarse-grained system for firing rates of neuronal subpopulations. It accounts explicitly for spatially varying architecture, ordered cortical maps (such as orientation preference) that vary regularly across the cortical layer, and disordered cortical maps (such as spatial phase preference or stochastic input conductances) that may vary widely from cortical neuron to cortical neuron. 
The result of the reduction is a set of nonlinear spatiotemporal integral equations for \"phase-averaged\" firing rates of neuronal subpopulations across the model cortex, derived asymptotically from the full model without the addition of any extra phenomological constants. This reduced system is used to study the response of the model to drifting grating stimuli - where it is shown to be useful for numerical investigations that reproduce, at far less computational cost, the salient features of the point-neuron network and for analytical investigations that unveil cortical mechanisms behind the responses observed in the simulations of the large-scale computational model. For example, the reduced equations clearly show (1) phase averaging as the source of the time-invariance of cortico-cortical conductances, (2) the mechanisms in the model for higher firing rates and better orientation selectivity of simple cells which are near pinwheel centers, (3) the effects of the length-scales of cortico-cortical coupling, and (4) the role of noise in improving the contrast invariance of orientation selectivity","tok_text":"coarse-grain reduct and analysi of a network model of cortic respons : i. drift grate stimuli \n we present a reduct of a large-scal network model of visual cortex develop by mclaughlin , shapley , shelley , and wielaard . the reduct is from mani integrate-and-fir neuron to a spatial coarse-grain system for fire rate of neuron subpopul . it account explicitli for spatial vari architectur , order cortic map ( such as orient prefer ) that vari regularli across the cortic layer , and disord cortic map ( such as spatial phase prefer or stochast input conduct ) that may vari wide from cortic neuron to cortic neuron . the result of the reduct is a set of nonlinear spatiotempor integr equat for \" phase-averag \" fire rate of neuron subpopul across the model cortex , deriv asymptot from the full model without the addit of ani extra phenomolog constant . 
thi reduc system is use to studi the respons of the model to drift grate stimuli - where it is shown to be use for numer investig that reproduc , at far less comput cost , the salient featur of the point-neuron network and for analyt investig that unveil cortic mechan behind the respons observ in the simul of the large-scal comput model . for exampl , the reduc equat clearli show ( 1 ) phase averag as the sourc of the time-invari of cortico-cort conduct , ( 2 ) the mechan in the model for higher fire rate and better orient select of simpl cell which are near pinwheel center , ( 3 ) the effect of the length-scal of cortico-cort coupl , and ( 4 ) the role of nois in improv the contrast invari of orient select","ordered_present_kp":[121,149,0,1054,656,1378],"keyphrases":["coarse-graining","large-scale network model","visual cortex","nonlinear spatiotemporal integral equations","point-neuron network","orientation selectivity","neuronal networks","phase-averaged firing rates","dynamics"],"prmu":["P","P","P","P","P","P","R","R","U"]} {"id":"1103","title":"New age computing [autonomic computing]","abstract":"Autonomic computing (AC), sometimes called self-managed computing, is the name chosen by IBM to describe the company's new initiative aimed at making computing more reliable and problem-free. It is a response to a growing realization that the problem today with computers is not that they need more speed or have too little memory, but that they crash all too often. This article reviews current initiatives being carried out in the AC field by the IT industry, followed by key challenges which require to be addressed in its development and implementation","tok_text":"new age comput [ autonom comput ] \n autonom comput ( ac ) , sometim call self-manag comput , is the name chosen by ibm to describ the compani 's new initi aim at make comput more reliabl and problem-fre . 
it is a respons to a grow realiz that the problem today with comput is not that they need more speed or have too littl memori , but that they crash all too often . thi articl review current initi be carri out in the ac field by the it industri , follow by key challeng which requir to be address in it develop and implement","ordered_present_kp":[17,0,53,73],"keyphrases":["new age computing","autonomic computing","AC","self-managed computing","IBM initiative","computing reliability","problem-free computing","computer speed","computer memory","computer crash","IT industry initiatives","AC requirements","AC development","AC implementation","open standards","self-healing computing","adaptive algorithms"],"prmu":["P","P","P","P","R","R","R","R","R","R","R","R","R","R","U","M","U"]} {"id":"1146","title":"Mammogram synthesis using a 3D simulation. I. Breast tissue model and image acquisition simulation","abstract":"A method is proposed for generating synthetic mammograms based upon simulations of breast tissue and the mammographic imaging process. A computer breast model has been designed with a realistic distribution of large and medium scale tissue structures. Parameters controlling the size and placement of simulated structures (adipose compartments and ducts) provide a method for consistently modeling images of the same simulated breast with modified position or acquisition parameters. The mammographic imaging process is simulated using a compression model and a model of the X-ray image acquisition process. The compression model estimates breast deformation using tissue elasticity parameters found in the literature and clinical force values. The synthetic mammograms were generated by a mammogram acquisition model using a monoenergetic parallel beam approximation applied to the synthetically compressed breast phantom","tok_text":"mammogram synthesi use a 3d simul . i. 
breast tissu model and imag acquisit simul \n a method is propos for gener synthet mammogram base upon simul of breast tissu and the mammograph imag process . a comput breast model ha been design with a realist distribut of larg and medium scale tissu structur . paramet control the size and placement of simul structur ( adipos compart and duct ) provid a method for consist model imag of the same simul breast with modifi posit or acquisit paramet . the mammograph imag process is simul use a compress model and a model of the x-ray imag acquisit process . the compress model estim breast deform use tissu elast paramet found in the literatur and clinic forc valu . the synthet mammogram were gener by a mammogram acquisit model use a monoenerget parallel beam approxim appli to the synthet compress breast phantom","ordered_present_kp":[0,25,39,62,199,360,379,567,640,694,775],"keyphrases":["mammogram synthesis","3D simulation","breast tissue model","image acquisition simulation","computer breast model","adipose compartments","ducts","X-ray image acquisition","tissue elasticity parameters","force values","monoenergetic parallel beam approximation","mammographic compression","breast lesions","rectangular slice approximation","composite beam model","linear Young's moduli"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R","M","M","M","U"]} {"id":"939","title":"Image reconstruction from fan-beam projections on less than a short scan","abstract":"This work is concerned with 2D image reconstruction from fan-beam projections. It is shown that exact and stable reconstruction of a given region-of-interest in the object does not require all lines passing through the object to be measured. Complete (non-truncated) fan-beam projections provide sufficient information for reconstruction when 'every line passing through the region-of-interest intersects the vertex path in a non-tangential way'. 
The practical implications of this condition are discussed and a new filtered-backprojection algorithm is derived for reconstruction. Experiments with computer-simulated data are performed to support the mathematical results","tok_text":"imag reconstruct from fan-beam project on less than a short scan \n thi work is concern with 2d imag reconstruct from fan-beam project . it is shown that exact and stabl reconstruct of a given region-of-interest in the object doe not requir all line pass through the object to be measur . complet ( non-trunc ) fan-beam project provid suffici inform for reconstruct when ' everi line pass through the region-of-interest intersect the vertex path in a non-tangenti way ' . the practic implic of thi condit are discuss and a new filtered-backproject algorithm is deriv for reconstruct . experi with computer-simul data are perform to support the mathemat result","ordered_present_kp":[22,92,192,433,526],"keyphrases":["fan-beam projections","2D image reconstruction","region-of-interest","vertex path","filtered-backprojection algorithm","exact stable reconstruction","X-ray computed tomography","short-scan condition","Hilbert transform","Radon transform","rebinning formula","convolution","linear interpolation","3D head phantom"],"prmu":["P","P","P","P","P","R","U","M","U","U","U","U","U","U"]} {"id":"612","title":"Analysis and operation of hybrid active filter for harmonic elimination","abstract":"This paper presents a hybrid active filter topology and its control to suppress the harmonic currents from entering the power source. The adopted hybrid active filter consists of one active filter and one passive filter connected in series. By controlling the equivalent output voltage of active filter, the harmonic currents generated by the nonlinear load are blocked and flowed into the passive filter. The power rating of the converter is reduced compared with the pure active filters to filter the harmonic currents. 
The harmonic current detecting approach and DC-link voltage regulation are proposed to obtain equivalent voltage of active filter. The effectiveness of the adopted topology and control scheme has been verified by the computer simulation and experimental results in a scaled-down laboratory prototype","tok_text":"analysi and oper of hybrid activ filter for harmon elimin \n thi paper present a hybrid activ filter topolog and it control to suppress the harmon current from enter the power sourc . the adopt hybrid activ filter consist of one activ filter and one passiv filter connect in seri . by control the equival output voltag of activ filter , the harmon current gener by the nonlinear load are block and flow into the passiv filter . the power rate of the convert is reduc compar with the pure activ filter to filter the harmon current . the harmon current detect approach and dc-link voltag regul are propos to obtain equival voltag of activ filter . the effect of the adopt topolog and control scheme ha been verifi by the comput simul and experiment result in a scaled-down laboratori prototyp","ordered_present_kp":[20,44,27,249,296,139,368,570,718,758],"keyphrases":["hybrid active filter","active filter","harmonic elimination","harmonic currents","passive filter","equivalent output voltage","nonlinear load","DC-link voltage regulation","computer simulation","scaled-down laboratory prototype","harmonic currents suppression","converter power rating reduction","active filter equivalent voltage","voltage source inverter"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","M","R","M"]} {"id":"657","title":"The web services agenda","abstract":"Even the most battle-scarred of CIOs have become excited at the prospect of what web services can do for their businesses. But there are still some shortcomings to be addressed","tok_text":"the web servic agenda \n even the most battle-scar of cio have becom excit at the prospect of what web servic can do for their busi . 
but there are still some shortcom to be address","ordered_present_kp":[4],"keyphrases":["web services","transaction support","security"],"prmu":["P","U","U"]} {"id":"1202","title":"More than the money [software project]","abstract":"Experiences creating budgets for large software projects have taught manufacturers that it is not about the money - it is about what one really needs. Before a company can begin to build a budget for a software. project, it has to have a good understanding of what business issues need to be addressed and what the business objectives are. This step is critical because it defines the business goals, outlines the metrics for success, sets the scope for the project, and defines the criteria for selecting the right software","tok_text":"more than the money [ softwar project ] \n experi creat budget for larg softwar project have taught manufactur that it is not about the money - it is about what one realli need . befor a compani can begin to build a budget for a softwar . project , it ha to have a good understand of what busi issu need to be address and what the busi object are . thi step is critic becaus it defin the busi goal , outlin the metric for success , set the scope for the project , and defin the criteria for select the right softwar","ordered_present_kp":[22,55],"keyphrases":["software projects","budgeting","manufacturing industry","management","software requirements"],"prmu":["P","P","M","U","M"]} {"id":"1247","title":"The changing landscape for multi access portals","abstract":"Discusses the factors that have made life difficult for consumer portal operators in recent years causing them, like others in the telecommunications, media and technology sector, to take a close look at their business models following the dot.com crash and the consequent reassessment of Internet-related project financing by the venture capital community. 
While the pressure is on to generate income from existing customers and users, portal operators must reach new markets and find realistic revenue streams. This search for real revenues has led to a move towards charging for content, a strategy being pursued by a large number of horizontal portal players, including MSN and Terra Lycos. This trend is particularly noticeable in China, where Chinadotcom operates a mainland portal and plans a range of fee-based services, including electronic mail. The nature of advertising itself is changing, with portals seeking blue-chip sponsorship and marketing deals that span a number of years. Players are struggling to redefine and reinvent themselves as a result of the changing environment and even the term \"portal\" is believed to be obsolete, partly due to its dot.com crash associations. Multi-access portals are expected to dominate the consumer sector, becoming bigger and better overall than their predecessors and playing a more powerful role in the consumer environment","tok_text":"the chang landscap for multi access portal \n discuss the factor that have made life difficult for consum portal oper in recent year caus them , like other in the telecommun , media and technolog sector , to take a close look at their busi model follow the dot.com crash and the consequ reassess of internet-rel project financ by the ventur capit commun . while the pressur is on to gener incom from exist custom and user , portal oper must reach new market and find realist revenu stream . thi search for real revenu ha led to a move toward charg for content , a strategi be pursu by a larg number of horizont portal player , includ msn and terra lyco . thi trend is particularli notic in china , where chinadotcom oper a mainland portal and plan a rang of fee-bas servic , includ electron mail . the natur of advertis itself is chang , with portal seek blue-chip sponsorship and market deal that span a number of year . 
player are struggl to redefin and reinvent themselv as a result of the chang environ and even the term \" portal \" is believ to be obsolet , partli due to it dot.com crash associ . multi-access portal are expect to domin the consum sector , becom bigger and better overal than their predecessor and play a more power role in the consum environ","ordered_present_kp":[1101,98,474,757,810,854],"keyphrases":["consumer portal operators","revenue streams","fee-based services","advertising","blue-chip sponsorship","multi-access portals"],"prmu":["P","P","P","P","P","P"]} {"id":"824","title":"The Internet, knowledge and the academy","abstract":"As knowledge is released from the bounds of libraries, as research becomes no longer confined to the academy, and education\/certification is available, any time\/any place, the university and the faculty must redefine themselves. Liberal studies, once the core, and currently eschewed in favor of science and technology, will be reborn in those institutions that can rise above the mundane and embrace an emerging \"third culture\"","tok_text":"the internet , knowledg and the academi \n as knowledg is releas from the bound of librari , as research becom no longer confin to the academi , and educ \/ certif is avail , ani time \/ ani place , the univers and the faculti must redefin themselv . 
liber studi , onc the core , and current eschew in favor of scienc and technolog , will be reborn in those institut that can rise abov the mundan and embrac an emerg \" third cultur \"","ordered_present_kp":[4,15,32,148,155,200,216,248],"keyphrases":["Internet","knowledge","academy","education","certification","university","faculty","liberal studies"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"861","title":"The decision procedure for profitability of investment projects using the internal rate of return of single-period projects","abstract":"The internal rate of return (IRR) criterion is often used to evaluate profitability of investment projects. In this paper, we focus on a single-period project which consists of two types of cash flows; an investment at one period and a return at a succeeding period, and a financing at one period and a repayment at a succeeding period. We decompose the given investment project into a series of the single-period projects. From the viewpoint of the single-period project, we point out the applicability issue of the IRR criterion, namely the IRR criterion cannot be applied in which a project is composed of both investment type and financing type. Investigating the properties of a series of the single-period projects, we resolve the applicability issue of the IRR criterion and propose the decision procedure for profitability judgment toward any type of investment project based on the comparison between the IRR and the capital cost. We develop a new algorithm to obtain the value of the project investment rate (PIR) for the given project, which is a function of the capital cost, only using the standard IRR computing routine. 
This outcome is a theoretical breakthrough to widen the utilization of IRR in practical applications","tok_text":"the decis procedur for profit of invest project use the intern rate of return of single-period project \n the intern rate of return ( irr ) criterion is often use to evalu profit of invest project . in thi paper , we focu on a single-period project which consist of two type of cash flow ; an invest at one period and a return at a succeed period , and a financ at one period and a repay at a succeed period . we decompos the given invest project into a seri of the single-period project . from the viewpoint of the single-period project , we point out the applic issu of the irr criterion , name the irr criterion can not be appli in which a project is compos of both invest type and financ type . investig the properti of a seri of the single-period project , we resolv the applic issu of the irr criterion and propos the decis procedur for profit judgment toward ani type of invest project base on the comparison between the irr and the capit cost . we develop a new algorithm to obtain the valu of the project invest rate ( pir ) for the given project , which is a function of the capit cost , onli use the standard irr comput routin . thi outcom is a theoret breakthrough to widen the util of irr in practic applic","ordered_present_kp":[4,56,81,23,277,575,1005,1027],"keyphrases":["decision procedure","profitability","internal rate of return","single-period projects","cash flows","IRR criterion","project investment rate","PIR","investment project profitability","investment project decomposition"],"prmu":["P","P","P","P","P","P","P","P","R","M"]} {"id":"1432","title":"To classify or not to classify, that is the question?","abstract":"In addressing classification issues, the librarian needs to decide what best suits the purpose and requirements of the user group and the organisation they work in. The author has used the well-established Moys Classification Scheme. 
This gives the level of detail required for current stock and allows for the incorporation of new material as the firm's specialisations develop. The scheme is widely used in other firms as well as in the local law society library, so it will be familiar to many users","tok_text":"to classifi or not to classifi , that is the question ? \n in address classif issu , the librarian need to decid what best suit the purpos and requir of the user group and the organis they work in . the author ha use the well-establish moy classif scheme . thi give the level of detail requir for current stock and allow for the incorpor of new materi as the firm 's specialis develop . the scheme is wide use in other firm as well as in the local law societi librari , so it will be familiar to mani user","ordered_present_kp":[235,447],"keyphrases":["Moys Classification Scheme","law society library"],"prmu":["P","P"]} {"id":"1066","title":"Application of artificial intelligence to search ground-state geometry of clusters","abstract":"We introduce a global optimization procedure, the neural-assisted genetic algorithm (NAGA). It combines the power of an artificial neural network (ANN) with the versatility of the genetic algorithm. This method is suitable to solve optimization problems that depend on some kind of heuristics to limit the search space. If a reasonable amount of data is available, the ANN can \"understand\" the problem and provide the genetic algorithm with a selected population of elements that will speed up the search for the optimum solution. We tested the method in a search for the ground-state geometry of silicon clusters. We trained the ANN with information about the geometry and energetics of small silicon clusters. Next, the ANN learned how to restrict the configurational space for larger silicon clusters. For Si\/sub 10\/ and Si\/sub 20\/, we noticed that the NAGA is at least three times faster than the \"pure\" genetic algorithm. 
As the size of the cluster increases, it is expected that the gain in terms of time will increase as well","tok_text":"applic of artifici intellig to search ground-stat geometri of cluster \n we introduc a global optim procedur , the neural-assist genet algorithm ( naga ) . it combin the power of an artifici neural network ( ann ) with the versatil of the genet algorithm . thi method is suitabl to solv optim problem that depend on some kind of heurist to limit the search space . if a reason amount of data is avail , the ann can \" understand \" the problem and provid the genet algorithm with a select popul of element that will speed up the search for the optimum solut . we test the method in a search for the ground-stat geometri of silicon cluster . we train the ann with inform about the geometri and energet of small silicon cluster . next , the ann learn how to restrict the configur space for larger silicon cluster . for si \/ sub 10\/ and si \/ sub 20\/ , we notic that the naga is at least three time faster than the \" pure \" genet algorithm . as the size of the cluster increas , it is expect that the gain in term of time will increas as well","ordered_present_kp":[10,38,86,114,181,486,541,620,814,831],"keyphrases":["artificial intelligence","ground-state geometry","global optimization procedure","neural-assisted genetic algorithm","artificial neural network","population","optimum solution","silicon clusters","Si\/sub 10\/","Si\/sub 20\/","atomic clusters","cluster size"],"prmu":["P","P","P","P","P","P","P","P","P","P","M","R"]} {"id":"1023","title":"Simple nonlinear dual-window operator for edge detection","abstract":"We propose a nonlinear edge detection technique based on a two-concentric-circular-window operator. 
We perform a preliminary selection of edge candidates using a standard gradient and use the dual-window operator to reveal edges as zero-crossing points of a simple difference function depending only on the minimum and maximum values in the two windows. Comparisons with other well-established techniques are reported in terms of visual appearance and computational efficiency. They show that detected edges are surely comparable with Canny's and Laplacian of Gaussian algorithms, with a noteworthy reduction in terms of computational load","tok_text":"simpl nonlinear dual-window oper for edg detect \n we propos a nonlinear edg detect techniqu base on a two-concentric-circular-window oper . we perform a preliminari select of edg candid use a standard gradient and use the dual-window oper to reveal edg as zero-cross point of a simpl differ function depend onli on the minimum and maximum valu in the two window . comparison with other well-establish techniqu are report in term of visual appear and comput effici . they show that detect edg are sure compar with canni 's and laplacian of gaussian algorithm , with a noteworthi reduct in term of comput load","ordered_present_kp":[6,37,62,102,192,256,284,331,450,481,539,596],"keyphrases":["nonlinear dual-window operator","edge detection","nonlinear edge detection technique","two-concentric-circular-window operator","standard gradient","zero-crossing points","difference function","maximum values","computational efficiency","detected edges","Gaussian algorithms","computational load","dual window operator","minimum values","Laplacian algorithms","Canny's algorithms","nonlinear processing"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","M","R","R","R","M"]} {"id":"819","title":"Local search with constraint propagation and conflict-based heuristics","abstract":"Search algorithms for solving CSP (Constraint Satisfaction Problems) usually fall into one of two main families: local search algorithms and systematic algorithms. 
Both families have their advantages. Designing hybrid approaches seems promising since those advantages may be combined into a single approach. In this paper, we present a new hybrid technique. It performs a local search over partial assignments instead of complete assignments, and uses filtering techniques and conflict-based techniques to efficiently guide the search. This new technique benefits from both classical approaches: a priori pruning of the search space from filtering-based search and possible repair of early mistakes from local search. We focus on a specific version of this technique: tabu decision-repair. Experiments done on open-shop scheduling problems show that our approach competes well with the best highly specialized algorithms","tok_text":"local search with constraint propag and conflict-bas heurist \n search algorithm for solv csp ( constraint satisfact problem ) usual fall into one of two main famili : local search algorithm and systemat algorithm . both famili have their advantag . design hybrid approach seem promis sinc those advantag may be combin into a singl approach . in thi paper , we present a new hybrid techniqu . it perform a local search over partial assign instead of complet assign , and use filter techniqu and conflict-bas techniqu to effici guid the search . thi new techniqu benefit from both classic approach : a priori prune of the search space from filtering-bas search and possibl repair of earli mistak from local search . we focu on a specif version of thi techniqu : tabu decision-repair . 
experi done on open-shop schedul problem show that our approach compet well with the best highli special algorithm","ordered_present_kp":[63,89,95,167,194,423,474,760],"keyphrases":["search algorithms","CSP","Constraint Satisfaction Problems","local search algorithms","systematic algorithms","partial assignments","filtering techniques","tabu decision-repair"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"1367","title":"The set of just-in-time management strategies: an assessment of their impact on plant-level productivity and input-factor substitutability using variable cost function estimates","abstract":"Many manufacturers in the automobile industry around the world have adopted the just-in-time (JIT) set of management strategies in an effort to improve productivity, efficiency and product quality. The paper provides empirical evidence that supports the idea that JIT manufacturing environments are, in fact, more productive than their non-JIT counterparts. Plant-level cross-sectional data from auto-parts manufacturing firms are used to estimate variable cost functions for a JIT group as well as for a non-JIT group of plants. Differences in cost function characteristics between the two groups are examined and discussed","tok_text":"the set of just-in-tim manag strategi : an assess of their impact on plant-level product and input-factor substitut use variabl cost function estim \n mani manufactur in the automobil industri around the world have adopt the just-in-tim ( jit ) set of manag strategi in an effort to improv product , effici and product qualiti . the paper provid empir evid that support the idea that jit manufactur environ are , in fact , more product than their non-jit counterpart . plant-level cross-sect data from auto-part manufactur firm are use to estim variabl cost function for a jit group as well as for a non-jit group of plant . 
differ in cost function characterist between the two group are examin and discuss","ordered_present_kp":[11,69,93,120,173,238,299,310,501],"keyphrases":["just-in-time management strategies","plant-level productivity","input-factor substitutability","variable cost function estimates","automobile industry","JIT","efficiency","product quality","auto-parts manufacturing firms"],"prmu":["P","P","P","P","P","P","P","P","P"]} {"id":"777","title":"Access to information for blind and visually impaired clients","abstract":"This article guides I&R providers in establishing effective communication techniques for working with visually impaired consumers. The authors discuss common causes of vision impairment and the functional implications of each and offer information on disability etiquette and effective voice, accessible media and in-person communication. There is an overview of assistive technologies used by people who are visually impaired-to facilitate written and electronic communications as well as low-tech solutions for producing large-print and Braille materials in-house. Providers who implement these communication techniques will be well equipped to serve visually-impaired consumers, and consumers will be more likely to avail themselves of these services when providers make them easily accessible","tok_text":"access to inform for blind and visual impair client \n thi articl guid i&r provid in establish effect commun techniqu for work with visual impair consum . the author discuss common caus of vision impair and the function implic of each and offer inform on disabl etiquett and effect voic , access media and in-person commun . there is an overview of assist technolog use by peopl who are visual impaired-to facilit written and electron commun as well as low-tech solut for produc large-print and braill materi in-hous . 
provid who implement these commun techniqu will be well equip to serv visually-impair consum , and consum will be more like to avail themselv of these servic when provid make them easili access","ordered_present_kp":[31,101,254,274,288,305,348,425,494],"keyphrases":["visually impaired clients","communication techniques","disability etiquette","effective voice","accessible media","in-person communication","assistive technologies","electronic communications","Braille materials","information access","blind clients","information and referral systems","written communications","large-print materials"],"prmu":["P","P","P","P","P","P","P","P","P","R","R","M","R","R"]} {"id":"732","title":"A unifying co-operative Web caching architecture","abstract":"Network caching of objects has become a standard way of reducing network traffic and latency in the Web. However, Web caches exhibit poor performance with a hit rate of about 30%. A solution to improve this hit rate is to have a group of proxies form co-operation where objects can be cached for later retrieval. A cooperative cache system includes protocols for hierarchical and transversal caching. The drawback of such a system lies in the resulting network load due to the number of messages that need to be exchanged to locate an object. This paper proposes a new co-operative Web caching architecture, which unifies previous methods of Web caching. Performance results shows that the architecture achieve up to 70% co-operative hit rate and accesses the cached object in at most two hops. Moreover, the architecture is scalable with low traffic and database overhead","tok_text":"a unifi co-op web cach architectur \n network cach of object ha becom a standard way of reduc network traffic and latenc in the web . howev , web cach exhibit poor perform with a hit rate of about 30 % . a solut to improv thi hit rate is to have a group of proxi form co-oper where object can be cach for later retriev . 
a cooper cach system includ protocol for hierarch and transvers cach . the drawback of such a system lie in the result network load due to the number of messag that need to be exchang to locat an object . thi paper propos a new co-op web cach architectur , which unifi previou method of web cach . perform result show that the architectur achiev up to 70 % co-op hit rate and access the cach object in at most two hop . moreov , the architectur is scalabl with low traffic and databas overhead","ordered_present_kp":[8,37,677,322,348,374,439],"keyphrases":["co-operative Web caching architecture","network caching","cooperative cache system","protocols","transversal caching","network load","co-operative hit rate","network traffic reduction","network latency reduction","hierarchical caching","scalable architecture","low traffic overhead","low database overhead","Web browser","World Wide Web"],"prmu":["P","P","P","P","P","P","P","M","M","R","R","R","R","M","M"]} {"id":"1186","title":"Implementing: it's all about processes","abstract":"Looks at how the key to successful technology deployment can be found in a set of four basic disciplines","tok_text":"implement : it 's all about process \n look at how the key to success technolog deploy can be found in a set of four basic disciplin","ordered_present_kp":[69,0],"keyphrases":["implementation","technology deployment","incremental targets","third-party integration","vendor-supplied hardware integration services","vendor-supplied software integration services","manufacturers"],"prmu":["P","P","U","U","U","U","U"]} {"id":"102","title":"Harmless delays in Cohen-Grossberg neural networks","abstract":"Without assuming monotonicity and differentiability of the activation functions and any symmetry of interconnections, we establish some sufficient conditions for the globally asymptotic stability of a unique equilibrium for the Cohen-Grossberg (1983) neural network with multiple delays. 
Lyapunov functionals and functions combined with the Razumikhin technique are employed. The criteria are all independent of the magnitudes of the delays, and thus the delays under these conditions are harmless","tok_text":"harmless delay in cohen-grossberg neural network \n without assum monoton and differenti of the activ function and ani symmetri of interconnect , we establish some suffici condit for the global asymptot stabil of a uniqu equilibrium for the cohen-grossberg ( 1983 ) neural network with multipl delay . lyapunov function and function combin with the razumikhin techniqu are employ . the criteria are all independ of the magnitud of the delay , and thu the delay under these condit are harmless","ordered_present_kp":[0,18,65,77,95,130,186,285,301,348],"keyphrases":["harmless delays","Cohen-Grossberg neural networks","monotonicity","differentiability","activation functions","interconnections","globally asymptotic stability","multiple delays","Lyapunov functionals","Razumikhin technique"],"prmu":["P","P","P","P","P","P","P","P","P","P"]} {"id":"941","title":"Option pricing formulas based on a non-Gaussian stock price model","abstract":"Options are financial instruments that depend on the underlying stock. We explain their non-Gaussian fluctuations using the nonextensive thermodynamics parameter q. A generalized form of the Black-Scholes (BS) partial differential equation (1973) and some closed-form solutions are obtained. The standard BS equation (q = 1) which is used by economists to calculate option prices requires multiple values of the stock volatility (known as the volatility smile). Using q = 1.5 which well models the empirical distribution of returns, we get a good description of option prices using a single volatility","tok_text":"option price formula base on a non-gaussian stock price model \n option are financi instrument that depend on the underli stock . we explain their non-gaussian fluctuat use the nonextens thermodynam paramet q. 
a gener form of the black-schol ( bs ) partial differenti equat ( 1973 ) and some closed-form solut are obtain . the standard bs equat ( q = 1 ) which is use by economist to calcul option price requir multipl valu of the stock volatil ( known as the volatil smile ) . use q = 1.5 which well model the empir distribut of return , we get a good descript of option price use a singl volatil","ordered_present_kp":[75,0,176,291,430,459,510],"keyphrases":["option pricing formulas","financial instruments","nonextensive thermodynamics parameter","closed-form solutions","stock volatility","volatility smile","empirical distribution","nonGaussian stock price model","Black-Scholes partial differential equation"],"prmu":["P","P","P","P","P","P","P","M","R"]} {"id":"904","title":"Modeling and simulation of adaptive available bit rate voice over asynchronous transfer mode networks","abstract":"This article presents a modeling and simulation methodology to analyze the performance of voice quality when sent over the available bit rate service in asynchronous transfer mode networks. Sources can modify the rate at which they send traffic to the network based on the feedback carried in the resource management cells. This is achieved by changing the encoding level. As the contention increases to network resources-bandwidth in this case-sources start reducing the rate at which they generate and send traffic. The efficiency of the scheme under different scheduling\/drop policies and other operating conditions and environments is evaluated using simulation modeling. Furthermore, sensitivity analysis is applied to different parameters, such as queue size and averaging interval length, to investigate their impact on the performance metrics. 
Results show that limiting the load to 41% of the link capacity results in an acceptable quality","tok_text":"model and simul of adapt avail bit rate voic over asynchron transfer mode network \n thi articl present a model and simul methodolog to analyz the perform of voic qualiti when sent over the avail bit rate servic in asynchron transfer mode network . sourc can modifi the rate at which they send traffic to the network base on the feedback carri in the resourc manag cell . thi is achiev by chang the encod level . as the content increas to network resources-bandwidth in thi case-sourc start reduc the rate at which they gener and send traffic . the effici of the scheme under differ schedul \/ drop polici and other oper condit and environ is evalu use simul model . furthermor , sensit analysi is appli to differ paramet , such as queue size and averag interv length , to investig their impact on the perform metric . result show that limit the load to 41 % of the link capac result in an accept qualiti","ordered_present_kp":[10,0,157,293,328,350,398,582,730,745,800,864,19],"keyphrases":["modeling","simulation","adaptive available bit rate voice","voice quality","traffic","feedback","resource management cells","encoding level","scheduling\/drop policies","queue size","averaging interval length","performance metrics","link capacity","performance analysis","bandwidth contention","ATM networks"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","R","M","M"]} {"id":"596","title":"Copyright management in the digital age","abstract":"Listening to and buying music online is becoming increasingly popular with consumers. So much so that Merrill Lynch forecasts the value of the online music market will explode from $8 million in 2001 to $1,409 million in 2005. But online delivery is not without problems; the issue of copyright management in particular has become a serious thorn in the side for digital content creators. 
Martin Brass, ex- music producer and senior industry consultant at Syntegra, explains","tok_text":"copyright manag in the digit age \n listen to and buy music onlin is becom increasingli popular with consum . so much so that merril lynch forecast the valu of the onlin music market will explod from $ 8 million in 2001 to $ 1,409 million in 2005 . but onlin deliveri is not without problem ; the issu of copyright manag in particular ha becom a seriou thorn in the side for digit content creator . martin brass , ex- music produc and senior industri consult at syntegra , explain","ordered_present_kp":[23,374],"keyphrases":["digital age","digital content creators","online music delivery","music industry","Internet","Napster"],"prmu":["P","P","R","R","U","U"]} {"id":"1287","title":"On average depth of decision trees implementing Boolean functions","abstract":"The article considers the representation of Boolean functions in the form of decision trees. It presents the bounds on average time complexity of decision trees for all classes of Boolean functions that are closed over substitution, and the insertion and deletion of unessential variables. The obtained results are compared with the results developed by M.Ju. Moshkov (1995) that describe the worst case time complexity of decision trees","tok_text":"on averag depth of decis tree implement boolean function \n the articl consid the represent of boolean function in the form of decis tree . it present the bound on averag time complex of decis tree for all class of boolean function that are close over substitut , and the insert and delet of unessenti variabl . the obtain result are compar with the result develop by m.ju . 
moshkov ( 1995 ) that describ the worst case time complex of decis tree","ordered_present_kp":[3,19,40,163,408],"keyphrases":["average depth","decision trees","Boolean functions","average time complexity","worst case time complexity"],"prmu":["P","P","P","P","P"]} {"id":"697","title":"Schedulability analysis of real-time traffic in WorldFIP networks: an integrated approach","abstract":"The WorldFIP protocol is one of the profiles that constitute the European fieldbus standard EN-50170. It is particularly well suited to be used in distributed computer-controlled systems where a set of process variables must be shared among network devices. To cope with the real-time requirements of such systems, the protocol provides communication services based on the exchange of periodic and aperiodic identified variables. The periodic exchanges have the highest priority and are executed at run time according to a cyclic schedule. Therefore, the respective schedulability can be determined at pre-run-time when building the schedule table. Concerning the aperiodic exchanges, the situation is different since their priority is lower and they are handled according to a first-come-first-served policy. In this paper, a response-time-based schedulability analysis for the real-time traffic is presented. Such analysis considers both types of traffic in an integrated way, according to their priorities. Furthermore, a fixed-priorities-based policy is also used to schedule the periodic traffic. The proposed analysis represents an improvement relative to previous work and it can be evaluated online as part of a traffic online admission control. 
This feature is of particular importance when a planning scheduler is used, instead of the typical offline static scheduler, to allow online changes to the set of periodic process variables","tok_text":"schedul analysi of real-tim traffic in worldfip network : an integr approach \n the worldfip protocol is one of the profil that constitut the european fieldbu standard en-50170 . it is particularli well suit to be use in distribut computer-control system where a set of process variabl must be share among network devic . to cope with the real-tim requir of such system , the protocol provid commun servic base on the exchang of period and aperiod identifi variabl . the period exchang have the highest prioriti and are execut at run time accord to a cyclic schedul . therefor , the respect schedul can be determin at pre-run-tim when build the schedul tabl . concern the aperiod exchang , the situat is differ sinc their prioriti is lower and they are bandi accord to a first-come-first-serv polici . in thi paper , a response-time-bas schedul analysi for the real-tim traffic is present . such analysi consid both type of traffic in an integr way , accord to their prioriti . furthermor , a fixed-priorities-bas polici is also use to schedul the period traffic . the propos analysi repres an improv rel to previou work and it can be evalu onlin as part of a traffic onlin admiss control . 
thi featur is of particular import when a plan schedul is use , instead of the typic offlin static schedul , to allow onlin chang to the set of period process variabl","ordered_present_kp":[39,220,391,671,1159,770,1334],"keyphrases":["WorldFIP Networks","distributed computer-controlled systems","communication services","aperiodic exchanges","first-come-first-served policy","traffic online admission control","periodic process variables","EN-50170 European fieldbus standard","real-time traffic schedulability analysis","real-time communication","scheduling algorithms","response time"],"prmu":["P","P","P","P","P","P","P","R","R","R","M","M"]} {"id":"1301","title":"Integrate-and-fire neurons driven by correlated stochastic input","abstract":"Neurons are sensitive to correlations among synaptic inputs. However, analytical models that explicitly include correlations are hard to solve analytically, so their influence on a neuron's response has been difficult to ascertain. To gain some intuition on this problem, we studied the firing times of two simple integrate-and-fire model neurons driven by a correlated binary variable that represents the total input current. Analytic expressions were obtained for the average firing rate and coefficient of variation (a measure of spike-train variability) as functions of the mean, variance, and correlation time of the stochastic input. The results of computer simulations were in excellent agreement with these expressions. In these models, an increase in correlation time in general produces an increase in both the average firing rate and the variability of the output spike trains. However, the magnitude of the changes depends differentially on the relative values of the input mean and variance: the increase in firing rate is higher when the variance is large relative to the mean, whereas the increase in variability is higher when the variance is relatively small. 
In addition, the firing rate always tends to a finite limit value as the correlation time increases toward infinity, whereas the coefficient of variation typically diverges. These results suggest that temporal correlations may play a major role in determining the variability as well as the intensity of neuronal spike trains","tok_text":"integrate-and-fir neuron driven by correl stochast input \n neuron are sensit to correl among synapt input . howev , analyt model that explicitli includ correl are hard to solv analyt , so their influenc on a neuron 's respons ha been difficult to ascertain . to gain some intuit on thi problem , we studi the fire time of two simpl integrate-and-fir model neuron driven by a correl binari variabl that repres the total input current . analyt express were obtain for the averag fire rate and coeffici of variat ( a measur of spike-train variabl ) as function of the mean , varianc , and correl time of the stochast input . the result of comput simul were in excel agreement with these express . in these model , an increas in correl time in gener produc an increas in both the averag fire rate and the variabl of the output spike train . howev , the magnitud of the chang depend differenti on the rel valu of the input mean and varianc : the increas in fire rate is higher when the varianc is larg rel to the mean , wherea the increas in variabl is higher when the varianc is rel small . in addit , the fire rate alway tend to a finit limit valu as the correl time increas toward infin , wherea the coeffici of variat typic diverg . 
these result suggest that tempor correl may play a major role in determin the variabl as well as the intens of neuron spike train","ordered_present_kp":[0,35,309,375,491,524,636,816,1258],"keyphrases":["integrate-and-fire neurons","correlated stochastic input","firing times","correlated binary variable","coefficient of variation","spike-train variability","computer simulation","output spike trains","temporal correlations","synaptic input correlations"],"prmu":["P","P","P","P","P","P","P","P","P","R"]} {"id":"1344","title":"Restoration of archival documents using a wavelet technique","abstract":"This paper addresses a problem of restoring handwritten archival documents by recovering their contents from the interfering handwriting on the reverse side caused by the seeping of ink. We present a novel method that works by first matching both sides of a document such that the interfering strokes are mapped with the corresponding strokes originating from the reverse side. This facilitates the identification of the foreground and interfering strokes. A wavelet reconstruction process then iteratively enhances the foreground strokes and smears the interfering strokes so as to strengthen the discriminating capability of an improved Canny edge detector against the interfering strokes. The method has been shown to restore the documents effectively with average precision and recall rates for foreground text extraction at 84 percent and 96 percent, respectively","tok_text":"restor of archiv document use a wavelet techniqu \n thi paper address a problem of restor handwritten archiv document by recov their content from the interf handwrit on the revers side caus by the seep of ink . we present a novel method that work by first match both side of a document such that the interf stroke are map with the correspond stroke origin from the revers side . thi facilit the identif of the foreground and interf stroke . 
a wavelet reconstruct process then iter enhanc the foreground stroke and smear the interf stroke so as to strengthen the discrimin capabl of an improv canni edg detector against the interf stroke . the method ha been shown to restor the document effect with averag precis and recal rate for foreground text extract at 84 percent and 96 percent , respect","ordered_present_kp":[32,89,442,591],"keyphrases":["wavelet technique","handwritten archival documents","wavelet reconstruction process","Canny edge detector","archival documents restoration","ink seepage","iterative stroke enhancement"],"prmu":["P","P","P","P","R","M","R"]} {"id":"711","title":"On bivariate dependence and the convex order","abstract":"We investigate the interplay between variability (in the sense of the convex order) and dependence in a bivariate framework, extending some previous results in this area. We exploit the fact that discrete uniform distributions are dense in the space of probability measures in the topology of weak convergence to prove our central result. We also obtain a partial result in the general multivariate case. Our findings can be interpreted in terms of the impact of component variability on the mean life of correlated serial and parallel systems","tok_text":"on bivari depend and the convex order \n we investig the interplay between variabl ( in the sens of the convex order ) and depend in a bivari framework , extend some previou result in thi area . we exploit the fact that discret uniform distribut are dens in the space of probabl measur in the topolog of weak converg to prove our central result . we also obtain a partial result in the gener multivari case . 
our find can be interpret in term of the impact of compon variabl on the mean life of correl serial and parallel system","ordered_present_kp":[3,25,219,270,292,303,459,481,512],"keyphrases":["bivariate dependence","convex order","discrete uniform distributions","probability measures","topology","weak convergence","component variability","mean life","parallel systems","serial systems","bivariate probability distributions"],"prmu":["P","P","P","P","P","P","P","P","P","R","R"]} {"id":"754","title":"Record makers [UK health records]","abstract":"Plans for a massive cradle-to-grave electronic records project have been revealed by the government. Is the scheme really viable?","tok_text":"record maker [ uk health record ] \n plan for a massiv cradle-to-grav electron record project have been reveal by the govern . is the scheme realli viabl ?","ordered_present_kp":[15,69],"keyphrases":["UK health records","electronic records project","integrated care records services","health care","social care"],"prmu":["P","P","M","M","U"]} {"id":"1000","title":"Does classicism explain universality? Arguments against a pure classical component of mind","abstract":"One of the hallmarks of human cognition is the capacity to generalize over arbitrary constituents. Marcus (Cognition 66, p.153; Cognitive Psychology 37, p. 243, 1998) argued that this capacity, called \"universal generalization\" (universality), is not supported by connectionist models. Instead, universality is best explained by classical symbol systems, with connectionism as its implementation. Here it is argued that universality is also a problem for classicism in that the syntax-sensitive rules that are supposed to provide causal explanations of mental processes are either too strict, precluding possible generalizations; or too lax, providing no information as to the appropriate alternative. Consequently, universality is not explained by a classical theory","tok_text":"doe classic explain univers ? 
argument against a pure classic compon of mind \n one of the hallmark of human cognit is the capac to gener over arbitrari constitu . marcu ( cognit 66 , p.153 ; cognit psycholog 37 , p. 243 , 1998 ) argu that thi capac , call \" univers gener \" ( univers ) , is not support by connectionist model . instead , univers is best explain by classic symbol system , with connection as it implement . here it is argu that univers is also a problem for classic in that the syntax-sensit rule that are suppos to provid causal explan of mental process are either too strict , preclud possibl gener ; or too lax , provid no inform as to the appropri altern . consequ , univers is not explain by a classic theori","ordered_present_kp":[4,20,54,102,258,306,365,494,539,556],"keyphrases":["classicism","universality","classical component of mind","human cognition","universal generalization","connectionist models","classical symbol systems","syntax-sensitive rules","causal explanations","mental processes"],"prmu":["P","P","P","P","P","P","P","P","P","P"]} {"id":"1045","title":"Using fractional order adjustment rules and fractional order reference models in model-reference adaptive control","abstract":"This paper investigates the use of Fractional Order Calculus (FOC) in conventional Model Reference Adaptive Control (MRAC) systems. Two modifications to the conventional MRAC are presented, i.e., the use of fractional order parameter adjustment rule and the employment of fractional order reference model. Through examples, benefits from the use of FOC are illustrated together with some remarks for further research","tok_text":"use fraction order adjust rule and fraction order refer model in model-refer adapt control \n thi paper investig the use of fraction order calculu ( foc ) in convent model refer adapt control ( mrac ) system . two modif to the convent mrac are present , i.e. , the use of fraction order paramet adjust rule and the employ of fraction order refer model . 
through exampl , benefit from the use of foc are illustr togeth with some remark for further research","ordered_present_kp":[4,35,65,193,148],"keyphrases":["fractional order adjustment rules","fractional order reference models","model-reference adaptive control","FOC","MRAC","fractional calculus"],"prmu":["P","P","P","P","P","R"]} {"id":"882","title":"On M\/D\/1 queue with deterministic server vacations","abstract":"We study a single server vacation queue with Poisson arrivals, deterministic service of constant duration b (> 0) and deterministic vacations of constant duration d (> 0) and designate this model as M\/D\/D\/1. After completion of each service, the server may take a vacation with probability p or may continue working in the system with probability 1 - p. We obtain time-dependent as well as steady state probability generation functions for the number in the system. For the steady state we obtain explicitly the mean number and the mean waiting time for the system and for the queue. All known results of the M\/D\/1 queue are derived as a special case. Finally, a numerical illustration is discussed","tok_text":"on m \/ d\/1 queue with determinist server vacat \n we studi a singl server vacat queue with poisson arriv , determinist servic of constant durat b ( > 0 ) and determinist vacat of constant durat d ( > 0 ) and design thi model as m \/ d \/ d\/1 . after complet of each servic , the server may take a vacat with probabl p or may continu work in the system with probabl 1 - p. we obtain time-depend as well as steadi state probabl gener function for the number in the system . for the steadi state we obtain explicitli the mean number and the mean wait time for the system and for the queue . all known result of the m \/ d\/1 queue are deriv as a special case . 
final , a numer illustr is discuss","ordered_present_kp":[3,22,90,106,157,402,515,535],"keyphrases":["M\/D\/1 queue","deterministic server vacations","Poisson arrivals","deterministic service","deterministic vacations","steady state probability generation functions","mean number","mean waiting time","M\/D\/D\/1 model","time-dependent probability generation functions"],"prmu":["P","P","P","P","P","P","P","P","R","R"]} {"id":"548","title":"Cool and green [air conditioning]","abstract":"In these days of global warming, air conditioning engineers need to specify not just for the needs of the occupants, but also to maximise energy efficiency. Julian Brunnock outlines the key areas to consider for energy efficient air conditioning systems","tok_text":"cool and green [ air condit ] \n in these day of global warm , air condit engin need to specifi not just for the need of the occup , but also to maximis energi effici . julian brunnock outlin the key area to consid for energi effici air condit system","ordered_present_kp":[17,152],"keyphrases":["air conditioning","energy efficiency"],"prmu":["P","P"]} {"id":"1158","title":"From powder to perfect parts","abstract":"GKN Sinter Metals has increased productivity and quality by automating the powder metal lines that produce its transmission parts","tok_text":"from powder to perfect part \n gkn sinter metal ha increas product and qualiti by autom the powder metal line that produc it transmiss part","ordered_present_kp":[30,91,81],"keyphrases":["GKN Sinter Metals","automating","powder metal lines","conveyors","gentle transfer units","robotic systems"],"prmu":["P","P","P","U","U","U"]} {"id":"927","title":"Autonomous detection of crack initiation using surface-mounted piezotransducers","abstract":"In this paper we report on the application of an in situ health monitoring system, comprising an array of piezoceramic wafer elements, to the detection of fatigue degradation in metallic specimens exposed to cyclic loading. 
Lamb waves, transmitted through a beam test coupon, are sensed using small surface-mounted piezotransducer elements, and the signals are then autonomously analysed for indications relating to the onset of structural degradation. The experimental results confirm the efficacy of the approach and provide a demonstration of good robustness under realistic loading conditions, emphasizing the great potential for developing an automated in situ structural health monitoring system for application to fatigue-prone operational structures, such as aircraft","tok_text":"autonom detect of crack initi use surface-mount piezotransduc \n in thi paper we report on the applic of an in situ health monitor system , compris an array of piezoceram wafer element , to the detect of fatigu degrad in metal specimen expos to cyclic load . lamb wave , transmit through a beam test coupon , are sens use small surface-mount piezotransduc element , and the signal are then autonom analys for indic relat to the onset of structur degrad . the experiment result confirm the efficaci of the approach and provid a demonstr of good robust under realist load condit , emphas the great potenti for develop an autom in situ structur health monitor system for applic to fatigue-pron oper structur , such as aircraft","ordered_present_kp":[107,159,203,220,244,258,327,436,543,564,618,714],"keyphrases":["in situ health monitoring","piezoceramic wafer elements","fatigue degradation","metallic specimens","cyclic loading","Lamb waves","surface-mounted piezotransducer elements","structural degradation","robustness","loading conditions","automated in situ structural health monitoring","aircraft","fatigue operational structures"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"962","title":"Optimal control using the transport equation: the Liouville machine","abstract":"Transport theory describes the scattering behavior of physical particles such as photons. 
Here we show how to connect this theory to optimal control theory and to adaptive behavior of agents embedded in an environment. Environments and tasks are defined by physical boundary conditions. Given some task, we compute a set of probability densities on continuous state and action and time. From these densities we derive an optimal policy such that for all states the most likely action maximizes the probability of reaching a predefined goal state. Liouville's conservation theorem tells us that the conditional density at time t, state s, and action a must equal the density at t + dt, s + ds, a + da. Discretization yields a linear system that can be solved directly and whose solution corresponds to an optimal policy. Discounted reward schemes are incorporated naturally by taking the Laplace transform of the equations. The Liouville machine quickly solves rather complex maze problems","tok_text":"optim control use the transport equat : the liouvil machin \n transport theori describ the scatter behavior of physic particl such as photon . here we show how to connect thi theori to optim control theori and to adapt behavior of agent embed in an environ . environ and task are defin by physic boundari condit . given some task , we comput a set of probabl densiti on continu state and action and time . from these densiti we deriv an optim polici such that for all state the most like action maxim the probabl of reach a predefin goal state . liouvil 's conserv theorem tell us that the condit densiti at time t , state s , and action a must equal the densiti at t + dt , s + ds , a + da . discret yield a linear system that can be solv directli and whose solut correspond to an optim polici . discount reward scheme are incorpor natur by take the laplac transform of the equat . 
the liouvil machin quickli solv rather complex maze problem","ordered_present_kp":[0,22,44,90,110,212],"keyphrases":["optimal control","transport equation","Liouville machine","scattering behavior","physical particles","adaptive behavior","embedded agents"],"prmu":["P","P","P","P","P","P","R"]} {"id":"121","title":"Formula-dependent equivalence for compositional CTL model checking","abstract":"We present a polytime computable state equivalence that is defined with respect to a given CTL formula. Since it does not attempt to preserve all CTL formulas, like bisimulation does, we can expect to compute coarser equivalences. This equivalence can be used to reduce the complexity of model checking a system of interacting FSM. Additionally, we show that in some cases our techniques can detect if a formula passes or fails, without forming the entire product machine. The method is exact and fully automatic, and handles full CTL","tok_text":"formula-depend equival for composit ctl model check \n we present a polytim comput state equival that is defin with respect to a given ctl formula . sinc it doe not attempt to preserv all ctl formula , like bisimul doe , we can expect to comput coarser equival . thi equival can be use to reduc the complex of model check a system of interact fsm . addit , we show that in some case our techniqu can detect if a formula pass or fail , without form the entir product machin . 
the method is exact and fulli automat , and handl full ctl","ordered_present_kp":[0,36,67,134,333],"keyphrases":["formula-dependent equivalence","CTL model checking","polytime computable state equivalence","CTL formula","interacting FSM","compositional minimization","coarse equivalence","complexity reduction","automatic method","formal design verification","computation tree logic"],"prmu":["P","P","P","P","P","M","M","M","R","U","M"]} {"id":"73","title":"How does attitude impact IT implementation: a study of small business owners","abstract":"According to previous studies, attitude towards information technology (IT) among small business owners appears to be a key factor in achieving high quality IT implementations. In an effort to extend this stream of research, we conducted case studies with small business owners and learned that high quality IT implementations resulted with owners who had positive or negative attitudes toward IT, but not with owners who had uncertain attitudes. Owners with a polar attitude, either positive or negative, all took action to temper the uncertainty and risk surrounding the use of new IT in their organization. In contrast, owners with uncertain attitudes did not make mitigating attempts to reduce uncertainty and risk. A consistent finding among those with high quality IT implementations was an entrepreneurial, or shared, management style. It is proposed, based on case study data, that small business owners with an uncertain attitude towards IT might experience higher quality IT results in their organizations through practicing a more entrepreneurial, or shared, management style. The study provides insights for both computer specialists and small business owners planning IT implementations","tok_text":"how doe attitud impact it implement : a studi of small busi owner \n accord to previou studi , attitud toward inform technolog ( it ) among small busi owner appear to be a key factor in achiev high qualiti it implement . 
in an effort to extend thi stream of research , we conduct case studi with small busi owner and learn that high qualiti it implement result with owner who had posit or neg attitud toward it , but not with owner who had uncertain attitud . owner with apolar attitud , either posit or neg , all took action to temper the uncertainti and risk surround the use of new it in their organ . in contrast , owner with uncertain attitud did not make mitig attempt to reduc uncertainti and risk . a consist find among those with high qualiti it implement wa an entrepreneuri , or share , manag style . it is propos , base on case studi data , that small busi owner with an uncertain attitud toward it might experi higher qualiti it result in their organ through practic a more entrepreneuri , or share , manag style . the studi provid insight for both comput specialist and small busi owner plan it implement","ordered_present_kp":[49,388,439,555,596,797,1061,1100],"keyphrases":["small business owners","negative attitudes","uncertain attitude","risk","organization","management style","computer specialists","planning","information technology implementation","positive attitudes"],"prmu":["P","P","P","P","P","P","P","P","R","R"]} {"id":"649","title":"Methods for outlier detection in prediction","abstract":"If a prediction sample is different from the calibration samples, it can be considered as an outlier in prediction. In this work, two techniques, the use of uncertainty estimation and the convex hull method are studied to detect such prediction outliers. Classical techniques (Mahalanobis distance and X-residuals), potential functions and robust techniques are used for comparison. It is concluded that the combination of the convex hull method and uncertainty estimation offers a practical way for detecting outliers in prediction. 
By adding the potential function method, inliers can also be detected","tok_text":"method for outlier detect in predict \n if a predict sampl is differ from the calibr sampl , it can be consid as an outlier in predict . in thi work , two techniqu , the use of uncertainti estim and the convex hull method are studi to detect such predict outlier . classic techniqu ( mahalanobi distanc and x-residu ) , potenti function and robust techniqu are use for comparison . it is conclud that the combin of the convex hull method and uncertainti estim offer a practic way for detect outlier in predict . by ad the potenti function method , inlier can also be detect","ordered_present_kp":[11,44,77,176,202,283,306,319,340,547],"keyphrases":["outlier detection","prediction sample","calibration samples","uncertainty estimation","convex hull method","Mahalanobis distance","X-residuals","potential functions","robust techniques","inliers"],"prmu":["P","P","P","P","P","P","P","P","P","P"]} {"id":"1259","title":"A mechanism for inferring approximate solutions under incomplete knowledge based on rule similarity","abstract":"This paper proposes an inference method which can obtain an approximate solution even if the knowledge stored in the problem-solving system is incomplete. When a rule needed for solving the problem does not exist, the problem can be solved by using rules similar to the existing rules. In an implementation using the SLD procedure, a resolution is executed between a subgoal and a rule if an atom of the subgoal is similar to the consequence atom of the rule. Similarities between atoms are calculated using a knowledge base of words with account of the reasoning situation, and the reliability of the derived solution is calculated based on these similarities. If many solutions are obtained, they are grouped into classes of similar solutions and a representative solution is then selected for each class. 
The proposed method was verified experimentally by solving simple problems","tok_text":"a mechan for infer approxim solut under incomplet knowledg base on rule similar \n thi paper propos an infer method which can obtain an approxim solut even if the knowledg store in the problem-solv system is incomplet . when a rule need for solv the problem doe not exist , the problem can be solv by use rule similar to the exist rule . in an implement use the sld procedur , a resolut is execut between a subgoal and a rule if an atom of the subgoal is similar to the consequ atom of the rule . similar between atom are calcul use a knowledg base of word with account of the reason situat , and the reliabl of the deriv solut is calcul base on these similar . if mani solut are obtain , they are group into class of similar solut and a repres solut is then select for each class . the propos method wa verifi experiment by solv simpl problem","ordered_present_kp":[102,19,40,67,361,469,576,600,737],"keyphrases":["approximate solution","incomplete knowledge","rule similarity","inference method","SLD procedure","consequence atom","reasoning","reliability","representative solution","problem solving","subgoal atom","word knowledge base","common sense knowledge"],"prmu":["P","P","P","P","P","P","P","P","P","R","R","R","M"]} {"id":"1198","title":"Post-projected Runge-Kutta methods for index-2 differential-algebraic equations","abstract":"A new projection technique for Runge-Kutta methods applied to index-2 differential-algebraic equations is presented in which the numerical approximation is projected only as part of the output process. It is shown that for methods that are strictly stable at infinity, the order of convergence is unaffected compared to standard projected methods. 
Gauss methods, for which this technique is of special interest when some symmetry is to be preserved, are studied in more detail","tok_text":"post-project runge-kutta method for index-2 differential-algebra equat \n a new project techniqu for runge-kutta method appli to index-2 differential-algebra equat is present in which the numer approxim is project onli as part of the output process . it is shown that for method that are strictli stabl at infin , the order of converg is unaffect compar to standard project method . gauss method , for which thi techniqu is of special interest when some symmetri is to be preserv , are studi in more detail","ordered_present_kp":[0,187,317,365,36],"keyphrases":["post-projected Runge-Kutta methods","index-2 differential-algebraic equations","numerical approximation","order of convergence","projected methods"],"prmu":["P","P","P","P","P"]} {"id":"588","title":"An accurate COG defuzzifier design using Lamarckian co-adaptation of learning and evolution","abstract":"This paper proposes a design technique of optimal center of gravity (COG) defuzzifier using the Lamarckian co-adaptation of learning and evolution. The proposed COG defuzzifier is specified by various design parameters such as the centers, widths, and modifiers of MFs. The design parameters are adjusted with the Lamarckian co-adaptation of learning and evolution, where the learning performs a local search of design parameters in an individual COG defuzzifier, but the evolution performs a global search of design parameters among a population of various COG defuzzifiers. This co-adaptation scheme allows the defuzzifier to evolve much faster than in the non-learning case and gives a higher possibility of finding an optimal solution due to its wider searching capability. An application to the truck backer-upper control problem of the proposed co-adaptive design method of COG defuzzifier is presented. 
The approximation ability and control performance are compared with those of the conventionally simplified COG defuzzifier in terms of the fuzzy logic controller's approximation error and the average tracing distance, respectively","tok_text":"an accur cog defuzzifi design use lamarckian co-adapt of learn and evolut \n thi paper propos a design techniqu of optim center of graviti ( cog ) defuzzifi use the lamarckian co-adapt of learn and evolut . the propos cog defuzzifi is specifi by variou design paramet such as the center , width , and modifi of mf . the design paramet are adjust with the lamarckian co-adapt of learn and evolut , where the learn perform a local search of design paramet in an individu cog defuzzifi , but the evolut perform a global search of design paramet among a popul of variou cog defuzzifi . thi co-adapt scheme allow to evolv much faster than the non-learn case and give a higher possibl of find an optim solut due to it wider search capabl . an applic to the truck backer-upp control problem of the propos co-adapt design method of cog defuzzifi is present . the approxim abil and control perform are compar with those of the convent simplifi cog defuzzifi in term of the fuzzi logic control 's approxim error and the averag trace distanc , respect","ordered_present_kp":[57,67,963,422],"keyphrases":["learning","evolution","local search","fuzzy logic controller","optimal center of gravity defuzzifier"],"prmu":["P","P","P","P","R"]} {"id":"674","title":"Portal payback","abstract":"The benefits of deploying a corporate portal are well-documented: access to applications and content is centralised, so users do not spend hours searching for information; the management of disparate applications is also centralised, and by allowing users to access 'self-service' applications in areas such as human resources and procurement, organisations spend less time on manual processing tasks. 
But how far can prospective customers rely on the ROI figures presented to them by portal technology vendors? In particular, how reliable are the 'ROI calculators' these vendors supply on their web sites?","tok_text":"portal payback \n the benefit of deploy a corpor portal are well-docu : access to applic and content is centralis , so user do not spend hour search for inform ; the manag of dispar applic is also centralis , and by allow user to access ' self-servic ' applic in area such as human resourc and procur , organis spend less time on manual process task . but how far can prospect custom reli on the roi figur present to them by portal technolog vendor ? in particular , how reliabl are the ' roi calcul ' these vendor suppli on their web site ?","ordered_present_kp":[41,488,530],"keyphrases":["corporate portal","ROI calculator","web sites","return on investment","metrics"],"prmu":["P","P","P","M","U"]} {"id":"631","title":"A modified Fieller interval for the interval estimation of effective doses for a logistic dose-response curve","abstract":"Interval estimation of the gamma % effective dose ( mu \/sub gamma \/ say) is often based on the asymptotic variance of the maximum likelihood estimator (delta interval) or Fieller's theorem (Fieller interval). Sitter and Wu (1993) compared the delta and Fieller intervals for the median effective dose ( mu \/sub 50\/) assuming a logistic dose-response curve. Their results indicated that although Fieller intervals are generally superior to delta intervals, they appear to be conservative. Here an adjusted form of the Fieller interval for mu \/sub gamma \/ termed an adjusted Fieller (AF) interval is introduced. 
A comparison of the AF interval with the delta and Fieller intervals is provided and the properties of these three interval estimation methods are investigated","tok_text":"a modifi fieller interv for the interv estim of effect dose for a logist dose-respons curv \n interv estim of the gamma % effect dose ( mu \/sub gamma \/ say ) is often base on the asymptot varianc of the maximum likelihood estim ( delta interv ) or fieller 's theorem ( fieller interv ) . sitter and wu ( 1993 ) compar the delta and fieller interv for the median effect dose ( mu \/sub 50\/ ) assum a logist dose-respons curv . their result indic that although fieller interv are gener superior to delta interv , they appear to be conserv . here an adjust form of the fieller interv for mu \/sub gamma \/ term an adjust fieller ( af ) interv is introduc . a comparison of the af interv with the delta and fieller interv is provid and the properti of these three interv estim method are investig","ordered_present_kp":[2,32,48,66,178,202,229,247,354],"keyphrases":["modified Fieller interval","interval estimation","effective doses","logistic dose-response curve","asymptotic variance","maximum likelihood estimator","delta interval","Fieller's theorem","median effective dose"],"prmu":["P","P","P","P","P","P","P","P","P"]} {"id":"1264","title":"Estimation of the vanishing point for automatic driving system using a cross ratio","abstract":"This paper proposes a new method to estimate the vanishing point used as the vehicle heading, which is essential in automatic driving systems. The proposed method uses a cross ratio comprised of a ratio of lengths from four collinear points for extracting the edges that shape the vanishing point. Then, lines that intersect at one point are fitted to the edges in a Hough space. Consequently, the vanishing point is estimated robustly even when the lane markings are occluded by other vehicles. 
In the presence of lane markings, the road boundaries are also estimated at the same time. Experimental results from images of a real road scene show the effectiveness of the proposed method","tok_text":"estim of the vanish point for automat drive system use a cross ratio \n thi paper propos a new method to estim the vanish point use as the vehicl head , which is essenti in automat drive system . the propos method use a cross ratio compris of a ratio of length from four collinear point for extract the edg that shape the vanish point . then , line that intersect at one point are fit to the edg in a hough space . consequ , the vanish point is estim robustli even when the lane mark are occlud by other vehicl . in the presenc of lane mark , the road boundari are also estim at the same time . experiment result from imag of a real road scene show the effect of the propos method","ordered_present_kp":[30,57,30,270,400,473,627],"keyphrases":["automatic driving system","automatic driving system","cross ratio","collinear points","Hough space","lane markings","real road scene","vanishing point estimation","automatic driving systems"],"prmu":["P","P","P","P","P","P","P","R","P"]} {"id":"1221","title":"An approach to developing computational supports for reciprocal tutoring","abstract":"This study presents a novel approach to developing computational supports for reciprocal tutoring. Reciprocal tutoring is a collaborative learning activity, where two participants take turns to play the role of a tutor and a tutee. The computational supports include scaffolding tools for the tutor and a computer-simulated virtual participant. The approach, including system architecture, implementations of scaffolding tools for the tutor and of a virtual participant is presented herein. 
Furthermore, a system for reciprocal tutoring is implemented as an example of the approach","tok_text":"an approach to develop comput support for reciproc tutor \n thi studi present a novel approach to develop comput support for reciproc tutor . reciproc tutor is a collabor learn activ , where two particip take turn to play the role of a tutor and a tute . the comput support includ scaffold tool for the tutor and a computer-simul virtual particip . the approach , includ system architectur , implement of scaffold tool for the tutor and of a virtual particip is present herein . furthermor , a system for reciproc tutor is implement as an exampl of the approach","ordered_present_kp":[161,280,314,370],"keyphrases":["collaborative learning","scaffolding tools","computer-simulated virtual participant","system architecture","reciprocal tutoring computational support","intelligent tutoring system"],"prmu":["P","P","P","P","R","M"]} {"id":"1299","title":"How much should publishers spend on technology?","abstract":"A study confirms that spending on publishing-specific information technology (IT) resources is growing much faster than IT spending for general business activities, at least among leading publishers in the scientific, technical and medical (STM) market. The survey asked about information technology funding and staffing levels-past, present and future-and also inquired about activities in content management, Web delivery, computer support and customer relationship management. The results provide a starting point for measuring information technology growth and budget allocations in this publishing segment","tok_text":"how much should publish spend on technolog ? \n a studi confirm that spend on publishing-specif inform technolog ( it ) resourc is grow much faster than it spend for gener busi activ , at least among lead publish in the scientif , technic and medic ( stm ) market . 
the survey ask about inform technolog fund and staf levels-past , present and future-and also inquir about activ in content manag , web deliveri , comput support and custom relationship manag . the result provid a start point for measur inform technolog growth and budget alloc in thi publish segment","ordered_present_kp":[152,381,397,16,530,412,431],"keyphrases":["publishing","IT spending","content management","Web delivery","computer support","customer relationship management","budget"],"prmu":["P","P","P","P","P","P","P"]} {"id":"689","title":"Continuous-time linear systems: folklore and fact","abstract":"We consider a family of continuous input-output maps representing linear time-invariant systems that take a set of signals into itself. It is shown that this family contains maps whose impulse response is the zero function, but which take certain inputs into nonzero outputs. It is shown also that this family contains members whose input-output properties are not described by their frequency domain response functions, and that the maps considered need not even commute","tok_text":"continuous-tim linear system : folklor and fact \n we consid a famili of continu input-output map repres linear time-invari system that take a set of signal into itself . it is shown that thi famili contain map whose impuls respons is the zero function , but which take certain input into nonzero output . 
it is shown also that thi famili contain member whose input-output properti are not describ by their frequenc domain respons function , and that the map consid need not even commut","ordered_present_kp":[15,72,111,216,238,406,479],"keyphrases":["linear systems","continuous input-output maps","time-invariant systems","impulse response","zero function","frequency domain response","commutation","continuous-time systems","signal processing"],"prmu":["P","P","P","P","P","P","P","R","M"]} {"id":"575","title":"A new voltage-vector selection algorithm in direct torque control of induction motor drives","abstract":"AC drives based on direct torque control of induction machines allow high dynamic performance to be obtained with very simple control schemes. The drive behavior, in terms of current, flux and torque ripple, is dependent on the utilised voltage vector selection strategy and the operating conditions. In this paper a new voltage vector selection algorithm, which allows a sensible reduction of the RMS value of the stator current ripple without increasing the average value of the inverter switching frequency and without the need of a PWM pulse generator block is presented. Numerical simulations have been carried out to validate the proposed method","tok_text":"a new voltage-vector select algorithm in direct torqu control of induct motor drive \n ac drive base on direct torqu control of induct machin allow high dynam perform to be obtain with veri simpl control scheme . the drive behavior , in term of current , flux and torqu rippl , is depend on the utilis voltag vector select strategi and the oper condit . 
in thi paper a new voltag vector select algorithm , which allow a sensibl reduct of the rm valu of the stator current rippl without increas the averag valu of the invert switch frequenc and without the need of a pwm puls gener block is present numer simul have been carri out to valid the propos method","ordered_present_kp":[6,41,65,86,147,263,301,339,441,456,516],"keyphrases":["voltage-vector selection algorithm","direct torque control","induction motor drives","AC drives","high dynamic performance","torque ripple","voltage vector selection strategy","operating conditions","RMS value","stator current ripple","inverter switching frequency","torque variations","flux variations","4-poles induction motor","steady-state operation","dynamic behavior","torque step response","220 V","50 Hz","4 kW"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","M","M","M","M","R","M","U","U","U"]} {"id":"1165","title":"Recognizing groups G\/sub 2\/(3\/sup n\/) by their element orders","abstract":"It is proved that a finite group that is isomorphic to a simple non-Abelian group G = G\/sub 2\/(3\/sup n\/) is, up to isomorphism, recognized by a set omega (G) of its element orders, that is, H approximately= G if omega (H) = omega (G) for some finite group H","tok_text":"recogn group g \/ sub 2\/(3 \/ sup n\/ ) by their element order \n it is prove that a finit group that is isomorph to a simpl non-abelian group g = g \/ sub 2\/(3 \/ sup n\/ ) is , up to isomorph , recogn by a set omega ( g ) of it element order , that is , h approximately= g if omega ( h ) = omega ( g ) for some finit group h","ordered_present_kp":[46,81,101],"keyphrases":["element orders","finite group","isomorphism"],"prmu":["P","P","P"]} {"id":"1120","title":"An effective feedback control mechanism for DiffServ architecture","abstract":"As a scalable QoS (Quality of Service) architecture, Diffserv (Differentiated Service) mainly consists of two components: traffic conditioning at the edge of the Diffserv domain and 
simple packet forwarding inside the DiffServ domain. DiffServ has many advantages such as flexibility, scalability and simplicity. But when providing AF (Assured Forwarding) services, DiffServ has some problems such as unfairness among aggregated flows or among micro-flows belonging to an aggregated flow. In this paper, a feedback mechanism for AF aggregated flows is proposed to solve this problem. Simulation results show that this mechanism does improve the performance of DiffServ. First, it can improve the fairness among aggregated flows and make DiffServ more friendly toward TCP (Transmission Control Protocol) flows. Second, it can decrease the buffer requirements at the congested router and thus obtain lower delay and packet loss rate. Third, it also keeps almost the same link utility as in normal DiffServ. Finally, it is simple and easy to be implemented","tok_text":"an effect feedback control mechan for diffserv architectur \n as a scalabl qo ( qualiti of servic ) architectur , diffserv ( differenti servic ) mainli consist of two compon : traffic condit at the edg of the diffserv domain and simpl packet forward insid the diffserv domain . diffserv ha mani advantag such as flexibl , scalabl and simplic . but when provid af ( assur forward ) servic , diffserv ha some problem such as unfair among aggreg flow or among micro-flow belong to an aggreg flow . in thi paper , a feedback mechan for af aggreg flow is propos to solv thi problem . simul result show that thi mechan doe improv the perform of diffserv . first , it can improv the fair among aggreg flow and make diffserv more friendli toward tcp ( transmiss control protocol ) flow . second , it can decreas the buffer requir at the congest router and thu obtain lower delay and packet loss rate . third , it also keep almost the same link util as in normal diffserv . 
final , it is simpl and easi to be implement","ordered_present_kp":[74,38,175,234,177,511,424,737,10],"keyphrases":["feedback control","Diffserv","QoS","traffic conditioning","AF","packet forwarding","fairness","feedback mechanism","TCP","QoS architecture"],"prmu":["P","P","P","P","P","P","P","P","P","R"]} {"id":"794","title":"On the discretization of double-bracket flows","abstract":"This paper extends the method of Magnus series to Lie-algebraic equations originating in double-bracket flows. We show that the solution of the isospectral flow Y' = [[Y,N],Y], Y(0) = Y\/sub 0\/ in Sym(n), can be represented in the form Y(t) = e\/sup Omega (t)\/Y\/sub 0\/e\/sup - Omega (t)\/, where the Taylor expansion of Omega can be constructed explicitly, term-by-term, identifying individual expansion terms with certain rooted trees with bicolor leaves. This approach is extended to other Lie-algebraic equations that can be appropriately expressed in terms of a finite \"alphabet\"","tok_text":"on the discret of double-bracket flow \n thi paper extend the method of magnu seri to lie-algebra equat origin in double-bracket flow . we show that the solut of the isospectr flow y ' = [ [ y , n],y ] , y(0 ) = y \/ sub 0\/ in sym(n ) , can be repres in the form y(t ) = e \/ sup omega ( t)\/y \/ sub 0 \/ e \/ sup - omega ( t)\/ , where the taylor expans of omega can be construct explicitli , term-by-term , identifi individu expans term with certain root tree with bicolor leav . 
thi approach is extend to other lie-algebra equat that can be appropri express in term of a finit \" alphabet \"","ordered_present_kp":[71,85,165,334,460],"keyphrases":["Magnus series","Lie-algebraic equations","isospectral flow","Taylor expansion","bicolor leaves","double-bracket flows discretization"],"prmu":["P","P","P","P","P","R"]} {"id":"1384","title":"Data allocation on wireless broadcast channels for efficient query processing","abstract":"Data broadcast is an excellent method for efficient data dissemination in the mobile computing environment. The application domain of data broadcast will be widely expanded in the near future, where the client is expected to perform complex queries or transactions on the broadcast data. To reduce the access latency for processing the complex query, it is beneficial to place the data accessed in a query close to each other on the broadcast channel. In this paper, we propose an efficient algorithm to determine the allocation of the data on the broadcast channel such that frequently co-accessed data are not only allocated close to each other, but also in a particular order which optimizes the performance of query processing. Our mechanism is based on the well-known problem named optimal linear ordering. Experiments are performed to justify the benefit of our approach","tok_text":"data alloc on wireless broadcast channel for effici queri process \n data broadcast is an excel method for effici data dissemin in the mobil comput environ . the applic domain of data broadcast will be wide expand in the near futur , where the client is expect to perform complex queri or transact on the broadcast data . to reduc the access latenc for process the complex queri , it is benefici to place the data access in a queri close to each other on the broadcast channel . 
in thi paper , we propos an effici algorithm to determin the alloc of the data on the broadcast channel such that frequent co-access data are not onli alloc close to each other , but also in a particular order which optim the perform of queri process . our mechan is base on the well-known problem name optim linear order . experi are perform to justifi the benefit of our approach","ordered_present_kp":[52,14,334,134],"keyphrases":["wireless broadcast channels","query processing","mobile computing","access latency","database broadcasting","access time","tuning time","broadcast program"],"prmu":["P","P","P","P","M","M","U","M"]} {"id":"1078","title":"Action aggregation and defuzzification in Mamdani-type fuzzy systems","abstract":"Discusses the issues of action aggregation and defuzzification in Mamdani-type fuzzy systems. The paper highlights the shortcomings of defuzzification techniques associated with the customary interpretation of the sentence connective 'and' by means of the set union operation. These include loss of smoothness of the output characteristic and inaccurate mapping of the fuzzy response. The most appropriate procedure for aggregating the outputs of different fuzzy rules and converting them into crisp signals is then suggested. The advantages in terms of increased transparency and mapping accuracy of the fuzzy response are demonstrated","tok_text":"action aggreg and defuzzif in mamdani-typ fuzzi system \n discuss the issu of action aggreg and defuzzif in mamdani-typ fuzzi system . the paper highlight the shortcom of defuzzif techniqu associ with the customari interpret of the sentenc connect ' and ' by mean of the set union oper . these includ loss of smooth of the output characterist and inaccur map of the fuzzi respons . the most appropri procedur for aggreg the output of differ fuzzi rule and convert them into crisp signal is then suggest . 
the advantag in term of increas transpar and map accuraci of the fuzzi respons are demonstr","ordered_present_kp":[0,18,30,231,270,440,365,473,536,549],"keyphrases":["action aggregation","defuzzification","Mamdani-type fuzzy systems","sentence connective","set union operation","fuzzy response","fuzzy rules","crisp signals","transparency","mapping accuracy"],"prmu":["P","P","P","P","P","P","P","P","P","P"]} {"id":"842","title":"The incredible shrinking pipeline","abstract":"We look at the harsh facts concerning the percentage of degrees awarded in CS to women. We study the trend of degrees awarded in CS since 1980, and compare the trend in CS to other science and engineering disciplines. We consider the relationship between the percentage of degrees awarded to women by a CS department and the college the CS department is within. We find that CS departments in engineering colleges graduate, on average, proportionately fewer women than CS departments in non-engineering colleges. We request that the community respond to the facts and speculations presented in this article","tok_text":"the incred shrink pipelin \n we look at the harsh fact concern the percentag of degre award in cs to women . we studi the trend of degre award in cs sinc 1980 , and compar the trend in cs to other scienc and engin disciplin . we consid the relationship between the percentag of degre award to women by a cs depart and the colleg the cs depart is within . we find that cs depart in engin colleg graduat , on averag , proportion fewer women than cs depart in non-engin colleg . 
we request that the commun respond to the fact and specul present in thi articl","ordered_present_kp":[100,196,207],"keyphrases":["women","science","engineering","pipeline shrinkage problem","computer science degrees"],"prmu":["P","P","P","M","M"]} {"id":"807","title":"Integrated optical metrology controls post etch CDs","abstract":"Control of the transistor gate critical dimension (CD) on the order of a few nanometers is a top priority in many advanced IC fabs. Each nanometer deviation from the target gate length translates directly into the operational speed of these devices. However, using in-line process control by linking the lithography and etch tools can improve CD performance beyond what each individual tool can achieve. The integration of optical CD metrology tools to etch mainframes can result in excellent etcher stability and better control of post-etch CDs","tok_text":"integr optic metrolog control post etch cd \n control of the transistor gate critic dimens ( cd ) on the order of a few nanomet is a top prioriti in mani advanc ic fab . each nanomet deviat from the target gate length translat directli into the oper speed of these devic . howev , use in-lin process control by link the lithographi and etch tool can improv cd perform beyond what each individu tool can achiev . 
the integr of optic cd metrolog tool to etch mainfram can result in excel etcher stabil and better control of post-etch cd","ordered_present_kp":[0,60,160,198,244,425,451,485,284,356],"keyphrases":["integrated optical metrology","transistor gate critical dimension","IC fabs","target gate length","operational speed","in-line process control","CD performance","optical CD metrology tools","etch mainframes","etcher stability","post etch CD control","lithography tools","photolithography"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","R","U"]} {"id":"1411","title":"Speedera: Web without the wait","abstract":"There's no greater testament to the utility of the Internet than the fact that hundreds of millions of people worldwide are willing to wait for Web pages as they build incrementally on screen. But while users may put up with the \"World Wide Wait,\" they definitely don't like it. That's where Content Delivery Networks come in. CDNs can't turn a footpath into a freeway, but they can help data in transit take advantage of shortcuts and steer clear of traffic jams. And while enhancing the responsiveness of Web interaction, CDNs also enhance the prospects of their clients, who need engaged visitors to keep their Web-based business models afloat. \"Our mission is to improve the quality of the Internet experience for end-users,\" says Gordon Smith, vice president of marketing at Speedera Networks in Santa Clara, California, \"and to enable Web-site operators to provide better delivery quality, performance, scalability, and security through an outsourced service model that slashes IT costs.\"","tok_text":"speedera : web without the wait \n there 's no greater testament to the util of the internet than the fact that hundr of million of peopl worldwid are will to wait for web page as they build increment on screen . but while user may put up with the \" world wide wait , \" they definit do n't like it . that 's where content deliveri network come in . 
cdn ca n't turn a footpath into a freeway , but they can help data in transit take advantag of shortcut and steer clear of traffic jam . and while enhanc the respons of web interact , cdn also enhanc the prospect of their client , who need engag visitor to keep their web-bas busi model afloat . \" our mission is to improv the qualiti of the internet experi for end-us , \" say gordon smith , vice presid of market at speedera network in santa clara , california , \" and to enabl web-sit oper to provid better deliveri qualiti , perform , scalabl , and secur through an outsourc servic model that slash it cost . \"","ordered_present_kp":[313,827,857,886,900,917,517,616,690],"keyphrases":["Content Delivery Networks","Web interaction","Web-based business models","Internet experience","Web-site operators","delivery quality","scalability","security","outsourced service model","World Wide Web"],"prmu":["P","P","P","P","P","P","P","P","P","R"]} {"id":"1085","title":"A variable-stepsize variable-order multistep method for the integration of perturbed linear problems","abstract":"G. Scheifele (1971) wrote the solution of a perturbed oscillator as an expansion in terms of a new set of functions, which extends the monomials in the Taylor series of the solution. Recently, P. Martin and J.M. Ferrandiz (1997) constructed a multistep code based on the Scheifele technique, and it was generalized by D.J. Lopez and P. Martin (1998) for perturbed linear problems. However, the remarked codes are constant steplength methods, and efficient integrators must be able to change the steplength. In this paper we extend the ideas of F.T. Krogh (1974) from Adams methods to the algorithm proposed by Lopez and Martin, and we show the advantages of the new code in perturbed problems","tok_text":"a variable-steps variable-ord multistep method for the integr of perturb linear problem \n g. 
scheifel ( 1971 ) wrote the solut of a perturb oscil as an expans in term of a new set of function , which extend the monomi in the taylor seri of the solut . recent , p. martin and j.m. ferrandiz ( 1997 ) construct a multistep code base on the scheifel techniqu , and it wa gener by d.j. lopez and p. martin ( 1998 ) for perturb linear problem . howev , the remark code are constant steplength method , and effici integr must be abl to chang the steplength . in thi paper we extend the idea of f.t. krogh ( 1974 ) from adam method to the algorithm propos by lopez and martin , and we show the advantag of the new code in perturb problem","ordered_present_kp":[2,132,211,225,311,468,613],"keyphrases":["variable-stepsize variable-order multistep method","perturbed oscillator","monomials","Taylor series","multistep code","constant steplength methods","Adams methods","perturbed linear problems integration"],"prmu":["P","P","P","P","P","P","P","R"]} {"id":"1379","title":"Web-based intelligent helpdesk-support environment","abstract":"With the advent of Internet technology, it is now feasible to provide effective and efficient helpdesk service over the global Internet to meet customers' requirements and satisfaction. In this research, we have designed and developed a Web-based intelligent helpdesk-support environment, WebHotLine, to support the customer service centre of a large multinational corporation in the electronics industry. The paper describes the basic architecture of the environment that supports the major functions of Web-based fault information retrieval, online multilingual translation capability, different operating modes of video-conferencing for enhanced support and direct intelligent fault diagnosis by customers or customer support engineers. 
As a result, WebHotLine helps to save costs by eliminating expensive overseas telephone charges and by reducing machine down time and the number of on-site visits by service engineers, compared with a traditional helpdesk environment","tok_text":"web-bas intellig helpdesk-support environ \n with the advent of internet technolog , it is now feasibl to provid effect and effici helpdesk servic over the global internet to meet custom ' requir and satisfact . in thi research , we have design and develop a web-bas intellig helpdesk-support environ , webhotlin , to support the custom servic centr of a larg multin corpor in the electron industri . the paper describ the basic architectur of the environ that support the major function of web-bas fault inform retriev , onlin multilingu translat capabl , differ oper mode of video-conferenc for enhanc support and direct intellig fault diagnosi by custom or custom support engin . as a result , webhotlin help to save cost in elimin the expens oversea telephon charg , reduct in machin down time and number of on-sit visit by servic engin as in tradit helpdesk environ","ordered_present_kp":[0,63,302,329,490,521],"keyphrases":["Web-based intelligent helpdesk-support environment","Internet technology","WebHotLine","customer service centre","Web-based fault information retrieval","online multilingual translation capability","videoconferencing"],"prmu":["P","P","P","P","P","P","U"]} {"id":"769","title":"Permission grids: practical, error-bounded simplification","abstract":"We introduce the permission grid, a spatial occupancy grid which can be used to guide almost any standard polygonal surface simplification algorithm into generating an approximation with a guaranteed geometric error bound. In particular, all points on the approximation are guaranteed to be within some user-specified distance from the original surface. 
Such bounds are notably absent from many current simplification methods, and are becoming increasingly important for applications in scientific computing and adaptive level of detail control. Conceptually simple, the permission grid defines a volume in which the approximation must lie, and does not permit the underlying simplification algorithm to generate approximations outside the volume. The permission grid makes three important, practical improvements over current error-bounded simplification methods. First, it works on arbitrary triangular models, handling all manners of mesh degeneracies gracefully. Further, the error tolerance may be easily expanded as simplification proceeds, allowing the construction of an error-bounded level of detail hierarchy with vertex correspondences among all levels of detail. And finally, the permission grid has a representation complexity independent of the size of the input model, and a small running time overhead, making it more practical and efficient than current methods with similar guarantees","tok_text":"permiss grid : practic , error-bound simplif \n we introduc the permiss grid , a spatial occup grid which can be use to guid almost ani standard polygon surfac simplif algorithm into gener an approxim with a guarante geometr error bound . in particular , all point on the approxim are guarante to be within some user-specifi distanc from the origin surfac . such bound are notabl absent from mani current simplif method , and are becom increasingli import for applic in scientif comput and adapt level of detail control . conceptu simpl , the permiss grid defin a volum in which the approxim must lie , and doe not permit the underli simplif algorithm to gener approxim outsid the volum . the permiss grid make three import , practic improv over current error-bound simplif method . first , it work on arbitrari triangular model , handl all manner of mesh degeneraci grace . 
further , the error toler may be easili expand as simplif proce , allow the construct of an error-bound level of detail hierarchi with vertex correspond among all level of detail . and final , the permiss grid ha a represent complex independ of the size of the input model , and a small run time overhead , make it more practic and effici than current method with similar guarante","ordered_present_kp":[0,80,144,207,191,25,311,469,489,801,850,888,1009,1089,1161],"keyphrases":["permission grid","error-bounded simplification","spatial occupancy grid","polygonal surface simplification algorithm","approximation","guaranteed geometric error bound","user-specified distance","scientific computing","adaptive level of detail control","arbitrary triangular models","mesh degeneracies","error tolerance","vertex correspondences","representation complexity","running time overhead"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"551","title":"Access privilege management in protection systems","abstract":"We consider the problem of managing access privileges on protected objects. We associate one or more locks with each object, one lock for each access right defined by the object type. Possession of an access right on a given object is certified by possession of a key for this object, if this key matches one of the object locks. We introduce a number of variants to this basic key-lock technique. Polymorphic access rights make it possible to decrease the number of keys required to certify possession of complex access privileges that are defined in terms of several access rights. Multiple locks on the same access right allow us to exercise forms of selective revocation of access privileges. A lock conversion function can be used to reduce the number of locks associated with any given object to a single lock. 
The extent of the results obtained is evaluated in relation to alternative methodologies for access privilege management","tok_text":"access privileg manag in protect system \n we consid the problem of manag access privileg on protect object . we associ one or more lock with each object , one lock for each access right defin by the object type . possess of an access right on a given object is certifi by possess of a key for thi object , if thi key match one of the object lock . we introduc a number of variant to thi basic key-lock techniqu . polymorph access right make it possibl to decreas the number of key requir to certifi possess of complex access privileg that are defin in term of sever access right . multipl lock on the same access right allow us to exercis form of select revoc of access privileg . a lock convers function can be use to reduc the number of lock associ with ani given object to a singl lock . the extent of the result obtain is evalu in relat to altern methodolog for access privileg manag","ordered_present_kp":[0,25,92,131,393,413,647,683],"keyphrases":["access privilege management","protection systems","protected objects","locks","key-lock technique","polymorphic access rights","selective revocation","lock conversion function","complex access privilege possession certification"],"prmu":["P","P","P","P","P","P","P","P","M"]} {"id":"986","title":"Wavelet-based level-of-detail representation of 3D objects","abstract":"In this paper, we propose a 3D object LOD (Level of Detail) modeling system that constructs a mesh from range images and generates the mesh of various LOD using the wavelet transform. In the initial mesh generation, we use the marching cube algorithm. We modify the original algorithm to apply it to construct the mesh from multiple range images efficiently. To get the base mesh we use the decimation algorithm which simplifies a mesh with preserving the topology. 
Finally, when reconstructing new mesh which is similar to initial mesh we calculate the wavelet coefficients by using the wavelet transform. We solve the critical problem of wavelet-based methods - the surface crease problem - by using the mesh simplification as the base mesh generation method","tok_text":"wavelet-bas level-of-detail represent of 3d object \n in thi paper , we propos a 3d object lod ( level of detail ) model system that construct a mesh from rang imag and gener the mesh of variou lod use the wavelet transform . in the initi mesh gener , we use the march cube algorithm . we modifi the origin algorithm to appli it to construct the mesh from multipl rang imag effici . to get the base mesh we use the decim algorithm which simplifi a mesh with preserv the topolog . final , when reconstruct new mesh which is similar to initi mesh we calcul the wavelet coeffici by use the wavelet transform . we solv the critic problem of wavelet-bas method - the surfac creas problem - by use the mesh simplif as the base mesh gener method","ordered_present_kp":[0,154,205,262,393,414,558,618,661,695],"keyphrases":["wavelet-based level-of-detail representation","range images","wavelet transform","marching cube algorithm","base mesh","decimation algorithm","wavelet coefficients","critical problem","surface crease problem","mesh simplification","3D object level of detail modeling system","hierarchy transformation"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","M"]} {"id":"1141","title":"Reproducibility of mammary gland structure during repeat setups in a supine position","abstract":"Purpose: In breast conserving therapy, complete excision of the tumor with an acceptable cosmetic outcome depends on accurate localization in terms of both the position of the lesion and its extent. 
We hypothesize that preoperative contrast-enhanced magnetic resonance (MR) imaging of the patient in a supine position may be used for accurate tumor localization and marking of its extent immediately prior to surgery. Our aims in this study are to assess the reproducibility of mammary gland structure during repeat setups in a supine position, to evaluate the effect of a breast immobilization device, and to derive reproducibility margins that take internal tissue shifts into account occurring between repeat setups. Materials and Methods: The reproducibility of mammary gland structure during repeat setups in a supine position is estimated by quantification of tissue shifts in the breasts of healthy volunteers between repeat MR setups. For each volunteer, fiducials are identified and registered with their counter locations in corresponding MR volumes. The difference in position denotes the shift of breast tissue. The dependence on breast volume and the part of the breast, as well as the effect of a breast immobilization cast are studied. Results: The tissue shifts are small with a mean standard deviation on the order of 1.5 mm, being slightly larger in large breasts (V>1000 cm^3), and in the posterior part (toward the pectoral muscle) of both small and large breasts. The application of a breast immobilization cast reduces the tissue shifts in large breasts. A reproducibility margin on the order of 5 mm will take the internal tissue shifts into account that occur between repeat setups. Conclusion: The results demonstrate a high reproducibility of mammary gland structure during repeat setups in a supine position","tok_text":"reproduc of mammari gland structur dure repeat setup in a supin posit \n purpos : in breast conserv therapi , complet excis of the tumor with an accept cosmet outcom depend on accur local in term of both the posit of the lesion and it extent . 
we hypothes that preoper contrast-enhanc magnet reson ( mr ) imag of the patient in a supin posit may be use for accur tumor local and mark of it extent immedi prior to surgeri . our aim in thi studi are to assess the reproduc of mammari gland structur dure repeat setup in a supin posit , to evalu the effect of a breast immobil devic , and to deriv reproduc margin that take intern tissu shift into account occur between repeat setup . materi method : the reproduc of mammari gland structur dure repeat setup in a supin posit is estim by quantif of tissu shift in the breast of healthi volunt between repeat mr setup . for each volunt fiduci are identifi and regist with their counter locat in correspond mr volum . the differ in posit denot the shift of breast tissu . the depend on breast volum and the part of the breast , as well as the effect of a breast immobil cast are studi . result : the tissu shift are small with a mean standard deviat on the order of 1.5 mm , be slightli larger in larg breast ( v>1000 cm \/ sup 3\/ ) , and in the posterior part ( toward the pector muscl ) of both small and larg breast . the applic of a breast immobil cast reduc the tissu shift in larg breast . a reproduc margin on the order of 5 mm will take the intern tissu shift into account that occur between repeat setup . 
conclus : the result demonstr a high reproduc of mammari gland structur dure repeat setup in a supin posit","ordered_present_kp":[40,58,84,356,558,594,620],"keyphrases":["repeat setups","supine position","breast conserving therapy","accurate tumor localization","breast immobilization device","reproducibility margins","internal tissue shifts","mammary gland structure reproducibility","contrast-enhanced magnetic resonance imaging","localization methods"],"prmu":["P","P","P","P","P","P","P","R","R","R"]} {"id":"1104","title":"A 3-stage pipelined architecture for multi-view images decoder","abstract":"In this paper, we proposed the architecture of the decoder which implements the multi-view images decoding algorithm. The study of the hardware structure of the multi-view image processing has not been accomplished. The proposed multi-view images decoder operates in a three stage pipelined manner and extracts the depth of the pixels of the decoded image every clock. The multi-view images decoder consists of three modules, Node selector which transfers the value of the nodes repeatedly and Depth Extractor which extracts the depth of each pixel from the four values of the nodes and Affine Transformer which generates the projecting position on the image plane from the values of the pixels and the specified viewpoint. The proposed architecture is designed and simulated by the Max+PlusII design tool and the operating frequency is 30 MHz. The image can be constructed in a real time by the decoder with the proposed architecture","tok_text":"a 3-stage pipelin architectur for multi-view imag decod \n in thi paper , we propos the architectur of the decod which implement the multi-view imag decod algorithm . the studi of the hardwar structur of the multi-view imag process ha not been accomplish . the propos multi-view imag decod oper in a three stage pipelin manner and extract the depth of the pixel of the decod imag everi clock . 
the multi-view imag decod consist of three modul , node selector which transfer the valu of the node repeatedli and depth extractor which extract the depth of each pixel from the four valu of the node and affin transform which gener the project posit on the imag plane from the valu of the pixel and the specifi viewpoint . the propos architectur is design and simul by the max+plusii design tool and the oper frequenc is 30 mhz . the imag can be construct in a real time by the decod with the propos architectur","ordered_present_kp":[34,183,444,509,598,705,767,798,815],"keyphrases":["multi-view images decoder","hardware structure","node selector","depth extractor","affine transformer","viewpoint","Max+PlusII design tool","operating frequency","30 MHz","three-stage pipelined architecture","pixel depth"],"prmu":["P","P","P","P","P","P","P","P","P","M","R"]} {"id":"97","title":"Philadelphia stock exchange taps TimesTen for database technology","abstract":"PHLX rolls out Equity Options AutoQuote System to traders as the first application to leverage its enhanced data architecture","tok_text":"philadelphia stock exchang tap timesten for databas technolog \n phlx roll out equiti option autoquot system to trader as the first applic to leverag it enhanc data architectur","ordered_present_kp":[0,31,78,159],"keyphrases":["Philadelphia stock exchange","TimesTen","Equity Options AutoQuote System","data architecture"],"prmu":["P","P","P","P"]} {"id":"650","title":"Molecular descriptor selection combining genetic algorithms and fuzzy logic: application to database mining procedures","abstract":"A new algorithm, devoted to molecular descriptor selection in the context of data mining problems, has been developed. This algorithm is based on the concepts of genetic algorithms (GA) for descriptor hyperspace exploration and combined with a stepwise approach to get local convergence. Its selection power was evaluated by a fitness function derived from a fuzzy clustering method. 
Different training and test sets were randomly generated at each GA generation. The fitness score was derived by combining the scores of the training and test sets. The ability of the proposed algorithm to select relevant subsets of descriptors was tested on two data sets. The first one, an academic example, corresponded to the artificial problem of Bullseye, the second was a real data set including 114 olfactory compounds divided into three odor categories. In both cases, the proposed method allowed to improve the separation between the different data set classes","tok_text":"molecular descriptor select combin genet algorithm and fuzzi logic : applic to databas mine procedur \n a new algorithm , devot to molecular descriptor select in the context of data mine problem , ha been develop . thi algorithm is base on the concept of genet algorithm ( ga ) for descriptor hyperspac explor and combin with a stepwis approach to get local converg . it select power wa evalu by a fit function deriv from a fuzzi cluster method . differ train and test set were randomli gener at each ga gener . the fit score wa deriv by combin the score of the train and test set . the abil of the propos algorithm to select relev subset of descriptor wa test on two data set . the first one , an academ exampl , correspond to the artifici problem of bullsey , the second wa a real data set includ 114 olfactori compound divid into three odor categori . 
in both case , the propos method allow to improv the separ between the differ data set class","ordered_present_kp":[0,79,176,35,423,55,281,351,327,397,463,515,751,802,838],"keyphrases":["molecular descriptor selection","genetic algorithms","fuzzy logic","database mining","data mining","descriptor hyperspace exploration","stepwise approach","local convergence","fitness function","fuzzy clustering method","test sets","fitness score","Bullseye","olfactory compounds","odor categories","training sets"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"615","title":"An intelligent tutoring system for a power plant simulator","abstract":"In this paper, an intelligent tutoring system (ITS) is proposed for a power plant simulator. With a well designed ITS, the need for an instructor is minimized and the operator may readily and efficiently take, in real-time, the control of simulator with appropriate messages he(she) gets from the tutoring system. Using SIMULINK and based on object oriented programming (OOP) and C programming language, a fossil-fuelled power plant simulator with an ITS is proposed. Promising results are demonstrated for a typical power plant","tok_text":"an intellig tutor system for a power plant simul \n in thi paper , an intellig tutor system ( it ) is propos for a power plant simul . with a well design it , the need for an instructor is minim and the oper may readili and effici take , in real-tim , the control of simul with appropri messag he(sh ) get from the tutor system . use simulink and base on object orient program ( oop ) and c program languag , a fossil-fuel power plant simul with an it is propos . 
promis result are demonstr for a typic power plant","ordered_present_kp":[410,3,333,354,388],"keyphrases":["intelligent tutoring system","SIMULINK","object oriented programming","C programming language","fossil-fuelled power plant simulator","control simulation","CAI"],"prmu":["P","P","P","P","P","R","U"]} {"id":"1240","title":"Implementation and evaluation of HPF\/SX V2","abstract":"We are developing HPF\/SX V2, a High Performance Fortran (HPF) compiler for vector parallel machines. It provides some unique extensions as well as the features of HPF 2.0 and HPF\/JA. In particular, this paper describes four of them: (1) the ON directive of HPF 2.0; (2) the REFLECT and LOCAL directives of HPF\/JA; (3) vectorization directives; and (4) automatic parallelization. We evaluate these features through some benchmark programs on NEC SX-5. The results show that each of them achieved a 5-8 times speedup in 8-CPU parallel execution and the four features are useful for vector parallel execution. We also evaluate the overall performance of HPF\/SX V2 by using over 30 well-known benchmark programs from HPFBench, APR Benchmarks, GENESIS Benchmarks, and NAS Parallel Benchmarks. About half of the programs showed good performance, while the other half suggest weakness of the compiler, especially on its runtimes. It is necessary to improve them to put the compiler to practical use","tok_text":"implement and evalu of hpf \/ sx v2 \n we are develop hpf \/ sx v2 , a high perform fortran ( hpf ) compil for vector parallel machin . it provid some uniqu extens as well as the featur of hpf 2.0 and hpf \/ ja . in particular , thi paper describ four of them : ( 1 ) the on direct of hpf 2.0 ; ( 2 ) the reflect and local direct of hpf \/ ja ; ( 3 ) vector direct ; and ( 4 ) automat parallel . we evalu these featur through some benchmark program on nec sx-5 . 
the result show that each of them achiev a 5 - 8 time speedup in 8-cpu parallel execut and the four featur are use for vector parallel execut . we also evalu the overal perform of hpf \/ sx v2 by use over 30 well-known benchmark program from hpfbench , apr benchmark , genesi benchmark , and na parallel benchmark . about half of the program show good perform , while the other half suggest weak of the compil , especi on it runtim . it is necessari to improv them to put the compil to practic use","ordered_present_kp":[23,108,426,115,97],"keyphrases":["HPF\/SX V2","compiler","vector parallel machines","parallelization","benchmark","High Performance Fortran compiler"],"prmu":["P","P","P","P","P","R"]} {"id":"1205","title":"HPCVIEW: a tool for top-down analysis of node performance","abstract":"It is increasingly difficult for complex scientific programs to attain a significant fraction of peak performance on systems that are based on microprocessors with substantial instruction-level parallelism and deep memory hierarchies. Despite this trend, performance analysis and tuning tools are still not used regularly by algorithm and application designers. To a large extent, existing performance tools fail to meet many user needs and are cumbersome to use. To address these issues, we developed HPCVIEW - a toolkit for combining multiple sets of program profile data, correlating the data with source code, and generating a database that can be analyzed anywhere with a commodity Web browser. We argue that HPCVIEW addresses many of the issues that have limited the usability and the utility of most existing tools. We originally built HPCVIEW to facilitate our own work on data layout and optimizing compilers. 
Now, in addition to daily use within our group, HPCVIEW is being used by several code development teams in DoD and DoE laboratories as well as at NCSA","tok_text":"hpcview : a tool for top-down analysi of node perform \n it is increasingli difficult for complex scientif program to attain a signific fraction of peak perform on system that are base on microprocessor with substanti instruction-level parallel and deep memori hierarchi . despit thi trend , perform analysi and tune tool are still not use regularli by algorithm and applic design . to a larg extent , exist perform tool fail to meet mani user need and are cumbersom to use . to address these issu , we develop hpcview - a toolkit for combin multipl set of program profil data , correl the data with sourc code , and gener a databas that can be analyz anywher with a commod web browser . we argu that hpcview address mani of the issu that have limit the usabl and the util of most exist tool . we origin built hpcview to facilit our own work on data layout and optim compil . now , in addit to daili use within our group , hpcview is be use by sever code develop team in dod and doe laboratori as well as at ncsa","ordered_present_kp":[0,21,41,89,147,217,248,291,599,666,844,860],"keyphrases":["HPCView","top-down analysis","node performance","complex scientific programs","peak performance","instruction-level parallelism","deep memory hierarchies","performance analysis","source code","commodity Web browser","data layout","optimizing compilers","software tools","binary analysis"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","M","M"]} {"id":"138","title":"Optical actuation of a bistable MEMS","abstract":"This paper presents an optical actuation scheme for MEMS devices based on the well-established fact that light possesses momentum, and hence, imparts a force equal to 2 W\/c when reflected by a surface. Here, W is the total power of the reflected light, and c is the speed of light. 
Radiation pressure, as it is known, is nearly insignificant for most macroscale applications, but it can be quite significant for MEMS devices. In addition, light actuation offers a new paradigm. First, intersecting light beams do not interfere, in contrast to electrical conductors, which short when they come into contact. Second, light can operate in high temperature and high radiation environments far outside the capability of solid state electronic components. This actuation method is demonstrated, both in air and in vacuum, by switching the state of a bistable MEMS device. The associated heat transfer model is also presented","tok_text":"optic actuat of a bistabl mem \n thi paper present an optic actuat scheme for mem devic base on the well-establish fact that light possess momentum , and henc , impart a forc equal to 2 w \/ c when reflect by a surfac . here , w is the total power of the reflect light , and c is the speed of light . radiat pressur , as it is known , is nearli insignific for most macroscal applic , but it can be quit signific for mem devic . in addit , light actuat offer a new paradigm . first , intersect light beam do not interfer , in contrast to electr conductor , which short when they come into contact . second , light can oper in high temperatur and high radiat environ far outsid the capabl of solid state electron compon . thi actuat method is demonstr , both in air and in vacuum , by switch the state of a bistabl mem devic . 
the associ heat transfer model is also present","ordered_present_kp":[18,53,299,77,481,643,834],"keyphrases":["bistable MEMS","optical actuation scheme","MEMS devices","radiation pressure","intersecting light beams","high radiation environments","heat transfer model","high temperature environments"],"prmu":["P","P","P","P","P","P","P","R"]} {"id":"1318","title":"Network intrusion and fault detection: a statistical anomaly approach","abstract":"With the advent and explosive growth of the global Internet and electronic commerce environments, adaptive\/automatic network\/service intrusion and anomaly detection in wide area data networks and e-commerce infrastructures is fast gaining critical research and practical importance. We present and demonstrate the use of a general-purpose hierarchical multitier multiwindow statistical anomaly detection technology and system that operates automatically, adaptively, and proactively, and can be applied to various networking technologies, including both wired and wireless ad hoc networks. Our method uses statistical models and multivariate classifiers to detect anomalous network conditions. Some numerical results are also presented that demonstrate that our proposed methodology can reliably detect attacks with traffic anomaly intensity as low as 3-5 percent of the typical background traffic intensity, thus promising to generate an effective early warning","tok_text":"network intrus and fault detect : a statist anomali approach \n with the advent and explos growth of the global internet and electron commerc environ , adapt \/ automat network \/ servic intrus and anomali detect in wide area data network and e-commerc infrastructur is fast gain critic research and practic import . 
we present and demonstr the use of a general-purpos hierarch multiti multiwindow statist anomali detect technolog and system that oper automat , adapt , and proactiv , and can be appli to variou network technolog , includ both wire and wireless ad hoc network . our method use statist model and multivari classifi to detect anomal network condit . some numer result are also present that demonstr that our propos methodolog can reliabl detect attack with traffic anomali intens as low as 3 - 5 percent of the typic background traffic intens , thu promis to gener an effect earli warn","ordered_present_kp":[0,19,111,124,151,213,240,550,591,609,769,829],"keyphrases":["network intrusion","fault detection","Internet","electronic commerce environment","adaptive\/automatic network\/service intrusion","wide area data networks","e-commerce infrastructure","wireless ad hoc networks","statistical models","multivariate classifiers","traffic anomaly intensity","background traffic intensity","computer network attacks","denial of service","early warning systems","neural network classification","ad hoc wireless experiments","backpropagation","perceptron-back propagation hybrid","hierarchical multitier statistical anomaly detection","multiwindow anomaly detection","wired ad hoc networks"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","M","M","R","M","M","U","U","R","R","R"]} {"id":"708","title":"Sufficient conditions on nonemptiness and boundedness of the solution set of the P\/sub 0\/ function nonlinear complementarity problem","abstract":"The P\/sub 0\/ function nonlinear complementarity, problem (NCP) has attracted a lot of attention among researchers. Various assumed conditions, which ensure that the NCP has a solution have been proposed. In this paper, by using the notion of an exceptional family of elements we develop a sufficient condition which ensures that the solution set of the P\/sub 0\/ function NCP is nonempty and bounded. 
In particular, we prove that many existing assumed conditions imply this sufficient condition. Thus, these conditions imply that the solution set of the P\/sub 0\/ function NCP is nonempty and bounded. In addition, we also prove directly that a few existence conditions imply that the solution set of the P\/sub 0\/ function NCP is bounded","tok_text":"suffici condit on nonempti and bounded of the solut set of the p \/ sub 0\/ function nonlinear complementar problem \n the p \/ sub 0\/ function nonlinear complementar , problem ( ncp ) ha attract a lot of attent among research . variou assum condit , which ensur that the ncp ha a solut have been propos . in thi paper , by use the notion of an except famili of element we develop a suffici condit which ensur that the solut set of the p \/ sub 0\/ function ncp is nonempti and bound . in particular , we prove that mani exist assum condit impli thi suffici condit . thu , these condit impli that the solut set of the p \/ sub 0\/ function ncp is nonempti and bound . in addit , we also prove directli that a few exist condit impli that the solut set of the p \/ sub 0\/ function ncp is bound","ordered_present_kp":[0,18,31,46,63],"keyphrases":["sufficient conditions","nonemptiness","boundedness","solution set","P\/sub 0\/ function nonlinear complementarity problem"],"prmu":["P","P","P","P","P"]} {"id":"866","title":"Adjoint-based optimization of steady suction for disturbance control in incompressible flows","abstract":"The optimal distribution of steady suction needed to control the growth of single or multiple disturbances in quasi-three-dimensional incompressible boundary layers on a flat plate is investigated. The evolution of disturbances is analysed in the framework of the parabolized stability equations (PSE). A gradient-based optimization procedure is used and the gradients are evaluated using the adjoint of the parabolized stability equations (APSE) and the adjoint of the boundary layer equations (ABLE). 
The accuracy of the gradient is increased by introducing a stabilization procedure for the PSE. Results show that a suction peak appears in the upstream part of the suction region for optimal control of Tollmien-Schlichting (T-S) waves, steady streamwise streaks in a two-dimensional boundary layer and oblique waves in a quasi-three-dimensional boundary layer subject to an adverse pressure gradient. The mean flow modifications due to suction are shown to have a stabilizing effect similar to that of a favourable pressure gradient. It is also shown that the optimal suction distribution for the disturbance of interest reduces the growth rate of other perturbations. Results for control of a steady cross-flow mode in a three-dimensional boundary layer subject to a favourable pressure gradient show that not even large amounts of suction can completely stabilize the disturbance","tok_text":"adjoint-bas optim of steadi suction for disturb control in incompress flow \n the optim distribut of steadi suction need to control the growth of singl or multipl disturb in quasi-three-dimension incompress boundari layer on a flat plate is investig . the evolut of disturb is analys in the framework of the parabol stabil equat ( pse ) . a gradient-bas optim procedur is use and the gradient are evalu use the adjoint of the parabol stabil equat ( aps ) and the adjoint of the boundari layer equat ( abl ) . the accuraci of the gradient is increas by introduc a stabil procedur for the pse . result show that a suction peak appear in the upstream part of the suction region for optim control of tollmien-schlicht ( t- ) wave , steadi streamwis streak in a two-dimension boundari layer and obliqu wave in a quasi-three-dimension boundari layer subject to an advers pressur gradient . the mean flow modif due to suction are shown to have a stabil effect similar to that of a favour pressur gradient . 
it is also shown that the optim suction distribut for the disturb of interest reduc the growth rate of other perturb . result for control of a steadi cross-flow mode in a three-dimension boundari layer subject to a favour pressur gradient show that not even larg amount of suction can complet stabil the disturb","ordered_present_kp":[0,21,40,59,226,307,340,562,727,789,857,887,1142],"keyphrases":["adjoint-based optimization","steady suction","disturbance control","incompressible flows","flat plate","parabolized stability equations","gradient-based optimization procedure","stabilization procedure","steady streamwise streaks","oblique waves","adverse pressure gradient","mean flow","steady cross-flow mode","quasithree-dimensional incompressible boundary layers","Tollmien-Schlichting waves","laminar-turbulent transition"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","M","R","U"]} {"id":"823","title":"Estimation of thermal coefficients of magneto-optical media","abstract":"Previously we described a method for estimating the thermal conductivity of magneto-optic recording media. The method relies on identifying the laser power that brings the maximum temperature of the TbFeCo layer to as high as the Curie temperature. We extensively use a similar method to estimate the heat capacity of a dielectric layer, a TbFeCo layer, and an aluminum alloy layer of magneto-optic recording media. Measurements are conducted on static disks with a beam of light focused on a TbFeCo layer. The method has the advantage of thermal diffusion depending on a multilayer structure and irradiation time","tok_text":"estim of thermal coeffici of magneto-opt media \n previous we describ a method for estim the thermal conduct of magneto-opt record media . the method reli on identifi the laser power that bring the maximum temperatur of the tbfeco layer to as high as the curi temperatur . 
we extens use a similar method to estim the heat capac of a dielectr layer , a tbfeco layer , and an aluminum alloy layer of magneto-opt record media . measur are conduct on static disk with a beam of light focus on a tbfeco layer . the method ha the advantag of thermal diffus depend on a multilay structur and irradi time","ordered_present_kp":[9,29,92,170,197,223,254,332,316,373,111,446,473,535,562,584,223],"keyphrases":["thermal coefficients","magneto-optical media","thermal conductivity","magneto-optic recording media","laser power","maximum temperature","TbFeCo layer","TbFeCo","Curie temperature","heat capacity","dielectric layer","aluminum alloy layer","static disks","light focusing","thermal diffusion","multilayer structure","irradiation time"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1435","title":"Experimental investigations on monitoring and control of induction heating process for semi-solid alloys using the heating coil as sensor","abstract":"A method of monitoring the state of metal alloys during induction heating and control of the heating process utilizing the heating coil itself as a sensor is proposed, and its usefulness and effectiveness were experimentally investigated using aluminium A357 billets for the semi-solid metal (SSM) casting processes. The impedance of the coil containing the billet was continuously measured by the proposed method in the temperature range between room temperature and 700 degrees C. It was found that the reactance component of the impedance varied distinctively according to the billet state and could clearly monitor the deformation of the billet, while the resistance component increased with temperature, reflecting the variation of the resistivity of the billet which has strong correlation to the solid\/liquid fraction of the billets. 
The measured impedance is very sensitive to the billet states such as temperature, deformation and solid\/liquid fraction and could be used as a parameter to monitor and control the heating process for SSMs","tok_text":"experiment investig on monitor and control of induct heat process for semi-solid alloy use the heat coil as sensor \n a method of monitor the state of metal alloy dure induct heat and control of the heat process util the heat coil itself as a sensor is propos , and it use and effect were experiment investig use aluminium a357 billet for the semi-solid metal ( ssm ) cast process . the imped of the coil contain the billet wa continu measur by the propos method in the temperatur rang between room temperatur and 700 degre c. it wa found that the reactanc compon of the imped vari distinct accord to the billet state and could clearli monitor the deform of the billet , while the resist compon increas with temperatur , reflect the variat of the resist of the billet which ha strong correl to the solid \/ liquid fraction of the billet . the measur imped is veri sensit to the billet state such as temperatur , deform and solid \/ liquid fraction and could be use as a paramet to monitor and control the heat process for ssm","ordered_present_kp":[46,547,604,680,797],"keyphrases":["induction heating process","reactance component","billet state","resistance component","solid\/liquid fraction","process monitoring","process control","semisolid alloys","semisolid metal casting","heating coil sensor","coil impedance","billet deformation","resistivity variation","solenoid coil","20 to 700 C"],"prmu":["P","P","P","P","P","R","R","M","M","R","R","R","R","M","M"]} {"id":"1019","title":"Optical setup and analysis of disk-type photopolymer high-density holographic storage","abstract":"A relatively simple scheme for disk-type photopolymer high-density holographic storage based on angular and spatial multiplexing is described. 
The effects of the optical setup on the recording capacity and density are studied. Calculations and analysis show that this scheme is more effective than a scheme based on the spatioangular multiplexing for disk-type photopolymer high-density holographic storage, which has a limited medium thickness. Also an optimal beam recording angle exists to achieve maximum recording capacity and density","tok_text":"optic setup and analysi of disk-typ photopolym high-dens holograph storag \n a rel simpl scheme for disk-typ photopolym high-dens holograph storag base on angular and spatial multiplex is describ . the effect of the optic setup on the record capac and densiti are studi . calcul and analysi show that thi scheme is more effect than a scheme base on the spatioangular multiplex for disk-typ photopolym high-dens holograph storag , which ha a limit medium thick . also an optim beam record angl exist to achiev maximum record capac and densiti","ordered_present_kp":[27,0,166,234,440,469,508],"keyphrases":["optical setup","disk-type photopolymer high-density holographic storage","spatial multiplexing","recording capacity","limited medium thickness","optimal beam recording angle","maximum recording capacity","angular multiplexing","recording density","spatio-angular multiplexing","maximum density"],"prmu":["P","P","P","P","P","P","P","R","R","M","R"]} {"id":"1024","title":"Rational systems exhibit moderate risk aversion with respect to \"gambles\" on variable-resolution compression","abstract":"In an embedded wavelet scheme for progressive transmission, a tree structure naturally defines the spatial relationship on the hierarchical pyramid. Transform coefficients over each tree correspond to a unique local spatial region of the original image, and they can be coded bit-plane by bit-plane through successive-approximation quantization. After receiving the approximate value of some coefficients, the decoder can obtain a reconstructed image. 
We show a rational system for progressive transmission that, in absence of a priori knowledge about regions of interest, chooses at any truncation time among alternative trees for further transmission in such a way as to avoid certain forms of behavioral inconsistency. We prove that some rational transmission systems might exhibit aversion to risk involving \"gambles\" on tree-dependent quality of encoding while others favor taking such risks. Based on an acceptable predictor for visual distinctness from digital imagery, we demonstrate that, without any outside knowledge, risk-prone systems as well as those with strong risk aversion appear in capable of attaining the quality of reconstructions that can be achieved with moderate risk-averse behavior","tok_text":"ration system exhibit moder risk avers with respect to \" gambl \" on variable-resolut compress \n in an embed wavelet scheme for progress transmiss , a tree structur natur defin the spatial relationship on the hierarch pyramid . transform coeffici over each tree correspond to a uniqu local spatial region of the origin imag , and they can be code bit-plan by bit-plan through successive-approxim quantiz . after receiv the approxim valu of some coeffici , the decod can obtain a reconstruct imag . we show a ration system for progress transmiss that , in absenc of a priori knowledg about region of interest , choos at ani truncat time among altern tree for further transmiss in such a way as to avoid certain form of behavior inconsist . we prove that some ration transmiss system might exhibit avers to risk involv \" gambl \" on tree-depend qualiti of encod while other favor take such risk . 
base on an accept predictor for visual distinct from digit imageri , we demonstr that , without ani outsid knowledg , risk-pron system as well as those with strong risk avers appear in capabl of attain the qualiti of reconstruct that can be achiev with moder risk-avers behavior","ordered_present_kp":[68,127,0,22,102,150,227,283,375,478,622,57,904,925,946],"keyphrases":["rational system","moderate risk aversion","gambles","variable-resolution compression","embedded wavelet scheme","progressive transmission","tree structure","transform coefficients","local spatial region","successive-approximation quantization","reconstructed image","truncation time","acceptable predictor","visual distinctness","digital imagery","hierarchical pyramid spatial relationship","behavioral inconsistency avoidance","image encoding","embedded coding","rate control optimization","decision problem","progressive transmission utility functions","information theoretic measure"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R","R","U","U","M","U"]} {"id":"1061","title":"Abacus, EFI and anti-virus","abstract":"The Extensible Firmware Interface (EFI) standard emerged as a logical step to provide flexibility and extensibility to boot sequence processes, enabling the complete abstraction of a system's BIOS interface from the system's hardware. In doing so, this provided the means of standardizing a boot-up sequence, extending device drivers and boot time applications' portability to non PC-AT-based architectures, including embedded systems like Internet appliances, TV Internet set-top boxes and 64-bit Itanium platforms","tok_text":"abacu , efi and anti-viru \n the extens firmwar interfac ( efi ) standard emerg as a logic step to provid flexibl and extens to boot sequenc process , enabl the complet abstract of a system 's bio interfac from the system 's hardwar . 
in do so , thi provid the mean of standard a boot-up sequenc , extend devic driver and boot time applic ' portabl to non pc-at-bas architectur , includ embed system like internet applianc , tv internet set-top box and 64-bit itanium platform","ordered_present_kp":[16,386],"keyphrases":["anti-virus","embedded systems","Extensible Firmware Interface standard"],"prmu":["P","P","R"]} {"id":"1325","title":"X-Rite: more than a graphic arts company","abstract":"Although it is well known as a maker of densitometers and spectrophotometers, X-Rite is active in measuring light and shape in many industries. Among them are automobile finishes, paint and home improvements, scientific instruments, optical semiconductors and even cosmetic dentistry","tok_text":"x-rite : more than a graphic art compani \n although it is well known as a maker of densitomet and spectrophotomet , x-rite is activ in measur light and shape in mani industri . among them are automobil finish , paint and home improv , scientif instrument , optic semiconductor and even cosmet dentistri","ordered_present_kp":[0,21],"keyphrases":["X-Rite","graphic arts","colour measurement"],"prmu":["P","P","M"]} {"id":"1360","title":"Automated post bonding inspection by using machine vision techniques","abstract":"Inspection plays an important role in the semiconductor industry. In this paper, we focus on the inspection task after wire bonding in packaging. The purpose of wire bonding (W\/B) is to connect the bond pads with the lead fingers. Two major types of defects are (1) bonding line missing and (2) bonding line breakage. The numbers of bonding lines and bonding balls are used as the features for defect classification. The proposed method consists of image preprocessing, orientation determination, connection detection, bonding line detection, bonding ball detection, and defect classification. The proposed method is simple and fast. 
The experimental results show that the proposed method can detect the defects effectively","tok_text":"autom post bond inspect by use machin vision techniqu \n inspect play an import role in the semiconductor industri . in thi paper , we focu on the inspect task after wire bond in packag . the purpos of wire bond ( w \/ b ) is to connect the bond pad with the lead finger . two major type of defect are ( 1 ) bond line miss and ( 2 ) bond line breakag . the number of bond line and bond ball are use as the featur for defect classif . the propos method consist of imag preprocess , orient determin , connect detect , bond line detect , bond ball detect , and defect classif . the propos method is simpl and fast . the experiment result show that the propos method can detect the defect effect","ordered_present_kp":[91,0,31,165,178,257,306,331,379,415,461,479,497,514,533],"keyphrases":["automated post bonding inspection","machine vision","semiconductor industry","wire bonding","packaging","lead fingers","bonding line missing","bonding line breakage","bonding balls","defect classification","image preprocessing","orientation determination","connection detection","bonding line detection","bonding ball detection","IC manufacturing","bond pad connection"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","U","R"]} {"id":"735","title":"IT at the heart of joined-up policing","abstract":"Police IT is to shift from application-focused to component-based technology. The change of strategy, part of the Valiant Programme, will make information held by individual forces available on a national basis","tok_text":"it at the heart of joined-up polic \n polic it is to shift from application-focus to component-bas technolog . 
the chang of strategi , part of the valiant programm , will make inform held by individu forc avail on a nation basi","ordered_present_kp":[146,37],"keyphrases":["police IT","Valiant Programme","UK"],"prmu":["P","P","U"]} {"id":"770","title":"The 3D visibility complex","abstract":"Visibility problems are central to many computer graphics applications. The most common examples include hidden-part removal for view computation, shadow boundaries, mutual visibility of objects for lighting simulation. In this paper, we present a theoretical study of 3D visibility properties for scenes of smooth convex objects. We work in the space of light rays, or more precisely, of maximal free segments. We group segments that \"see\" the same object; this defines the 3D visibility complex. The boundaries of these groups of segments correspond to the visual events of the scene (limits of shadows, disappearance of an object when the viewpoint is moved, etc.). We provide a worst case analysis of the complexity of the visibility complex of 3D scenes, as well as a probabilistic study under a simple assumption for \"normal\" scenes. We extend the visibility complex to handle temporal visibility. We give an output-sensitive construction algorithm and present applications of our approach","tok_text":"the 3d visibl complex \n visibl problem are central to mani comput graphic applic . the most common exampl includ hidden-part remov for view comput , shadow boundari , mutual visibl of object for light simul . in thi paper , we present a theoret studi of 3d visibl properti for scene of smooth convex object . we work in the space of light ray , or more precis , of maxim free segment . we group segment that \" see \" the same object ; thi defin the 3d visibl complex . the boundari of these group of segment correspond to the visual event of the scene ( limit of shadow , disappear of an object when the viewpoint is move , etc . ) . 
we provid a worst case analysi of the complex of the visibl complex of 3d scene , as well as a probabilist studi under a simpl assumpt for \" normal \" scene . we extend the visibl complex to handl tempor visibl . we give an output-sensit construct algorithm and present applic of our approach","ordered_present_kp":[4,59,113,135,149,195,286,333,365,525,728,829,856],"keyphrases":["3D visibility complex","computer graphics","hidden-part removal","view computation","shadow boundaries","lighting simulation","smooth convex objects","light rays","maximal free segments","visual events","probabilistic study","temporal visibility","output-sensitive construction algorithm","mutual object visibility","worst case complexity analysis","normal scenes"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R"]} {"id":"1408","title":"PKI: coming to an enterprise near you?","abstract":"For many years public key infrastructure (PKI) deployments were the provenance of governments and large, security-conscious corporations and financial institutions. These organizations have the financial and human resources necessary to successfully manage the complexities of a public key system. Lately however, several forces have converged to encourage a broader base of enterprises to take a closer look at PKI. These forces are discussed. PKI vendors are now demonstrating to customers how they can make essential business applications faster and more efficient by moving them to the Internet-without sacrificing security. Those applications usually include secure remote access, secure messaging, electronic document exchange, transaction validation, and network authentication. After a brief discussion of PKI basics the author reviews various products available on the market","tok_text":"pki : come to an enterpris near you ? \n for mani year public key infrastructur ( pki ) deploy were the proven of govern and larg , security-consci corpor and financi institut . 
these organ have the financi and human resourc necessari to success manag the complex of a public key system . late howev , sever forc have converg to encourag a broader base of enterpris to take a closer look at pki . these forc are discuss . pki vendor are now demonstr to custom how they can make essenti busi applic faster and more effici by move them to the internet-without sacrif secur . those applic usual includ secur remot access , secur messag , electron document exchang , transact valid , and network authent . after a brief discuss of pki basic the author review variou product avail on the market","ordered_present_kp":[0,54,421,131,598,619,634,662,683],"keyphrases":["PKI","public key infrastructure","security","PKI vendors","secure remote access","secure messaging","electronic document exchange","transaction validation","network authentication","business-critical applications","e-commerce","IPSec VPNs","Baltimore Technologies","Entrust","GeoTrust","RSA Security","VeriSign"],"prmu":["P","P","P","P","P","P","P","P","P","M","U","U","U","U","U","M","U"]} {"id":"57","title":"Speaker adaptive modeling by vocal tract normalization","abstract":"This paper presents methods for speaker adaptive modeling using vocal tract normalization (VTN) along with experimental tests on three databases. We propose a new training method for VTN: By using single-density acoustic models per HMM state for selecting the scale factor of the frequency axis, we avoid the problem that a mixture-density tends to learn the scale factors of the training speakers and thus cannot be used for selecting the scale factor. We show that using single Gaussian densities for selecting the scale factor in training results in lower error rates than using mixture densities. For the recognition phase, we propose an improvement of the well-known two-pass strategy: by using a non-normalized acoustic model for the first recognition pass instead of a normalized model, lower error rates are obtained. 
In recognition tests, this method is compared with a fast variant of VTN. The two-pass strategy is an efficient method, but it is suboptimal because the scale factor and the word sequence are determined sequentially. We found that for telephone digit string recognition this suboptimality reduces the VTN gain in recognition performance by 30% relative. In summary, on the German spontaneous speech task Verbmobil, the WSJ task and the German telephone digit string corpus SieTill, the proposed methods for VTN reduce the error rates significantly","tok_text":"speaker adapt model by vocal tract normal \n thi paper present method for speaker adapt model use vocal tract normal ( vtn ) along with experiment test on three databas . we propos a new train method for vtn : by use single-dens acoust model per hmm state for select the scale factor of the frequenc axi , we avoid the problem that a mixture-dens tend to learn the scale factor of the train speaker and thu can not be use for select the scale factor . we show that use singl gaussian densiti for select the scale factor in train result in lower error rate than use mixtur densiti . for the recognit phase , we propos an improv of the well-known two-pass strategi : by use a non-norm acoust model for the first recognit pass instead of a normal model , lower error rate are obtain . in recognit test , thi method is compar with a fast variant of vtn . the two-pass strategi is an effici method , but it is suboptim becaus the scale factor and the word sequenc are determin sequenti . we found that for telephon digit string recognit thi suboptim reduc the vtn gain in recognit perform by 30 % rel . 
in summari , on the german spontan speech task verbmobil , the wsj task and the german telephon digit string corpu sietil , the propos method for vtn reduc the error rate significantli","ordered_present_kp":[0,23,160,186,216,245,384,468,522,644,945,1000,1117,1160,1177,1212],"keyphrases":["speaker adaptive modeling","vocal tract normalization","databases","training method","single-density acoustic models","HMM state","training speakers","single Gaussian densities","training results","two-pass strategy","word sequence","telephone digit string recognition","German spontaneous speech task","WSJ task","German telephone digit string corpus","SieTill","frequency scale factor","error rate reduction","nonnormalized acoustic model","Verlimobil"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","M","M","U"]} {"id":"628","title":"Rank tests of association for exchangeable paired data","abstract":"We describe two rank tests of association for paired exchangeable data motivated by the study of lifespans in twins. The pooled sample is ranked. The nonparametric test of association is based on R\/sup +\/, the sum of the smaller within-pair ranks. A second measure L\/sup +\/ is the sum of within-pair rank products. Under the null hypothesis of within-pair independence, the two test statistics are approximately normally distributed. Expressions for the exact means and variances of R\/sup +\/ and L\/sup +\/ are given. We describe the power of these two statistics under a close alternative hypothesis to that of independence. Both the R\/sup +\/ and L\/sup +\/ tests indicate nonparametric statistical evidence of positive association of longevity in identical twins and a negligible relationship between the lifespans of fraternal twins listed in the Danish twin registry. 
The statistics are also applied to the analysis of a clinical trial studying the time to failure of ventilation tubes in children with bilateral otitis media","tok_text":"rank test of associ for exchang pair data \n we describ two rank test of associ for pair exchang data motiv by the studi of lifespan in twin . the pool sampl is rank . the nonparametr test of associ is base on r \/ sup + \/ , the sum of the smaller within-pair rank . a second measur l \/ sup + \/ is the sum of within-pair rank product . under the null hypothesi of within-pair independ , the two test statist are approxim normal distribut . express for the exact mean and varianc of r \/ sup + \/ and l \/ sup + \/ are given . we describ the power of these two statist under a close altern hypothesi to that of independ . both the r \/ sup + \/ and l \/ sup + \/ test indic nonparametr statist evid of posit associ of longev in ident twin and a neglig relationship between the lifespan of fratern twin list in the danish twin registri . 
the statist are also appli to the analysi of a clinic trial studi the time to failur of ventil tube in children with bilater otiti media","ordered_present_kp":[0,13,83,146,171,246,307,344,362,393,454,663,707,717,778,803,873,943],"keyphrases":["rank tests","association","paired exchangeable data","pooled sample","nonparametric test","within-pair ranks","within-pair rank products","null hypothesis","within-pair independence","test statistics","exact means","nonparametric statistical evidence","longevity","identical twins","fraternal twins","Danish twin registry","clinical trial","bilateral otitis media","twin lifespans","exact variances","ventilation tube failure time"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R"]} {"id":"12","title":"National learning systems: a new approach on technological change in late industrializing economies and evidences from the cases of Brazil and South Korea","abstract":"The paper has two intertwined parts. The first one is a proposal for a conceptual and theoretical framework to understand technical change in late industrializing economies. The second part develops a kind of empirical test of the usefulness of that new framework by means of a comparative study of the Brazilian and South Korean cases. All the four types of macroevidences of the technical change processes of Brazil and Korea corroborate, directly or indirectly, the hypothesis of the existence of actual cases of national learning systems (NLSs) of passive and active nature, as it is shown to be the cases of Brazil and South Korea, respectively. The contrast between the two processes of technical change prove remarkable, despite both processes being essentially confined to learning. 
The concepts of passive and active NLSs show how useful they are to apprehend the diversity of those realities, and, consequently, to avoid, for instance, interpretations that misleadingly suppose (based on conventional economic theory) that those countries have a similar lack of technological dynamism","tok_text":"nation learn system : a new approach on technolog chang in late industri economi and evid from the case of brazil and south korea \n the paper ha two intertwin part . the first one is a propos for a conceptu and theoret framework to understand technic chang in late industri economi . the second part develop a kind of empir test of the use of that new framework by mean of a compar studi of the brazilian and south korean case . all the four type of macroevid of the technic chang process of brazil and korea corrobor , directli or indirectli , the hypothesi of the exist of actual case of nation learn system ( nlss ) of passiv and activ natur , as it is shown to be the case of brazil and south korea , respect . the contrast between the two process of technic chang prove remark , despit both process be essenti confin to learn . the concept of passiv and activ nlss show how use they are to apprehend the divers of those realiti , and , consequ , to avoid , for instanc , interpret that misleadingli suppos ( base on convent econom theori ) that those countri have a similar lack of technolog dynam","ordered_present_kp":[0,40,59,107,118],"keyphrases":["national learning systems","technological change","late industrializing economies","Brazil","South Korea","national innovation system"],"prmu":["P","P","P","P","P","M"]} {"id":"1238","title":"Optimization of element-by-element FEM in HPF 1.1","abstract":"In this study, Poisson's equation is numerically evaluated by the element-by-element (EBE) finite-element method in a parallel environment using HPF 1.1 (High-Performance Fortran). 
In order to achieve high parallel efficiency, the data structures have been altered to node-based data instead of mixtures of node- and element-based data, representing a node-based EBE finite-element scheme (nEBE). The parallel machine used in this study was the NEC SX-4, and experiments were performed on a single node having 32 processors sharing common memory. The HPF compiler used in the experiments is HPF\/SX Rev 2.0 released in 1997 (unofficial), which supports HPF 1.1. Models containing approximately 200 000 and 1,500,000 degrees of freedom were analyzed in order to evaluate the method. The calculation time, parallel efficiency, and memory used were compared. The performance of HPF in the conjugate gradient solver for the large model, using the NEC SX-4 compiler option-noshrunk, was about 85% that of the message passing interface","tok_text":"optim of element-by-el fem in hpf 1.1 \n in thi studi , poisson 's equat is numer evalu by the element-by-el ( ebe ) finite-el method in a parallel environ use hpf 1.1 ( high-perform fortran ) . in order to achiev high parallel effici , the data structur have been alter to node-bas data instead of mixtur of node- and element-bas data , repres a node-bas ebe finite-el scheme ( nebe ) . the parallel machin use in thi studi wa the nec sx-4 , and experi were perform on a singl node have 32 processor share common memori . the hpf compil use in the experi is hpf \/ sx rev 2.0 releas in 1997 ( unoffici ) , which support hpf 1.1 . model contain approxim 200 000 and 1,500,000 degre of freedom were analyz in order to evalu the method . the calcul time , parallel effici , and memori use were compar . 
the perform of hpf in the conjug gradient solver for the larg model , use the nec sx-4 compil option-noshrunk , wa about 85 % that of the messag pass interfac","ordered_present_kp":[526,825,937,9,30],"keyphrases":["element-by-element","HPF","HPF compiler","conjugate gradient solver","message passing","finite element method","parallel programs","Poisson equation"],"prmu":["P","P","P","P","P","M","M","R"]} {"id":"1181","title":"Dynamic neighborhood structures in parallel evolution strategies","abstract":"Parallelizing is a straightforward approach to reduce the total computation time of evolutionary algorithms. Finding an appropriate communication network within spatially structured populations for improving convergence speed and convergence probability is a difficult task. A new method that uses a dynamic communication scheme in an evolution strategy will be compared with conventional static and dynamic approaches. The communication structure is based on a so-called diffusion model approach. The links between adjacent individuals are dynamically chosen according to deterministic or probabilistic rules. Due to self-organization effects, efficient and stable communication structures are established that perform robustly and quickly on a multimodal test function","tok_text":"dynam neighborhood structur in parallel evolut strategi \n parallel is a straightforward approach to reduc the total comput time of evolutionari algorithm . find an appropri commun network within spatial structur popul for improv converg speed and converg probabl is a difficult task . a new method that use a dynam commun scheme in an evolut strategi will be compar with convent static and dynam approach . the commun structur is base on a so-cal diffus model approach . the link between adjac individu are dynam chosen accord to determinist or probabilist rule . 
due to self-organ effect , effici and stabl commun structur are establish that perform robustli and quickli on a multimod test function","ordered_present_kp":[131,677,247,229,31],"keyphrases":["parallelizing","evolutionary algorithms","convergence speed","convergence probability","multimodal test function","parallel evolutionary algorithms"],"prmu":["P","P","P","P","P","R"]} {"id":"903","title":"Modeling and simulation of an ABR flow control algorithm using a virtual source\/virtual destination switch","abstract":"The available bit rate (ABR) service class of asynchronous transfer mode networks uses a feedback control mechanism to adapt to varying link capacities. The virtual source\/virtual destination (VS\/VD) technique offers the possibility of segmenting the otherwise end-to-end ABR control loop into separate loops. The improved feedback delay and control of ABR traffic inside closed segments provide a better performance for ABR connections. This article presents the use of classical linear control theory to model and develop an ABR VS\/VD flow control algorithm. Discrete event simulations are used to analyze the behavior of the algorithm with respect to transient behavior and correctness of the control model. Linear control theory offers the means to derive correct choices of parameters and to assess performance issues, such as stability of the system, during the design phase. The performance goals are high link utilization, fair bandwidth distribution, and robust operation in various environments, which are verified by discrete event simulations. 
The major contribution of this work is the use of analytic methods (linear control theory) to model and design an ABR flow control algorithm tailored for the special layout of a VS\/VD switch, and the use of simulation techniques to verify the result","tok_text":"model and simul of an abr flow control algorithm use a virtual sourc \/ virtual destin switch \n the avail bit rate ( abr ) servic class of asynchron transfer mode network use a feedback control mechan to adapt to vari link capac . the virtual sourc \/ virtual destin ( vs \/ vd ) techniqu offer the possibl of segment the otherwis end-to-end abr control loop into separ loop . the improv feedback delay and control of abr traffic insid close segment provid a better perform for abr connect . thi articl present the use of classic linear control theori to model and develop an abr vs \/ vd flow control algorithm . discret event simul are use to analyz the behavior of the algorithm with respect to transient behavior and correct of the control model . linear control theori offer the mean to deriv correct choic of paramet and to assess perform issu , such as stabil of the system , dure the design phase . the perform goal are high link util , fair bandwidth distribut , and robust oper in variou environ , which are verifi by discret event simul . 
the major contribut of thi work is the use of analyt method ( linear control theori ) to model and design an abr flow control algorithm tailor for the special layout of a vs \/ vd switch , and the use of simul techniqu to verifi the result","ordered_present_kp":[0,22,55,176,217,343,385,433,519,610,694,732,833,856,924,941,972],"keyphrases":["modeling","ABR flow control algorithm","virtual source\/virtual destination switch","feedback control mechanism","link capacities","control loop","feedback delay","closed segments","classical linear control theory","discrete event simulations","transient behavior","control model","performance issues","stability","high link utilization","fair bandwidth distribution","robust operation","ATM networks","available bit rate service class","traffic control"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","M","R","R"]} {"id":"140","title":"A high-resolution high-frequency monolithic top-shooting microinjector free of satellite drops - part II: fabrication, implementation, and characterization","abstract":"For pt. I, see ibid., vol. 11, no. 5, p. 427-36 (2002). Describes the fabrication, implementation and characterization of a thermal driven microinjector, featuring a bubble check valve and monolithic fabrication. Microfabrication of this microinjector is based on bulk\/surface-combined micromachining of the silicon wafer, free of the bonding process that is commonly used in the fabrication of commercial printing head, so that even solvents and fuels can be ejected. Droplet ejection sequences of two microinjectors have been studied along with a commercial inkjet printhead for comparison. The droplet ejection of our microinjector with 10 mu m diameter nozzle has been characterized at a frequency over 35 kHz, at least 3 times higher than those of commercial counterparts. 
The droplet volume from this device is smaller than 1 pl, 10 times smaller than those of commercial inkjets employed in the consumer market at the time of testing. Visualization results have verified that our design, although far from being optimized, operates in the frequency several times higher than those of commercial products and reduces the crosstalk among neighboring chambers","tok_text":"a high-resolut high-frequ monolith top-shoot microinjector free of satellit drop - part ii : fabric , implement , and character \n for pt . i , see ibid . , vol . 11 , no . 5 , p. 427 - 36 ( 2002 ) . describ the fabric , implement and character of a thermal driven microinjector , featur a bubbl check valv and monolith fabric . microfabr of thi microinjector is base on bulk \/ surface-combin micromachin of the silicon wafer , free of the bond process that is commonli use in the fabric of commerci print head , so that even solvent and fuel can be eject . droplet eject sequenc of two microinjector have been studi along with a commerci inkjet printhead for comparison . the droplet eject of our microinjector with 10 mu m diamet nozzl ha been character at a frequenc over 35 khz , at least 3 time higher than those of commerci counterpart . the droplet volum from thi devic is smaller than 1 pl , 10 time smaller than those of commerci inkjet employ in the consum market at the time of test . 
visual result have verifi that our design , although far from be optim , oper in the frequenc sever time higher than those of commerci product and reduc the crosstalk among neighbor chamber","ordered_present_kp":[26,67,249,289,370,439,638,731,847,959,1152,774],"keyphrases":["monolithic top-shooting microinjector","satellite drops","thermal driven microinjector","bubble check valve","bulk\/surface-combined micromachining","bonding process","inkjet printhead","nozzle","35 kHz","droplet volume","consumer market","crosstalk","10 micron"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","M"]} {"id":"591","title":"Approximation theory of fuzzy systems based upon genuine many-valued implications - MIMO cases","abstract":"It is constructively proved that the multi-input-multi-output fuzzy systems based upon genuine many-valued implications are universal approximators (they are called Boolean type fuzzy systems in this paper). The general approach to construct such fuzzy systems is given, that is, through the partition of the output region (by the given accuracy). Two examples are provided to demonstrate the way in which fuzzy systems are designed to approximate given functions with a given required approximation accuracy","tok_text":"approxim theori of fuzzi system base upon genuin many-valu implic - mimo case \n it is construct prove that the multi-input-multi-output fuzzi system base upon genuin many-valu implic are univers approxim ( they are call boolean type fuzzi system in thi paper ) . the gener approach to construct such fuzzi system is given , that is , through the partit of the output region ( by the given accuraci ) . 
two exampl are provid to demonstr the way in which fuzzi system are design to approxim given function with a given requir approxim accuraci","ordered_present_kp":[111,220,19,49,187],"keyphrases":["fuzzy systems","many-valued implication","multi-input-multi-output fuzzy systems","universal approximator","Boolean type fuzzy systems"],"prmu":["P","P","P","P","P"]} {"id":"946","title":"Entanglement measures with asymptotic weak-monotonicity as lower (upper) bound for the entanglement of cost (distillation)","abstract":"We propose entanglement measures with asymptotic weak-monotonicity. We show that a normalized form of entanglement measures with the asymptotic weak-monotonicity are lower (upper) bound for the entanglement of cost (distillation)","tok_text":"entangl measur with asymptot weak-monoton as lower ( upper ) bound for the entangl of cost ( distil ) \n we propos entangl measur with asymptot weak-monoton . we show that a normal form of entangl measur with the asymptot weak-monoton are lower ( upper ) bound for the entangl of cost ( distil )","ordered_present_kp":[0,20,75,93],"keyphrases":["entanglement measures","asymptotic weak-monotonicity","entanglement of cost","distillation"],"prmu":["P","P","P","P"]} {"id":"105","title":"Greenberger-Horne-Zeilinger paradoxes for many qubits","abstract":"We construct Greenberger-Horne-Zeilinger (GHZ) contradictions for three or more parties sharing an entangled state, the dimension of each subsystem being an even integer d. The simplest example that goes beyond the standard GHZ paradox (three qubits) involves five ququats (d = 4). We then examine the criteria that a GHZ paradox must satisfy in order to be genuinely M partite and d dimensional","tok_text":"greenberger-horne-zeiling paradox for mani qubit \n we construct greenberger-horne-zeiling ( ghz ) contradict for three or more parti share an entangl state , the dimens of each subsystem be an even integ d. 
the simplest exampl that goe beyond the standard ghz paradox ( three qubit ) involv five ququat ( d = 4 ) . we then examin the criteria that a ghz paradox must satisfi in order to be genuin m partit and d dimension","ordered_present_kp":[0,38,142,256],"keyphrases":["Greenberger-Horne-Zeilinger paradoxes","many qubits","entangled state","GHZ paradox","GHZ contradictions"],"prmu":["P","P","P","P","R"]} {"id":"1139","title":"Development and evaluation of a case-based reasoning classifier for prediction of breast biopsy outcome with BI-RADS\/sup TM\/ lexicon","abstract":"Approximately 70-85% of breast biopsies are performed on benign lesions. To reduce this high number of biopsies performed on benign lesions, a case-based reasoning (CBR) classifier was developed to predict biopsy results from BI-RADS\/sup TM\/ findings. We used 1433 (931 benign) biopsy-proven mammographic cases. CBR similarity was defined using either the Hamming or Euclidean distance measure over case features. Ten features represented each case: calcification distribution, calcification morphology, calcification number, mass margin, mass shape, mass density, mass size, associated findings, special cases, and age. Performance was evaluated using Round Robin sampling, Receiver Operating Characteristic (ROC) analysis, and bootstrap. To determine the most influential features for the CBR, an exhaustive feature search was performed over all possible feature combinations (1022) and similarity thresholds. Influential features were defined as the most frequently occurring features in the feature subsets with the highest partial ROC areas (\/sub 0.90\/AUC). For CBR with Hamming distance, the most influential features were found to be mass margin, calcification morphology, age, calcification distribution, calcification number, and mass shape, resulting in an \/sub 0.90\/AUC of 0.33. At 95% sensitivity, the Hamming CBR would spare from biopsy 34% of the benign lesions. 
At 98% sensitivity, the Hamming CBR would spare 27% benign lesions. For the CBR with Euclidean distance, the most influential feature subset consisted of mass margin, calcification morphology, age, mass density, and associated findings, resulting in \/sub 0.90\/AUC of 0.37. At 95% sensitivity, the Euclidean CBR would spare from biopsy 41% benign lesions. At 98% sensitivity, the Euclidean CBR would spare 27% benign lesions. The profile of cases spared by both distance measures at 98% sensitivity indicates that the CBR is a potentially useful diagnostic tool for the classification of mammographic lesions, by recommending short-term follow-up for likely benign lesions that is in agreement with final biopsy results and mammographer's intuition","tok_text":"develop and evalu of a case-bas reason classifi for predict of breast biopsi outcom with bi-rad \/ sup tm\/ lexicon \n approxim 70 - 85 % of breast biopsi are perform on benign lesion . to reduc thi high number of biopsi perform on benign lesion , a case-bas reason ( cbr ) classifi wa develop to predict biopsi result from bi-rad \/ sup tm\/ find . we use 1433 ( 931 benign ) biopsy-proven mammograph case . cbr similar wa defin use either the ham or euclidean distanc measur over case featur . ten featur repres each case : calcif distribut , calcif morpholog , calcif number , mass margin , mass shape , mass densiti , mass size , associ find , special case , and age . perform wa evalu use round robin sampl , receiv oper characterist ( roc ) analysi , and bootstrap . to determin the most influenti featur for the cbr , an exhaust featur search wa perform over all possibl featur combin ( 1022 ) and similar threshold . influenti featur were defin as the most frequent occur featur in the featur subset with the highest partial roc area ( \/sub 0.90 \/ auc ) . 
for cbr with ham distanc , the most influenti featur were found to be mass margin , calcif morpholog , age , calcif distribut , calcif number , and mass shape , result in an \/sub 0.90 \/ auc of 0.33 . at 95 % sensit , the ham cbr would spare from biopsi 34 % of the benign lesion . at 98 % sensit , the ham cbr would spare 27 % benign lesion . for the cbr with euclidean distanc , the most influenti featur subset consist of mass margin , calcif morpholog , age , mass densiti , and associ find , result in \/sub 0.90 \/ auc of 0.37 . at 95 % sensit , the euclidean cbr would spare from biopsi 41 % benign lesion . at 98 % sensit , the euclidean cbr would spare 27 % benign lesion . the profil of case spare by both distanc measur at 98 % sensit indic that the cbr is a potenti use diagnost tool for the classif of mammograph lesion , by recommend short-term follow-up for like benign lesion that is in agreement with final biopsi result and mammograph 's intuit","ordered_present_kp":[23,63,167,372,404,447,521,540,559,575,589,602,617,629,643,662,689,756,873,900,989,1012,789,1838,1904],"keyphrases":["case-based reasoning classifier","breast biopsy outcome","benign lesions","biopsy-proven mammographic cases","CBR similarity","Euclidean distance measure","calcification distribution","calcification morphology","calcification number","mass margin","mass shape","mass density","mass size","associated findings","special cases","age","Round Robin sampling","bootstrap","influential features","feature combinations","similarity thresholds","feature subsets","highest partial ROC areas","diagnostic tool","short-term follow-up","BI-RADS lexicon","Hamming distance measure","Receiver Operating Characteristic analysis","mammographic lesion classification"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R","R"]} {"id":"1280","title":"Products and polymorphic subtypes","abstract":"This paper is devoted to a comprehensive study 
of polymorphic subtypes with products. We first present a sound and complete Hilbert style axiomatization of the relation of being a subtype in presence of to , * type constructors and the For all quantifier, and we show that such axiomatization is not encodable in the system with to , For all only. In order to give a logical semantics to such a subtyping relation, we propose a new form of a sequent which plays a key role in a natural deduction and a Gentzen style calculi. Interestingly enough, the sequent must have the form E implies T, where E is a non-commutative, non-empty sequence of typing assumptions and T is a finite binary tree of typing judgements, each of them behaving like a pushdown store. We study basic metamathematical properties of the two logical systems, such as subject reduction and cut elimination. Some decidability\/undecidability issues related to the presented subtyping relation are also explored: as expected, the subtyping over to , *, For all is undecidable, being already undecidable for the to , For all fragment (as proved in [15]), but for the *, For all fragment it turns out to be decidable","tok_text":"product and polymorph subtyp \n thi paper is devot to a comprehens studi of polymorph subtyp with product . we first present a sound and complet hilbert style axiomat of the relat of be a subtyp in presenc of to , * type constructor and the for all quantifi , and we show that such axiornat is not encod in the system with to , for all onli . in order to give a logic semant to such a subtyp relat , we propos a new form of a sequent which play a key role in a natur deduct and a gentzen style calculi . interestingli enough , the sequent must have the form e impli t , where e is a non-commut , non-empti sequenc of typ assumpt and t is a finit binari tree of typ judgement , each of them behav like a pushdown store . we studi basic metamathemat properti of the two logic system , such as subject reduct and cut elimin . 
some decid \/ undecid issu relat to the present subtyp relat are also explor : as expect , the subtyp over to , * , for all is undecid , be alreadi undecid for the to , for all fragment ( as prove in [ 15 ] ) , but for the * , for all fragment it turn out to be decid","ordered_present_kp":[12,144,361,479,640,704,829],"keyphrases":["polymorphic subtypes","Hilbert style axiomatization","logical semantics","Gentzen style calculi","finite binary tree","pushdown store","decidability","products subtypes","metamathematical properties"],"prmu":["P","P","P","P","P","P","P","R","M"]} {"id":"690","title":"Robust Kalman filter design for discrete time-delay systems","abstract":"The problem of finite- and infinite-horizon robust Kalman filtering for uncertain discrete-time systems with state delay is addressed. The system under consideration is subject to time-varying norm-bounded parameter uncertainty in both the state and output matrices. We develop a new methodology for designing a linear filter such that the error variance of the filter is guaranteed to be within a certain upper bound for any allowed uncertainty and time delay. The solution is given in terms of two Riccati equations. Multiple time-delay systems are also investigated","tok_text":"robust kalman filter design for discret time-delay system \n the problem of finite- and infinite-horizon robust kalman filter for uncertain discrete-tim system with state delay is address . the system under consider is subject to time-vari norm-bound paramet uncertainti in both the state and output matric . we develop a new methodolog for design a linear filter such that the error varianc of the filter is guarante to be within a certain upper bound for ani allow uncertainti and time delay . the solut is given in term of two riccati equat . 
multipl time-delay system are also investig","ordered_present_kp":[0,32,164,239,292,349,529],"keyphrases":["robust Kalman filter","discrete time-delay systems","state delay","norm-bounded parameter uncertainty","output matrices","linear filter","Riccati equations","uncertain systems","time-varying parameter uncertainty","state matrices","robust state estimation"],"prmu":["P","P","P","P","P","P","P","R","R","R","M"]} {"id":"859","title":"Developing a hardware and programming curriculum for middle school girls","abstract":"Techbridge provides experiences and resources that would teach girls technology skills as well as excite their curiosity and build their confidence. Funded by the National Science Foundation and sponsored by Chabot Space and Science Center in Oakland, California, Techbridge is a three-year program that serves approximately 200 girls annually. Techbridge is hosted at 8 middle and high schools in Oakland and at the California School for the Blind in Fremont, California generally as an after-school program meeting once a week. Techbridge comes at a critical time in girls' development when girls have many important decisions to make regarding classes and careers, but often lack the confidence and guidance to make the best choices. Techbridge helps girls plan for the next steps to high school and college with its role models and guidance. Techbridge also provides training and resources for teachers, counselors, and families","tok_text":"develop a hardwar and program curriculum for middl school girl \n techbridg provid experi and resourc that would teach girl technolog skill as well as excit their curios and build their confid . fund by the nation scienc foundat and sponsor by chabot space and scienc center in oakland , california , techbridg is a three-year program that serv approxim 200 girl annual . 
techbridg is host at 8 middl and high school in oakland and at the california school for the blind in fremont , california gener as an after-school program meet onc a week . techbridg come at a critic time in girl ' develop when girl have mani import decis to make regard class and career , but often lack the confid and guidanc to make the best choic . techbridg help girl plan for the next step to high school and colleg with it role model and guidanc . techbridg also provid train and resourc for teacher , counselor , and famili","ordered_present_kp":[45,10,65],"keyphrases":["hardware and programming curriculum","middle school girls","Techbridge","technology skills teaching"],"prmu":["P","P","P","R"]} {"id":"1362","title":"Process planning for reliable high-speed machining of moulds","abstract":"A method of generating NC programs for the high-speed milling of moulds is investigated. Forging dies and injection moulds, whether plastic or aluminium, have a complex surface geometry. In addition they are made of steels of hardness as much as 30 or even 50 HRC. Since 1995, high-speed machining has been much adopted by the die-making industry, which with this technology can reduce its use of Sinking Electrodischarge Machining (SEDM). EDM, in general, calls for longer machining times. The use of high-speed machining makes it necessary to redefine the preliminary stages of the process. In addition, it affects the methodology employed in the generation of NC programs, which requires the use of high-level CAM software. The aim is to generate error-free programs that make use of optimum cutting strategies in the interest of productivity and surface quality. The final result is a more reliable manufacturing process. There are two risks in the use of high-speed milling on hardened steels. One of these is tool breakage, which may be very costly and may furthermore entail marks on the workpiece. 
The other is collisions between the tool and the workpiece or fixtures, the result of which may be damage to the ceramic bearings in the spindles. In order to minimize these risks it is necessary that new control and optimization steps be included in the CAM methodology. There are three things that the firm adopting high-speed methods should do. It should redefine its process engineering, it should systematize access by its CAM programmers to high-speed knowhow, and it should take up the use of process simulation tools. In the latter case, it will be very advantageous to use tools for the estimation of cutting forces. The new work methods proposed in this article have made it possible to introduce high speed milling (HSM) into the die industry. Examples are given of how the technique has been applied with CAM programming re-engineered as here proposed, with an explanation of the novel features and the results","tok_text":"process plan for reliabl high-spe machin of mould \n a method of gener nc program for the high-spe mill of mould is investig . forg die and inject mould , whether plastic or aluminium , have a complex surfac geometri . in addit they are made of steel of hard as much as 30 or even 50 hrc . sinc 1995 , high-spe machin ha been much adopt by the die-mak industri , which with thi technolog can reduc it use of sink electrodischarg machin ( sedm ) . edm , in gener , call for longer machin time . the use of high-spe machin make it necessari to redefin the preliminari stage of the process . in addit , it affect the methodolog employ in the gener of nc program , which requir the use of high-level cam softwar . the aim is to gener error-fre program that make use of optimum cut strategi in the interest of product and surfac qualiti . the final result is a more reliabl manufactur process . there are two risk in the use of high-spe mill on harden steel . one of these is tool breakag , which may be veri costli and may furthermor entail mark on the workpiec . 
the other is collis between the tool and the workpiec or fixtur , the result of which may be damag to the ceram bear in the spindl . in order to minim these risk it is necessari that new control and optim step be includ in the cam methodolog . there are three thing that the firm adopt high-spe method should do . it should redefin it process engin , it should systemat access by it cam programm to high-spe knowhow , and it should take up the use of process simul tool . in the latter case , it will be veri advantag to use tool for the estim of cut forc . the new work method propos in thi articl have made it possibl to introduc high speed mill ( hsm ) into the die industri . exampl are given of how the techniqu ha been appli with cam program re-engin as here propos , with an explan of the novel featur and the result","ordered_present_kp":[44,17,0,70,89,126,139,192,729,764,804,816,939,970,1165,1286,1510,1795,772],"keyphrases":["process planning","reliable high-speed machining","moulds","NC programs","high-speed milling","forging dies","injection moulds","complex surface geometry","error-free programs","optimum cutting strategies","cutting strategies","productivity","surface quality","hardened steels","tool breakage","ceramic bearings","CAM methodology","process simulation tools","CAM programming re-engineering","tool workpiece collisions","process engineering redefinition"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","M"]} {"id":"737","title":"What's in a name? [mobile telephony branding]","abstract":"Mobile operators are frantically consolidating businesses into single international brands","tok_text":"what 's in a name ? 
[ mobil telephoni brand ] \n mobil oper are frantic consolid busi into singl intern brand","ordered_present_kp":[22,38,71],"keyphrases":["mobile telephony","branding","consolidating businesses"],"prmu":["P","P","P"]} {"id":"772","title":"Meshed atlases for real-time procedural solid texturing","abstract":"We describe an implementation of procedural solid texturing that uses the texture atlas, a one-to-one mapping from an object's surface into its texture space. The method uses the graphics hardware to rasterize the solid texture coordinates as colors directly into the atlas. A texturing procedure is applied per-pixel to the texture map, replacing each solid texture coordinate with its corresponding procedural solid texture result. The procedural solid texture is then mapped back onto the object surface using standard texture mapping. The implementation renders procedural solid textures in real time, and the user can design them interactively. The quality of this technique depends greatly on the layout of the texture atlas. A broad survey of texture atlas schemes is used to develop a set of general purpose mesh atlases and tools for measuring their effectiveness at distributing as many available texture samples as evenly across the surface as possible. The main contribution of this paper is a new multiresolution texture atlas. It distributes all available texture samples in a nearly uniform distribution. This multiresolution texture atlas also supports MIP-mapped minification antialiasing and linear magnification filtering","tok_text":"mesh atlas for real-tim procedur solid textur \n we describ an implement of procedur solid textur that use the textur atla , a one-to-on map from an object 's surfac into it textur space . the method use the graphic hardwar to raster the solid textur coordin as color directli into the atla . a textur procedur is appli per-pixel to the textur map , replac each solid textur coordin with it correspond procedur solid textur result . 
the procedur solid textur is then map back onto the object surfac use standard textur map . the implement render procedur solid textur in real time , and the user can design them interact . the qualiti of thi techniqu depend greatli on the layout of the textur atla . a broad survey of textur atla scheme is use to develop a set of gener purpos mesh atlas and tool for measur their effect at distribut as mani avail textur sampl as evenli across the surfac as possibl . the main contribut of thi paper is a new multiresolut textur atla . it distribut all avail textur sampl in a nearli uniform distribut . thi multiresolut textur atla also support mip-map minif antialias and linear magnif filter","ordered_present_kp":[15,0,110,126,484,173,207,226,237,261,538,943,1108,1080],"keyphrases":["meshed atlases","real-time procedural solid texturing","texture atlas","one-to-one mapping","texture space","graphics hardware","rasterization","solid texture coordinates","colors","object surface","rendering","multiresolution texture atlas","MIP-mapped minification antialiasing","linear magnification filtering"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1026","title":"Use of SPOT images as a tool for coastal zone management and monitoring of environmental impacts in the coastal zone","abstract":"Modern techniques such as remote sensing have been one of the main factors leading toward the achievement of serious plans regarding coastal management. A multitemporal analysis of land use in certain areas of the Colombian Caribbean Coast is described. It mainly focuses on environmental impacts caused by anthropogenic activities, such as deforestation of mangroves due to shrimp farming. Selection of sensitive areas, percentage of destroyed mangroves, possible endangered areas, etc., are some of the results of this analysis. Recommendations for a coastal management plan in the area have also resulted from this analysis. 
Some other consequences of the deforestation of mangroves in the coastal zone and the construction of shrimp ponds are also analyzed, such as the increase of erosion problems in these areas and water pollution, among others. The increase of erosion in these areas has also changed part of their morphology, which has been studied by the analysis of SPOT images in previous years. A serious concern exists about the future of these areas. For this reason new techniques like satellite images (SPOT) have been applied with good results, leading to more effective control and coastal management in the area. The use of SPOT images to study changes of the land use of the area is a useful technique to determine patterns of human activities and suggest solutions for severe problems in these areas","tok_text":"use of spot imag as a tool for coastal zone manag and monitor of environment impact in the coastal zone \n modern techniqu such as remot sens have been one of the main factor lead toward the achiev of seriou plan regard coastal manag . a multitempor analysi of land use in certain area of the colombian caribbean coast is describ . it mainli focus on environment impact caus by anthropogen activ , such as deforest of mangrov due to shrimp farm . select of sensit area , percentag of destroy mangrov , possibl endang area , etc . , are some of the result of thi analysi . recommend for a coastal manag plan in the area have also result from thi analysi . some other consequ of the deforest of mangrov in the coastal zone and the construct of shrimp pond are also analyz , such as the increas of eros problem in these area and water pollut , among other . the increas of eros in these area ha also chang part of their morpholog , which ha been studi by the analysi of spot imag in previou year . a seriou concern exist about the futur of these area . 
for thi reason new techniqu like satellit imag ( spot ) have been appli with good result , lead to more effect control and coastal manag in the area . the use of spot imag to studi chang of the land use of the area is a use techniqu to determin pattern of human activ and suggest solut for sever problem in these area","ordered_present_kp":[31,7,130,237,260,292,377,432,509,794,825,1082,1305,741],"keyphrases":["SPOT images","coastal zone management","remote sensing","multitemporal analysis","land use","Colombian Caribbean Coast","anthropogenic activities","shrimp farming","endangered areas","shrimp ponds","erosion problems","water pollution","satellite images","human activities","environmental impact monitoring","mangrove deforestation","supervised classification","sedimentation","vectorization","vector overlay"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","U","U","U","U"]} {"id":"1063","title":"Operations that do not disturb partially known quantum states","abstract":"Consider a situation in which a quantum system is secretly prepared in a state chosen from the known set of states. We present a principle that gives a definite distinction between the operations that preserve the states of the system and those that disturb the states. The principle is derived by alternately applying a fundamental property of classical signals and a fundamental property of quantum ones. The principle can be cast into a simple form by using a decomposition of the relevant Hilbert space, which is uniquely determined by the set of possible states. The decomposition implies the classification of the degrees of freedom of the system into three parts depending on how they store the information on the initially chosen state: one storing it classically, one storing it nonclassically, and the other one storing no information. 
Then the principle states that the nonclassical part is inaccessible and the classical part is read-only if we are to preserve the state of the system. From this principle, many types of no-cloning, no-broadcasting, and no-imprinting conditions can easily be derived in general forms including mixed states. It also gives a unified view on how various schemes of quantum cryptography work. The principle helps one to derive optimum amount of resources (bits, qubits, and ebits) required in data compression or in quantum teleportation of mixed-state ensembles","tok_text":"oper that do not disturb partial known quantum state \n consid a situat in which a quantum system is secretli prepar in a state chosen from the known set of state . we present a principl that give a definit distinct between the oper that preserv the state of the system and those that disturb the state . the principl is deriv by altern appli a fundament properti of classic signal and a fundament properti of quantum one . the principl can be cast into a simpl form by use a decomposit of the relev hilbert space , which is uniqu determin by the set of possibl state . the decomposit impli the classif of the degre of freedom of the system into three part depend on how they store the inform on the initi chosen state : one store it classic , one store it nonclass , and the other one store no inform . then the principl state that the nonclass part is inaccess and the classic part is read-onli if we are to preserv the state of the system . from thi principl , mani type of no-clon , no-broadcast , and no-imprint condit can easili be deriv in gener form includ mix state . it also give a unifi view on how variou scheme of quantum cryptographi work . 
the principl help one to deriv optimum amount of resourc ( bit , qubit , and ebit ) requir in data compress or in quantum teleport of mixed-st ensembl","ordered_present_kp":[25,82,366,499,609,836,1126,1213,1219,1231,1268,1288],"keyphrases":["partially known quantum states","quantum system","classical signals","Hilbert space","degrees of freedom","nonclassical part","quantum cryptography","bits","qubits","ebits","quantum teleportation","mixed-state ensembles","secretly prepared quantum state"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"1282","title":"Completeness of timed mu CRL","abstract":"Previously a straightforward extension of the process algebra mu CRL was proposed to explicitly deal with time. The process algebra mu CRL has been especially designed to deal with data in a process algebraic context. Using the features for data, only a minor extension of the language was needed to obtain a very expressive variant of time. Previously it contained syntax, operational semantics and axioms characterising timed mu CRL. It did not contain an in depth analysis of theory of timed mu CRL. This paper fills this gap, by providing soundness and completeness results. The main tool to establish these is a mapping of timed to untimed mu CRL and employing the completeness results obtained for untimed mu CRL","tok_text":"complet of time mu crl \n previous a straightforward extens of the process algebra mu crl wa propos to explicitli deal with time . the process algebra mu crl ha been especi design to deal with data in a process algebra context . use the featur for data , onli a minor extens of the languag wa need to obtain a veri express variant of time . previous it contain syntax , oper semant and axiom characteris time mu crl . it did not contain an in depth analysi of theori of time mu crl . thi paper fill thi gap , by provid sound and complet result . 
the main tool to establish these is a map of time to untim mu crl and employ the complet result obtain for untim mu crl","ordered_present_kp":[11,0,66,369],"keyphrases":["completeness","timed mu CRL","process algebra","operational semantics"],"prmu":["P","P","P","P"]} {"id":"692","title":"A partial converse to Hadamard's theorem on homeomorphisms","abstract":"A theorem by Hadamard gives a two-part condition under which a map from one Banach space to another is a homeomorphism. The theorem, while often very useful, is incomplete in the sense that it does not explicitly specify the family of maps for which the condition is met. Here, under a typically weak additional assumption on the map, we show that Hadamard's condition is met if, and only if, the map is a homeomorphism with a Lipschitz continuous inverse. An application is given concerning the relation between the stability of a nonlinear system and the stability of related linear systems","tok_text":"a partial convers to hadamard 's theorem on homeomorph \n a theorem by hadamard give a two-part condit under which a map from one banach space to anoth is a homeomorph . the theorem , while often veri use , is incomplet in the sens that it doe not explicitli specifi the famili of map for which the condit is met . here , under a typic weak addit assumpt on the map , we show that hadamard 's condit is met if , and onli if , the map is a homeomorph with a lipschitz continu invers . 
an applic is given concern the relat between the stabil of a nonlinear system and the stabil of relat linear system","ordered_present_kp":[2,44,129,456,547],"keyphrases":["partial converse","homeomorphisms","Banach space","Lipschitz continuous inverse","linearization","Hadamard theorem","nonlinear system stability","linear system stability","nonlinear feedback systems","nonlinear networks"],"prmu":["P","P","P","P","P","R","R","R","M","M"]} {"id":"1183","title":"Evolving robust asynchronous cellular automata for the density task","abstract":"In this paper the evolution of three kinds of asynchronous cellular automata are studied for the density task. Results are compared with those obtained for synchronous automata and the influence of various asynchronous update policies on the computational strategy is described. How synchronous and asynchronous cellular automata behave is investigated when the update policy is gradually changed, showing that asynchronous cellular automata are more adaptable. The behavior of synchronous and asynchronous evolved automata are studied under the presence of random noise of two kinds and it is shown that asynchronous cellular automata implicitly offer superior fault tolerance","tok_text":"evolv robust asynchron cellular automata for the densiti task \n in thi paper the evolut of three kind of asynchron cellular automata are studi for the densiti task . result are compar with those obtain for synchron automata and the influenc of variou asynchron updat polici on the comput strategi is describ . how synchron and asynchron cellular automata behav is investig when the updat polici is gradual chang , show that asynchron cellular automata are more adapt . 
the behavior of synchron and asynchron evolv automata are studi under the presenc of random nois of two kind and it is shown that asynchron cellular automata implicitli offer superior fault toler","ordered_present_kp":[13,23,653,554,206],"keyphrases":["asynchronous cellular automata","cellular automata","synchronous automata","random noise","fault tolerance","discrete dynamical systems"],"prmu":["P","P","P","P","P","U"]} {"id":"901","title":"Estimation of the Poisson stream intensity in a multilinear queue with an exponential job queue decay","abstract":"Times the busy queue periods start are found for a multilinear queue with an exponential job queue decay and uniform resource allocation to individual servers. The stream intensity and the average job are estimated from observations of the times the queue busy periods start","tok_text":"estim of the poisson stream intens in a multilinear queue with an exponenti job queue decay \n time the busi queue period start are found for a multilinear queue with an exponenti job queue decay and uniform resourc alloc to individu server . the stream intens and the averag job are estim from observ of the time the queue busi period start","ordered_present_kp":[13,40,66,103,199,21,224],"keyphrases":["Poisson stream intensity","stream intensity","multilinear queue","exponential job queue decay","busy queue periods start","uniform resource allocation","individual servers"],"prmu":["P","P","P","P","P","P","P"]} {"id":"142","title":"Surface micromachined paraffin-actuated microvalve","abstract":"Normally-open microvalves have been fabricated and tested which use a paraffin microactuator as the active element. The entire structure with nominal dimension of phi 600 mu m * 30 mu m is batch-fabricated by surface micromachining the actuator and channel materials on top of a single substrate. Gas flow rates in the 0.01-0.1 sccm range have been measured for several devices with actuation powers ranging from 50 to 150 mW on glass substrates. 
Leak rates as low as 500 mu sccm have been measured. The normally-open blocking microvalve structure has been used to fabricate a precision flow control system of microvalves consisting of four blocking valve structures. The control valve is designed to operate over a 0.01-5.0 sccm flow range at a differential pressure of 800 torr. Flow rates ranging from 0.02 to 4.996 sccm have been measured. Leak rates as low as 3.2 msccm for the four valve system have been measured","tok_text":"surfac micromachin paraffin-actu microvalv \n normally-open microvalv have been fabric and test which use a paraffin microactu as the activ element . the entir structur with nomin dimens of phi 600 mu m * 30 mu m is batch-fabr by surfac micromachin the actuat and channel materi on top of a singl substrat . ga flow rate in the 0.01 - 0.1 sccm rang have been measur for sever devic with actuat power rang from 50 to 150 mw on glass substrat . leak rate as low as 500 mu sccm have been measur . the normally-open block microvalv structur ha been use to fabric a precis flow control system of microvalv consist of four block valv structur . the control valv is design to oper over a 0.01 - 5.0 sccm flow rang at a differenti pressur of 800 torr . flow rate rang from 0.02 to 4.996 sccm have been measur . 
leak rate as low as 3.2 msccm for the four valv system have been measur","ordered_present_kp":[45,107,133,263,307,386,442,616,711,310,409,733],"keyphrases":["normally-open microvalves","paraffin microactuator","active element","channel materials","gas flow rates","flow rates","actuation powers","50 to 150 mW","leak rates","blocking valve structures","differential pressure","800 torr","surface micromachined microvalve","600 micron","30 micron"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R","M","M"]} {"id":"593","title":"Fuzzy systems with overlapping Gaussian concepts: Approximation properties in Sobolev norms","abstract":"In this paper the approximating capabilities of fuzzy systems with overlapping Gaussian concepts are considered. The target function is assumed to be sampled either on a regular gird or according to a uniform probability density. By exploiting a connection with Radial Basis Functions approximators, a new method for the computation of the system coefficients is provided, showing that it guarantees uniform approximation of the derivatives of the target function","tok_text":"fuzzi system with overlap gaussian concept : approxim properti in sobolev norm \n in thi paper the approxim capabl of fuzzi system with overlap gaussian concept are consid . the target function is assum to be sampl either on a regular gird or accord to a uniform probabl densiti . 
by exploit a connect with radial basi function approxim , a new method for the comput of the system coeffici is provid , show that it guarante uniform approxim of the deriv of the target function","ordered_present_kp":[0,18,306],"keyphrases":["fuzzy systems","overlapping Gaussian concepts","radial basis functions","learning","fuzzy system models","reproducing kernel Hilbert spaces"],"prmu":["P","P","P","U","M","U"]} {"id":"944","title":"Conditions for the local manipulation of Gaussian states","abstract":"We present a general necessary and sufficient criterion for the possibility of a state transformation from one mixed Gaussian state to another of a bipartite continuous-variable system with two modes. The class of operations that will be considered is the set of local Gaussian completely positive trace-preserving maps","tok_text":"condit for the local manipul of gaussian state \n we present a gener necessari and suffici criterion for the possibl of a state transform from one mix gaussian state to anoth of a bipartit continuous-vari system with two mode . the class of oper that will be consid is the set of local gaussian complet posit trace-preserv map","ordered_present_kp":[15,32,121,179,308],"keyphrases":["local manipulation","Gaussian states","state transformation","bipartite continuous-variable system","trace-preserving maps","quantum information theory"],"prmu":["P","P","P","P","P","U"]} {"id":"107","title":"Deterministic single-photon source for distributed quantum networking","abstract":"A sequence of single photons is emitted on demand from a single three-level atom strongly coupled to a high-finesse optical cavity. The photons are generated by an adiabatically driven stimulated Raman transition between two atomic ground states, with the vacuum field of the cavity stimulating one branch of the transition, and laser pulses deterministically driving the other branch. 
This process is unitary and therefore intrinsically reversible, which is essential for quantum communication and networking, and the photons should be appropriate for all-optical quantum information processing","tok_text":"determinist single-photon sourc for distribut quantum network \n a sequenc of singl photon is emit on demand from a singl three-level atom strongli coupl to a high-finess optic caviti . the photon are gener by an adiabat driven stimul raman transit between two atom ground state , with the vacuum field of the caviti stimul one branch of the transit , and laser puls determinist drive the other branch . thi process is unitari and therefor intrins revers , which is essenti for quantum commun and network , and the photon should be appropri for all-opt quantum inform process","ordered_present_kp":[0,36,115,158,212,289,477,544],"keyphrases":["deterministic single-photon source","distributed quantum networking","single three-level atom","high-finesse optical cavity","adiabatically driven stimulated Raman transition","vacuum field","quantum communication","all-optical quantum information processing"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"55","title":"Self-testing chips take a load off ATE","abstract":"Looks at how chipmakers get more life out of automatic test equipment by embedding innovative circuits in silicon","tok_text":"self-test chip take a load off ate \n look at how chipmak get more life out of automat test equip by embed innov circuit in silicon","ordered_present_kp":[0,31,78,106],"keyphrases":["self-testing chips","ATE","automatic test equipment","innovative circuits","design-for-test techniques","embedded deterministic testing technique"],"prmu":["P","P","P","P","U","M"]} {"id":"979","title":"Design, analysis and testing of some parallel two-step W-methods for stiff systems","abstract":"Parallel two-step W-methods are linearly-implicit integration methods where the s stage values can be computed in parallel. 
We construct methods of stage order q = s and order p = s with favourable stability properties. Generalizations for the concepts of A- and L-stability are proposed and conditions for stiff accuracy are given. Numerical comparisons on a shared memory computer show the efficiency of the methods, especially in combination with Krylov-techniques for large stiff systems","tok_text":"design , analysi and test of some parallel two-step w-method for stiff system \n parallel two-step w-method are linearly-implicit integr method where the s stage valu can be comput in parallel . we construct method of stage order q = s and order p = s with favour stabil properti . gener for the concept of a- and l-stabil are propos and condit for stiff accuraci are given . numer comparison on a share memori comput show the effici of the method , especi in combin with krylov-techniqu for larg stiff system","ordered_present_kp":[34,491,111,217,263,397,471],"keyphrases":["parallel two-step W-methods","linearly-implicit integration methods","stage order","stability","shared memory computer","Krylov-techniques","large stiff systems","differential equations","convergence analysis"],"prmu":["P","P","P","P","P","P","P","U","M"]} {"id":"652","title":"A case for end system multicast","abstract":"The conventional wisdom has been that Internet protocol (IP) is the natural protocol layer for implementing multicast related functionality. However, more than a decade after its initial proposal, IP multicast is still plagued with concerns pertaining to scalability, network management, deployment, and support for higher layer functionality such as error, flow, and congestion control. We explore an alternative architecture that we term end system multicast, where end systems implement all multicast related functionality including membership management and packet replication. This shifting of multicast support from routers to end systems has the potential to address most problems associated with IP multicast. 
However, the key concern is the performance penalty associated with such a model. In particular, end system multicast introduces duplicate packets on physical links and incurs larger end-to-end delays than IP multicast. We study these performance concerns in the context of the Narada protocol. In Narada, end systems self-organize into an overlay structure using a fully distributed protocol. Further, end systems attempt to optimize the efficiency of the overlay by adapting to network dynamics and by considering application level performance. We present details of Narada and evaluate it using both simulation and Internet experiments. Our results indicate that the performance penalties are low both from the application and the network perspectives. We believe the potential benefits of transferring multicast functionality from end systems to routers significantly outweigh the performance penalty incurred","tok_text":"a case for end system multicast \n the convent wisdom ha been that internet protocol ( ip ) is the natur protocol layer for implement multicast relat function . howev , more than a decad after it initi propos , ip multicast is still plagu with concern pertain to scalabl , network manag , deploy , and support for higher layer function such as error , flow , and congest control . we explor an altern architectur that we term end system multicast , where end system implement all multicast relat function includ membership manag and packet replic . thi shift of multicast support from router to end system ha the potenti to address most problem associ with ip multicast . howev , the key concern is the perform penalti associ with such a model . in particular , end system multicast introduc duplic packet on physic link and incur larger end-to-end delay than ip multicast . we studi these perform concern in the context of the narada protocol . in narada , end system self-organ into an overlay structur use a fulli distribut protocol . 
further , end system attempt to optim the effici of the overlay by adapt to network dynam and by consid applic level perform . we present detail of narada and evalu it use both simul and internet experi . our result indic that the perform penalti are low both from the applic and the network perspect . we believ the potenti benefit of transfer multicast function from end system to router significantli outweigh the perform penalti incur","ordered_present_kp":[11,66,104,210,272,313,362,511,532,837,927,987,1016,1113,1141,1214,1224,702],"keyphrases":["end system multicast","Internet protocol","protocol layer","IP multicast","network management","higher layer functionality","congestion control","membership management","packet replication","performance penalties","end-to-end delays","Narada protocol","overlay structure","distributed protocol","network dynamics","application level performance","simulation","Internet experiments","network scalability","network routers","self-organizing protocol"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R"]} {"id":"617","title":"Estimation of trifocal tensor using GMM","abstract":"A novel estimation of a trifocal tensor based on the Gaussian mixture model (GMM) is presented. The mixture model is built assuming that the residuals of inliers and outliers belong to different Gaussian distributions. The Bayesian rule is then employed to detect the inliers for re-estimation. Experiments show that the presented method is more precise and relatively unaffected by outliers","tok_text":"estim of trifoc tensor use gmm \n a novel estim of a trifoc tensor base on the gaussian mixtur model ( gmm ) is present . the mixtur model is built assum that the residu of inlier and outlier belong to differ gaussian distribut . the bayesian rule is then employ to detect the inlier for re-estim . 
experi show that the present method is more precis and rel unaffect by outlier","ordered_present_kp":[27,78,208,233,172,183],"keyphrases":["GMM","Gaussian mixture model","inliers","outliers","Gaussian distributions","Bayesian rule","trifocal tensor estimation","motion analysis","image data","image analysis"],"prmu":["P","P","P","P","P","P","R","U","U","U"]} {"id":"68","title":"Human factors research on data modeling: a review of prior research, an extended framework and future research directions","abstract":"This study reviews and synthesizes human factors research on conceptual data modeling. In addition to analyzing the variables used in earlier studies and summarizing the results of this stream of research, we propose a new framework to help with future efforts in this area. The study finds that prior research has focused on issues that are relevant when conceptual models are used for communication between systems analysts and developers (Analyst Developer models) whereas the issues important for models that are used to facilitate communication between analysts and users (User-Analyst models) have received little attention and, hence, require a significantly stronger role in future research. In addition, we emphasize the importance of building a strong theoretical foundation and using it to guide future empirical work in this area","tok_text":"human factor research on data model : a review of prior research , an extend framework and futur research direct \n thi studi review and synthes human factor research on conceptu data model . in addit to analyz the variabl use in earlier studi and summar the result of thi stream of research , we propos a new framework to help with futur effort in thi area . 
the studi find that prior research ha focus on issu that are relev when conceptu model are use for commun between system analyst and develop ( analyst develop model ) wherea the issu import for model that are use to facilit commun between analyst and user ( user-analyst model ) have receiv littl attent and , henc , requir a significantli stronger role in futur research . in addit , we emphas the import of build a strong theoret foundat and use it to guid futur empir work in thi area","ordered_present_kp":[0,169,332,502,617],"keyphrases":["human factors","conceptual data modeling","future efforts","Analyst Developer models","User-Analyst models","database"],"prmu":["P","P","P","P","P","U"]} {"id":"1242","title":"VPP Fortran and the design of HPF\/JA extensions","abstract":"VPP Fortran is a data parallel language that has been designed for the VPP series of supercomputers. In addition to pure data parallelism, it contains certain low-level features that were designed to extract high performance from user programs. A comparison of VPP Fortran and High-Performance Fortran (HPF) 2.0 shows that these low-level features are not available in HPF 2.0. The features include asynchronous interprocessor communication, explicit shadow, and the LOCAL directive. They were shown in VPP Fortran to be very useful in handling real-world applications, and they have been included in the HPF\/JA extensions. They are described in the paper. The HPF\/JA Language Specification Version 1.0 is an extension of HPF 2.0 to achieve practical performance for real-world applications and is a result of collaboration in the Japan Association for HPF (JAHPF). Some practical programming and tuning procedures with the HPF\/JA Language Specification are described, using the NAS Parallel Benchmark BT as an example","tok_text":"vpp fortran and the design of hpf \/ ja extens \n vpp fortran is a data parallel languag that ha been design for the vpp seri of supercomput . 
in addit to pure data parallel , it contain certain low-level featur that were design to extract high perform from user program . a comparison of vpp fortran and high-perform fortran ( hpf ) 2.0 show that these low-level featur are not avail in hpf 2.0 . the featur includ asynchron interprocessor commun , explicit shadow , and the local direct . they were shown in vpp fortran to be veri use in handl real-world applic , and they have been includ in the hpf \/ ja extens . they are describ in the paper . the hpf \/ ja languag specif version 1.0 is an extens of hpf 2.0 to achiev practic perform for real-world applic and is a result of collabor in the japan associ for hpf ( jahpf ) . some practic program and tune procedur with the hpf \/ ja languag specif are describ , use the na parallel benchmark bt as an exampl","ordered_present_kp":[0,65,65,238,414,448,933],"keyphrases":["VPP Fortran","data parallel language","data parallelism","high performance","asynchronous interprocessor communication","explicit shadow","benchmark","asynchronous communication","data locality"],"prmu":["P","P","P","P","P","P","P","R","R"]} {"id":"1207","title":"Packet spacing: an enabling mechanism for delivering multimedia content in computational grids","abstract":"Streaming multimedia with UDP has become increasingly popular over distributed systems like the Internet. Scientific applications that stream multimedia include remote computational steering of visualization data and video-on-demand teleconferencing over the Access Grid. However, UDP does not possess a self-regulating, congestion-control mechanism; and most best-effort traffic is served by congestion-controlled TCP. Consequently, UDP steals bandwidth from TCP such that TCP flows starve for network resources. With the volume of Internet traffic continuing to increase, the perpetuation of UDP-based streaming will cause the Internet to collapse as it did in the mid-1980's due to the use of non-congestion-controlled TCP. 
To address this problem, we introduce the counter-intuitive notion of inter-packet spacing with control feedback to enable UDP-based applications to perform well in the next-generation Internet and computational grids. When compared with traditional UDP-based streaming, we illustrate that our approach can reduce packet loss over 50% without adversely affecting delivered throughput","tok_text":"packet space : an enabl mechan for deliv multimedia content in comput grid \n stream multimedia with udp ha becom increasingli popular over distribut system like the internet . scientif applic that stream multimedia includ remot comput steer of visual data and video-on-demand teleconferenc over the access grid . howev , udp doe not possess a self-regul , congestion-control mechan ; and most best-effort traffic is serv by congestion-control tcp . consequ , udp steal bandwidth from tcp such that tcp flow starv for network resourc . with the volum of internet traffic continu to increas , the perpetu of udp-bas stream will caus the internet to collaps as it did in the mid-1980 's due to the use of non-congestion-control tcp . to address thi problem , we introduc the counter-intuit notion of inter-packet space with control feedback to enabl udp-bas applic to perform well in the next-gener internet and comput grid . when compar with tradit udp-bas stream , we illustr that our approach can reduc packet loss over 50 % without advers affect deliv throughput","ordered_present_kp":[77,100,139,165,222,244,797,606],"keyphrases":["streaming multimedia","UDP","distributed systems","Internet","remote computational steering","visualization data","UDP-based streaming","inter-packet spacing","network protocol","transport protocols"],"prmu":["P","P","P","P","P","P","P","P","M","U"]} {"id":"95","title":"SIA shelves T+1 decision till 2004","abstract":"The Securities Industry Association has decided that a move to T+1 is more than the industry can handle right now. 
STP, however, will remain a focus","tok_text":"sia shelv t+1 decis till 2004 \n the secur industri associ ha decid that a move to t+1 is more than the industri can handl right now . stp , howev , will remain a focu","ordered_present_kp":[36,10],"keyphrases":["T+1","Securities Industry Association","straight-through-processing"],"prmu":["P","P","U"]} {"id":"553","title":"Application of traditional system design techniques to Web site design","abstract":"After several decades of computer program construction there emerged a set of principles that provided guidance to produce more manageable programs. With the emergence of the plethora of Internet web sites one wonders if similar guidelines are followed in their construction. Since this is a new technology no apparent universally accepted methods have emerged to guide the designer in Web site construction. This paper reviews the traditional principles of structured programming and the preferred characteristics of Web sites. Finally a mapping of how the traditional guidelines may be applied to Web site construction is presented. The application of the traditional principles of structured programming to the design of a Web site can provide a more usable site for the visitors to the site. The additional benefit of using these time-honored techniques is the creation of a Web site that will be easier to maintain by the development staff","tok_text":"applic of tradit system design techniqu to web site design \n after sever decad of comput program construct there emerg a set of principl that provid guidanc to produc more manag program . with the emerg of the plethora of internet web site one wonder if similar guidelin are follow in their construct . sinc thi is a new technolog no appar univers accept method have emerg to guid the design in web site construct . thi paper review the tradit principl of structur program and the prefer characterist of web site . 
final a map of how the tradit guidelin may be appli to web site construct is present . the applic of the tradit principl of structur program to the design of a web site can provid a more usabl site for the visitor to the site . the addit benefit of use these time-honor techniqu is the creation of a web site that will be easier to maintain by the develop staff","ordered_present_kp":[17,456],"keyphrases":["system design techniques","structured programming","Internet Web site design"],"prmu":["P","P","R"]} {"id":"984","title":"Bistability of harmonically forced relaxation oscillations","abstract":"Relaxation oscillations appear in processes which involve transitions between two states characterized by fast and slow time scales. When a relaxation oscillator is coupled to an external periodic force its entrainment by the force results in a response which can include multiple periodicities and bistability. The prototype of these behaviors is the harmonically driven van der Pol equation which displays regions in the parameter space of the driving force amplitude where stable orbits of periods 2n+or-1 coexist, flanked by regions of periods 2n+1 and 2n-1. The parameter regions of such bistable orbits are derived analytically for the closely related harmonically driven Stoker-Haag piecewise discontinuous equation. The results are valid over most of the control parameter space of the system. Also considered are the reasons for the more complicated dynamics featuring regions of high multiple periodicity which appear like noise between ordered periodic regions. Since this system mimics in detail the less analytically tractable forced van der Pol equation, the results suggest extensions to situations where forced relaxation oscillations are a component of the operating mechanisms","tok_text":"bistabl of harmon forc relax oscil \n relax oscil appear in process which involv transit between two state character by fast and slow time scale . 
when a relax oscil is coupl to an extern period forc it entrain by the forc result in a respons which can includ multipl period and bistabl . the prototyp of these behavior is the harmon driven van der pol equat which display region in the paramet space of the drive forc amplitud where stabl orbit of period 2n+or-1 coexist , flank by region of period 2n+1 and 2n-1 . the paramet region of such bistabl orbit are deriv analyt for the close relat harmon driven stoker-haag piecewis discontinu equat . the result are valid over most of the control paramet space of the system . also consid are the reason for the more complic dynam featur region of high multipl period which appear like nois between order period region . sinc thi system mimic in detail the less analyt tractabl forc van der pol equat , the result suggest extens to situat where forc relax oscil are a compon of the oper mechan","ordered_present_kp":[0,11,180,202,340,593,685],"keyphrases":["bistability","harmonically forced relaxation oscillations","external periodic force","entrainment","van der Pol equation","harmonically driven Stoker-Haag piecewise discontinuous equation","control parameter space","nonlinear dynamics"],"prmu":["P","P","P","P","P","P","P","M"]} {"id":"1143","title":"A three-source model for the calculation of head scatter factors","abstract":"Accurate determination of the head scatter factor S\/sub c\/ is an important issue, especially for intensity modulated radiation therapy, where the segmented fields are often very irregular and much less than the collimator jaw settings. In this work, we report an S\/sub c\/ calculation algorithm for symmetric, asymmetric, and irregular open fields shaped by the tertiary collimator (a multileaf collimator or blocks) at different source-to-chamber distance. 
The algorithm was based on a three-source model, in which the photon radiation to the point of calculation was treated as if it originated from three effective sources: one source for the primary photons from the target and two extra-focal photon sources for the scattered photons from the primary collimator and the flattening filter, respectively. The field mapping method proposed by Kim et al. [Phys. Med. Biol. 43, 1593-1604 (1998)] was extended to two extra-focal source planes and the scatter contributions were integrated over the projected areas (determined by the detector's eye view) in the three source planes considering the source intensity distributions. The algorithm was implemented using Microsoft Visual C\/C++ in the MS Windows environment. The only input data required were head scatter factors for symmetric square fields, which are normally acquired during machine commissioning. A large number of different fields were used to evaluate the algorithm and the results were compared with measurements. We found that most of the calculated S\/sub c\/'s agreed with the measured values to within 0.4%. The algorithm can also be easily applied to deal with irregular fields shaped by a multileaf collimator that replaces the upper or lower collimator jaws","tok_text":"a three-sourc model for the calcul of head scatter factor \n accur determin of the head scatter factor s \/ sub c\/ is an import issu , especi for intens modul radiat therapi , where the segment field are often veri irregular and much less than the collim jaw set . in thi work , we report an s \/ sub c\/ calcul algorithm for symmetr , asymmetr , and irregular open field shape by the tertiari collim ( a multileaf collim or block ) at differ source-to-chamb distanc . 
the algorithm wa base on a three-sourc model , in which the photon radiat to the point of calcul wa treat as if it origin from three effect sourc : one sourc for the primari photon from the target and two extra-foc photon sourc for the scatter photon from the primari collim and the flatten filter , respect . the field map method propos by kim et al . [ phi . med . biol . 43 , 1593 - 1604 ( 1998 ) ] wa extend to two extra-foc sourc plane and the scatter contribut were integr over the project area ( determin by the detector 's eye view ) in the three sourc plane consid the sourc intens distribut . the algorithm wa implement use microsoft visual c \/ c++ in the ms window environ . the onli input data requir were head scatter factor for symmetr squar field , which are normal acquir dure machin commiss . a larg number of differ field were use to evalu the algorithm and the result were compar with measur . we found that most of the calcul s \/ sub c\/ 's agre with the measur valu to within 0.4 % . 
the algorithm can also be easili appli to deal with irregular field shape by a multileaf collim that replac the upper or lower collim jaw","ordered_present_kp":[2,38,144,184,246,301,322,192,332,192,347,381,401,421,439,525,655,670,701,725,748,779,884,1043,1131,1160,1207,1258,1590],"keyphrases":["three-source model","head scatter factors","intensity modulated radiation therapy","segmented fields","fields","fields","collimator jaw settings","calculation algorithm","symmetric","asymmetric","irregular open fields","tertiary collimator","multileaf collimator","blocks","source-to-chamber distance","photon radiation","target","extra-focal photon sources","scattered photons","primary collimator","flattening filter","field mapping method","extra-focal source planes","source intensity distributions","MS Windows environment","input data","symmetric square fields","machine commissioning","lower collimator jaws","upper collimator jaws"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"1106","title":"Virtual projects at Halden [Reactor Project]","abstract":"The Halden man-machine systems (MMS) programme for 2002 is intended to address issues related to human factors, control room design, computer-based support system areas and system safety and reliability. The Halden MMS programme is intended to address extensive experimental work in the human factors, control room design and computer-based support system areas. The work is based on experiments and demonstrations carried out in the experimental facility HAMMLAB. Pilot-versions of several operator aids are adopted and integrated to the HAMMLAB simulators and demonstrated in a full dynamic setting. 
The Halden virtual reality laboratory has recently become an integral and important part of the programme","tok_text":"virtual project at halden [ reactor project ] \n the halden man-machin system ( mm ) programm for 2002 is intend to address issu relat to human factor , control room design , computer-bas support system area and system safeti and reliabl . the halden mm programm is intend to address extens experiment work in the human factor , control room design and computer-bas support system area . the work is base on experi and demonstr carri out in the experiment facil hammlab . pilot-vers of sever oper aid are adopt and integr to the hammlab simul and demonstr in a full dynam set . the halden virtual realiti laboratori ha recent becom an integr and import part of the programm","ordered_present_kp":[137,174,218,229,588,152],"keyphrases":["human factors","control room design","computer-based support system","safety","reliability","virtual reality","Halden Reactor Project","man-machine systems programme"],"prmu":["P","P","P","P","P","P","R","R"]} {"id":"899","title":"Mathematical model of functioning of an insurance company with allowance for advertising expenses","abstract":"A mathematical model of the functioning of an insurance company with allowance for advertising expenses is suggested. The basic characteristics of the capital of the company and the advertising efficiency are examined in the case in which the advertising expenses are proportional to the capital","tok_text":"mathemat model of function of an insur compani with allow for advertis expens \n a mathemat model of the function of an insur compani with allow for advertis expens is suggest . 
the basic characterist of the capit of the compani and the advertis effici are examin in the case in which the advertis expens are proport to the capit","ordered_present_kp":[207,0],"keyphrases":["mathematical model","capital","insurance company functioning","advertising expenses allowance"],"prmu":["P","P","R","R"]} {"id":"864","title":"Valuing corporate debt: the effect of cross-holdings of stock and debt","abstract":"We have developed a simple approach to valuing risky corporate debt when corporations own securities issued by other corporations. We assume that corporate debt can be valued as an option on corporate business asset value, and derive payoff functions when there exist cross-holdings of stock or debt between two firms. Next we show that payoff functions with multiple cross-holdings can be solved by the contraction principle. The payoff functions which we derive provide a number of insights about the risk structure of company cross-holdings. First, the Modigliani-Miller theorem can obtain when there exist cross-holdings between firms. Second, by establishing cross-shareholdings each of stock holders distributes a part of its payoff values to the bond holder of the other's firm, so that both firms can decrease credit risks by cross-shareholdings. In the numerical examples, we show that the correlation in firms can be a critical condition for reducing credit risk by cross-holdings of stock using Monte Carlo simulation. Moreover, we show we can calculate the default spread easily when complicated cross-holdings exist, and find which shares are beneficial or disadvantageous","tok_text":"valu corpor debt : the effect of cross-hold of stock and debt \n we have develop a simpl approach to valu riski corpor debt when corpor own secur issu by other corpor . we assum that corpor debt can be valu as an option on corpor busi asset valu , and deriv payoff function when there exist cross-hold of stock or debt between two firm . 
next we show that payoff function with multipl cross-hold can be solv by the contract principl . the payoff function which we deriv provid a number of insight about the risk structur of compani cross-hold . first , the modigliani-mil theorem can obtain when there exist cross-hold between firm . second , by establish cross-sharehold each of stock holder distribut a part of it payoff valu to the bond holder of the other 's firm , so that both firm can decreas credit risk by cross-sharehold . in the numer exampl , we show that the correl in firm can be a critic condit for reduc credit risk by cross-hold of stock use mont carlo simul . moreov , we show we can calcul the default spread easili when complic cross-hold exist , and find which share are benefici or disadvantag","ordered_present_kp":[139,212,222,257,376,556,655,734,799,871,958],"keyphrases":["securities","option","corporate business asset value","payoff functions","multiple cross-holdings","Modigliani-Miller theorem","cross-shareholdings","bond holder","credit risks","correlation","Monte Carlo simulation","risky corporate debt valuation","stock holdings","debt holdings"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","M","M","M"]} {"id":"821","title":"Digital rights (and wrongs)","abstract":"Attempting to grasp the many conflicts and proposed safeguards for intellectual property is extremely difficult. Legal, political, economic, and cultural issues-both domestic and international-loom large, almost dwarfing the daunting technological challenges. Solutions devised by courts and legislatures and regulatory agencies are always late out of the blocks and fall ever farther behind. Recently proposed legislation only illustrates the depth and complexity of the problem","tok_text":"digit right ( and wrong ) \n attempt to grasp the mani conflict and propos safeguard for intellectu properti is extrem difficult . 
legal , polit , econom , and cultur issues-both domest and international-loom larg , almost dwarf the daunt technolog challeng . solut devis by court and legislatur and regulatori agenc are alway late out of the block and fall ever farther behind . recent propos legisl onli illustr the depth and complex of the problem","ordered_present_kp":[88],"keyphrases":["intellectual property","cultural issues","economic issues","political issues","legal issues"],"prmu":["P","M","M","M","M"]} {"id":"1437","title":"Improving the frequency stability of microwave oscillators by utilizing the dual-mode sapphire-loaded cavity resonator","abstract":"The design and experimental testing of a novel control circuit to stabilize the temperature of a sapphire-loaded cavity whispering gallery resonator-oscillator and improve its medium-term frequency stability is presented. Finite-element software was used to predict frequencies and quality factors of WGE\/sub 7,0,0\/ and the WGH\/sub 9,0,0\/ modes near 9 GHz, and separated in frequency by approximately 80 MHz. Calculations show that the novel temperature control circuits from the difference frequency can result in a frequency stability of better than one part in 10\/sup 13\/ at 270 K. Also, we present details on the best way to couple orthogonally to two modes of similar frequency but different polarization","tok_text":"improv the frequenc stabil of microwav oscil by util the dual-mod sapphire-load caviti reson \n the design and experiment test of a novel control circuit to stabil the temperatur of a sapphire-load caviti whisper galleri resonator-oscil and improv it medium-term frequenc stabil is present . finite-el softwar wa use to predict frequenc and qualiti factor of wge \/ sub 7,0,0\/ and the wgh \/ sub 9,0,0\/ mode near 9 ghz , and separ in frequenc by approxim 80 mhz . calcul show that the novel temperatur control circuit from the differ frequenc can result in a frequenc stabil of better than one part in 10 \/ sup 13\/ at 270 k. 
also , we present detail on the best way to coupl orthogon to two mode of similar frequenc but differ polar","ordered_present_kp":[30,488,57,11,204,524,410],"keyphrases":["frequency stability","microwave oscillators","dual-mode sapphire-loaded cavity resonator","whispering gallery resonator-oscillator","9 GHz","temperature control circuit","difference frequency","frequency standard","temperature stabilisation","finite-element analysis","whispering gallery modes","high-quality factor","270 K"],"prmu":["P","P","P","P","P","P","P","M","M","M","R","M","M"]} {"id":"577","title":"A robust H\/sub infinity \/ control approach for induction motors","abstract":"This paper deals with the robustness and stability of an induction motor control structure against internal and external disturbances. In the proposed control scheme, we have used an H\/sub infinity \/ controller with field orientation and input-output linearization to achieve the above-specified features. Simulation results are included to illustrate the control approach performances","tok_text":"a robust h \/ sub infin \/ control approach for induct motor \n thi paper deal with the robust and stabil of an induct motor control structur against intern and extern disturb . in the propos control scheme , we have use an h \/ sub infin \/ control with field orient and input-output linear to achiev the above-specifi featur . 
simul result are includ to illustr the control approach perform","ordered_present_kp":[2,109,2,96,158,250,267],"keyphrases":["robust H\/sub infinity \/ control","robustness","stability","induction motors control","external disturbances","field orientation","input-output linearization","internal disturbances"],"prmu":["P","P","P","P","P","P","P","R"]} {"id":"1167","title":"A new approach to the d-MC problem","abstract":"Many real-world systems are multi-state systems composed of multi-state components in which the reliability can be computed in terms of the lower bound points of level d, called d-Mincuts (d-MCs). Such systems (electric power, transportation, etc.) may be regarded as flow networks whose arcs have independent, discrete, limited and multi-valued random capacities. In this paper, all MCs are assumed to be known in advance, and the authors focused on how to verify each d-MC candidate before using d-MCs to calculate the network reliability. The proposed algorithm is more efficient than existing algorithms. The algorithm runs in O(p sigma mn) time, a significant improvement over the previous O(p sigma m\/sup 2\/) time bounds based on max-flow\/min-cut, where p and or are the number of MCs and d-MC candidates, respectively. It is simple, intuitive and uses no complex data structures. An example is given to show how all d-MC candidates are found and verified by the proposed algorithm. Then the reliability of this example is computed","tok_text":"a new approach to the d-mc problem \n mani real-world system are multi-st system compos of multi-st compon in which the reliabl can be comput in term of the lower bound point of level d , call d-mincut ( d-mc ) . such system ( electr power , transport , etc . ) may be regard as flow network whose arc have independ , discret , limit and multi-valu random capac . in thi paper , all mc are assum to be known in advanc , and the author focus on how to verifi each d-mc candid befor use d-mc to calcul the network reliabl . 
the propos algorithm is more effici than exist algorithm . the algorithm run in o(p sigma mn ) time , a signific improv over the previou o(p sigma m \/ sup 2\/ ) time bound base on max-flow \/ min-cut , where p and or are the number of mc and d-mc candid , respect . it is simpl , intuit and use no complex data structur . an exampl is given to show how all d-mc candid are found and verifi by the propos algorithm . then the reliabl of thi exampl is comput","ordered_present_kp":[22,64,90,192,278,681,700],"keyphrases":["d-MC problem","multi-state systems","multi-state components","d-Mincuts","flow networks","time bounds","max-flow\/min-cut","reliability computation","failure analysis algorithm"],"prmu":["P","P","P","P","P","P","P","R","M"]} {"id":"1122","title":"Hybrid broadcast for the video-on-demand service","abstract":"Multicast offers an efficient means of distributing video contents\/programs to multiple clients by batching their requests and then having them share a server's video stream. Batching customers' requests is either client-initiated or server-initiated. Most advanced client-initiated video multicasts are implemented by patching. Periodic broadcast, a typical server-initiated approach, can be entirety-based or segment-based. This paper focuses on the performance of the VoD service for popular videos. First, we analyze the limitation of conventional patching when the customer request rate is high. Then, by combining the advantages of each of the two broadcast schemes, we propose a hybrid broadcast scheme for popular videos, which not only lowers the service latency but also improves clients' interactivity by using an active buffering technique. 
This is shown to be a good compromise for both lowering service latency and improving the VCR-like interactivity","tok_text":"hybrid broadcast for the video-on-demand servic \n multicast offer an effici mean of distribut video content \/ program to multipl client by batch their request and then have them share a server 's video stream . batch custom ' request is either client-initi or server-initi . most advanc client-initi video multicast are implement by patch . period broadcast , a typic server-initi approach , can be entirety-bas or segment-bas . thi paper focus on the perform of the vod servic for popular video . first , we analyz the limit of convent patch when the custom request rate is high . then , by combin the advantag of each of the two broadcast scheme , we propos a hybrid broadcast scheme for popular video , which not onli lower the servic latenc but also improv client ' interact by use an activ buffer techniqu . thi is shown to be a good compromis for both lower servic latenc and improv the vcr-like interact","ordered_present_kp":[770,25,50,529,552,662],"keyphrases":["video-on-demand","multicast","conventional patching","customer request rate","hybrid broadcast scheme","interactivity","quality-of-service","scheduling"],"prmu":["P","P","P","P","P","P","U","U"]} {"id":"676","title":"Impossible choice [web hosting service provider]","abstract":"Selecting a telecoms and web hosting service provider has become a high-stakes game of chance","tok_text":"imposs choic [ web host servic provid ] \n select a telecom and web host servic provid ha becom a high-stak game of chanc","ordered_present_kp":[15,42],"keyphrases":["web hosting service provider","selection","IT managers","customer service"],"prmu":["P","P","U","M"]} {"id":"633","title":"Using k-nearest-neighbor classification in the leaves of a tree","abstract":"We construct a hybrid (composite) classifier by combining two classifiers in common use - classification trees and k-nearest-neighbor (k-NN). 
In our scheme we divide the feature space up by a classification tree, and then classify test set items using the k-NN rule just among those training items in the same leaf as the test item. This reduces somewhat the computational load associated with k-NN, and it produces a classification rule that performs better than either trees or the usual k-NN in a number of well-known data sets","tok_text":"use k-nearest-neighbor classif in the leav of a tree \n we construct a hybrid ( composit ) classifi by combin two classifi in common use - classif tree and k-nearest-neighbor ( k-nn ) . in our scheme we divid the featur space up by a classif tree , and then classifi test set item use the k-nn rule just among those train item in the same leaf as the test item . thi reduc somewhat the comput load associ with k-nn , and it produc a classif rule that perform better than either tree or the usual k-nn in a number of well-known data set","ordered_present_kp":[4,138,385,526,288],"keyphrases":["k-nearest-neighbor classification","classification trees","k-NN rule","computational load","data sets","tree leaves","hybrid composite classifier","feature space division"],"prmu":["P","P","P","P","P","R","R","M"]} {"id":"1266","title":"An intelligent information gathering method for dynamic information mediators","abstract":"The Internet is spreading into our society rapidly and is becoming one of the information infrastructures that are indispensable for our daily life. In particular, the WWW is widely used for various purposes such as sharing personal information, academic research, business work, and electronic commerce, and the amount of available information is increasing rapidly. We usually utilize information sources on the Internet as individual stand-alone sources, but if we can integrate them, we can add more value to each of them. Hence, information mediators, which integrate information distributed on the Internet, are drawing attention. 
In this paper, under the assumption that the information sources to be integrated are updated frequently and asynchronously, we propose an information gathering method that constructs an answer to a query from a user, accessing information sources to be integrated properly within an allowable time period. The proposed method considers the reliability of data in the cache and the quality of answer in order to efficiently access information sources and to provide appropriate answers to the user. As evaluation, we show the effectiveness of the proposed method by using an artificial information integration problem, in which some parameters can be modified, and a real-world flight information service compared with a conventional FIFO information gathering method","tok_text":"an intellig inform gather method for dynam inform mediat \n the internet is spread into our societi rapidli and is becom one of the inform infrastructur that are indispens for our daili life . in particular , the www is wide use for variou purpos such as share person inform , academ research , busi work , and electron commerc , and the amount of avail inform is increas rapidli . we usual util inform sourc on the internet as individu stand-alon sourc , but if we can integr them , we can add more valu to each of them . henc , inform mediat , which integr inform distribut on the internet , are draw attent . in thi paper , under the assumpt that the inform sourc to be integr are updat frequent and asynchron , we propos an inform gather method that construct an answer to a queri from a user , access inform sourc to be integr properli within an allow time period . the propos method consid the reliabl of data in the cach and the qualiti of answer in order to effici access inform sourc and to provid appropri answer to the user . 
as evalu , we show the effect of the propos method by use an artifici inform integr problem , in which some paramet can be modifi , and a real-world flight inform servic compar with a convent fifo inform gather method","ordered_present_kp":[3,37,63,131,212,276,294,310,1097,1174],"keyphrases":["intelligent information gathering method","dynamic information mediators","Internet","information infrastructures","WWW","academic research","business work","electronic commerce","artificial information integration problem","real-world flight information service"],"prmu":["P","P","P","P","P","P","P","P","P","P"]} {"id":"1223","title":"Formalising optimal feature weight setting in case based diagnosis as linear programming problems","abstract":"Many approaches to case based reasoning (CBR) exploit feature weight setting algorithms to reduce the sensitivity to distance functions. We demonstrate that optimal feature weight setting in a special kind of CBR problems can be formalised as linear programming problems. Therefore, the optimal weight settings can be calculated in polynomial time instead of searching in exponential weight space using heuristics to get sub-optimal settings. We also demonstrate that our approach can be used to solve classification problems","tok_text":"formalis optim featur weight set in case base diagnosi as linear program problem \n mani approach to case base reason ( cbr ) exploit featur weight set algorithm to reduc the sensit to distanc function . we demonstr that optim featur weight set in a special kind of cbr problem can be formalis as linear program problem . therefor , the optim weight set can be calcul in polynomi time instead of search in exponenti weight space use heurist to get sub-optim set . 
we also demonstr that our approach can be use to solv classif problem","ordered_present_kp":[9,36,58,100,184,370,395,405,517,432],"keyphrases":["optimal feature weight setting","case based diagnosis","linear programming","case based reasoning","distance functions","polynomial time","searching","exponential weight space","heuristics","classification"],"prmu":["P","P","P","P","P","P","P","P","P","P"]} {"id":"918","title":"Schema evolution in data warehouses","abstract":"We address the issues related to the evolution and maintenance of data warehousing systems, when underlying data sources change their schema capabilities. These changes can invalidate views at the data warehousing system. We present an approach for dynamically adapting views according to schema changes arising on source relations. This type of maintenance concerns both the schema and the data of the data warehouse. The main issue is to avoid the view recomputation from scratch especially when views are defined from multiple sources. The data of the data warehouse is used primarily in organizational decision-making and may be strategic. Therefore, the schema of the data warehouse can evolve for modeling new requirements resulting from analysis or data-mining processing. Our approach provides means to support schema evolution of the data warehouse independently of the data sources","tok_text":"schema evolut in data warehous \n we address the issu relat to the evolut and mainten of data wareh system , when underli data sourc chang their schema capabl . these chang can invalid view at the data wareh system . we present an approach for dynam adapt view accord to schema chang aris on sourc relat . thi type of mainten concern both the schema and the data of the data warehous . the main issu is to avoid the view recomput from scratch especi when view are defin from multipl sourc . the data of the data warehous is use primarili in organiz decision-mak and may be strateg . 
therefor , the schema of the data warehous can evolv for model new requir result from analysi or data-min process . our approach provid mean to support schema evolut of the data warehous independ of the data sourc","ordered_present_kp":[0,17,121,291,540],"keyphrases":["schema evolution","data warehouses","data sources","source relations","organizational decision-making","system maintenance","containment","structural view maintenance","view adaptation","SQL query","data analysis"],"prmu":["P","P","P","P","P","R","U","M","R","U","R"]} {"id":"840","title":"Gender benders [women in computing profession]","abstract":"As a minority in the upper levels of the computing profession, women are sometimes mistreated through ignorance or malice. Some women have learned to respond with wit and panache","tok_text":"gender bender [ women in comput profess ] \n as a minor in the upper level of the comput profess , women are sometim mistreat through ignor or malic . some women have learn to respond with wit and panach","ordered_present_kp":[25,16],"keyphrases":["women","computing profession"],"prmu":["P","P"]} {"id":"805","title":"Active pitch control in larger scale fixed speed horizontal axis wind turbine systems. I. linear controller design","abstract":"This paper reviews and addresses the principles of linear controller design of the fixed speed wind turbine system in above rated wind speed, using pitch angle control of the blades and applying modern control theory. First, the nonlinear equations of the system are built in under some reasonable suppositions. Then, the nonlinear equations are linearised at set operating point and digital simulation results are shown in this paper. Finally, a linear quadratic optimal feedback controller is designed and the dynamics of the closed circle system are simulated with digital calculation. The advantages and disadvantages of the assumptions and design method are also discussed. 
Because of the inherent characteristics of the linear system control theory, the performance of the linear controller is not sufficient for operating wind turbines, as is discussed","tok_text":"activ pitch control in larger scale fix speed horizont axi wind turbin system . i. linear control design \n thi paper review and address the principl of linear control design of the fix speed wind turbin system in abov rate wind speed , use pitch angl control of the blade and appli modern control theori . first , the nonlinear equat of the system are built in under some reason supposit . then , the nonlinear equat are linearis at set oper point and digit simul result are shown in thi paper . final , a linear quadrat optim feedback control is design and the dynam of the close circl system are simul with digit calcul . the advantag and disadvantag of the assumpt and design method are also discuss . becaus of the inher characterist of the linear system control theori , the perform of the linear control is not suffici for oper wind turbin , as is discuss","ordered_present_kp":[181,240,289,318,452,506,575,745,59,46,0,83],"keyphrases":["active pitch control","horizontal axis wind turbine systems","wind turbines","linear controller design","fixed speed wind turbine system","pitch angle control","control theory","nonlinear equations","digital simulation","linear quadratic optimal feedback controller","closed circle system","linear system control theory","aerodynamics","drive train dynamics"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","U","M"]} {"id":"1413","title":"Web content extraction. A WhizBang! approach","abstract":"The extraction technology that Whizbang uses consists of a unique approach to scouring the Web for current, very specific forms of information. FlipDog, for example, checks company Web sites for hyperlinks to pages that list job opportunities. It then crawls to the deeper page and, using the WhizBang! 
Extraction Framework, extracts the key elements of the postings, such as job title, name of employer, job category, and job function. Click on a job and you are transferred to the company Web site to view the job description as it appears there","tok_text":"web content extract . a whizbang ! approach \n the extract technolog that whizbang use consist of a uniqu approach to scour the web for current , veri specif form of inform . flipdog , for exampl , check compani web site for hyperlink to page that list job opportun . it then crawl to the deeper page and , use the whizbang ! extract framework , extract the key element of the post , such as job titl , name of employ , job categori , and job function . click on a job and you are transfer to the compani web site to view the job descript as it appear there","ordered_present_kp":[0,174,525,203,314],"keyphrases":["Web content extraction","FlipDog","company Web sites","WhizBang! Extraction Framework","job description","job-hunting site"],"prmu":["P","P","P","P","P","M"]} {"id":"1087","title":"Implementation of DIMSIMs for stiff differential systems","abstract":"Some issues related to the implementation of diagonally implicit multistage integration methods for stiff differential systems are discussed. They include reliable estimation of the local discretization error, construction of continuous interpolants, solution of nonlinear systems of equations by simplified Newton iterations, choice of initial stepsize and order, and step and order changing strategy. Numerical results are presented which indicate that an experimental Matlab code based on type 2 methods of order one, two and three outperforms ode15s code from Matlab ODE suite on problems whose Jacobian has eigenvalues which are close to the imaginary axis","tok_text":"implement of dimsim for stiff differenti system \n some issu relat to the implement of diagon implicit multistag integr method for stiff differenti system are discuss . 
they includ reliabl estim of the local discret error , construct of continu interpol , solut of nonlinear system of equat by simplifi newton iter , choic of initi stepsiz and order , and step and order chang strategi . numer result are present which indic that an experiment matlab code base on type 2 method of order one , two and three outperform ode15 code from matlab ode suit on problem whose jacobian ha eigenvalu which are close to the imaginari axi","ordered_present_kp":[13,24,180,201,244,264,293,432,86],"keyphrases":["DIMSIMs","stiff differential systems","diagonally implicit multistage integration methods","reliable estimation","local discretization error","interpolants","nonlinear systems of equations","simplified Newton iterations","experimental Matlab code"],"prmu":["P","P","P","P","P","P","P","P","P"]} {"id":"1456","title":"Look who's talking [voice recognition]","abstract":"Voice recognition could be the answer to the problem of financial fraud, but in the world of biometric technology, money talks","tok_text":"look who 's talk [ voic recognit ] \n voic recognit could be the answer to the problem of financi fraud , but in the world of biometr technolog , money talk","ordered_present_kp":[19,89,125],"keyphrases":["voice recognition","financial fraud","biometric","cost"],"prmu":["P","P","P","U"]} {"id":"796","title":"Quadratic Newton iteration for systems with multiplicity","abstract":"Newton's iterator is one of the most popular components of polynomial equation system solvers, either from the numeric or symbolic point of view. This iterator usually handles smooth situations only (when the Jacobian matrix associated to the system is invertible). This is often a restrictive factor. Generalizing Newton's iterator is still an open problem: How to design an efficient iterator with a quadratic convergence even in degenerate cases? 
We propose an answer for an m-adic topology when the ideal m can be chosen generic enough: compared to a smooth case we prove quadratic convergence with a small overhead that grows with the square of the multiplicity of the root","tok_text":"quadrat newton iter for system with multipl \n newton 's iter is one of the most popular compon of polynomi equat system solver , either from the numer or symbol point of view . thi iter usual handl smooth situat onli ( when the jacobian matrix associ to the system is invert ) . thi is often a restrict factor . gener newton 's iter is still an open problem : how to design an effici iter with a quadrat converg even in degener case ? we propos an answer for an m-adic topolog when the ideal m can be chosen gener enough : compar to a smooth case we prove quadrat converg with a small overhead that grow with the squar of the multipl of the root","ordered_present_kp":[0,24,46,98,228,396,462],"keyphrases":["quadratic Newton iteration","systems with multiplicity","Newton's iterator","polynomial equation system solvers","Jacobian matrix","quadratic convergence","m-adic topology"],"prmu":["P","P","P","P","P","P","P"]} {"id":"1386","title":"When the unexpected happens [disaster planning in banks]","abstract":"A business disruption can be as simple as a power failure or as complex as a terrorist attack. Regardless, you will need to have a plan to minimize interruptions to both your bank and your customers. Marketers have a role in this readiness process","tok_text":"when the unexpect happen [ disast plan in bank ] \n a busi disrupt can be as simpl as a power failur or as complex as a terrorist attack . regardless , you will need to have a plan to minim interrupt to both your bank and your custom . 
market have a role in thi readi process","ordered_present_kp":[27,42,34],"keyphrases":["disaster planning","planning","banks","recovery","public relations","emergency management"],"prmu":["P","P","P","U","U","U"]} {"id":"1002","title":"Selective representing and world-making","abstract":"We discuss the thesis of selective representing-the idea that the contents of the mental representations had by organisms are highly constrained by the biological niches within which the organisms evolved. While such a thesis has been defended by several authors elsewhere, our primary concern here is to take up the issue of the compatibility of selective representing and realism. We hope to show three things. First, that the notion of selective representing is fully consistent with the realist idea of a mind-independent world. Second, that not only are these two consistent, but that the latter (the realist conception of a mind-independent world) provides the most powerful perspective from which to motivate and understand the differing perceptual and cognitive profiles themselves. Third, that the (genuine and important) sense in which organism and environment may together constitute an integrated system of scientific interest poses no additional threat to the realist conception","tok_text":"select repres and world-mak \n we discuss the thesi of select representing-th idea that the content of the mental represent had by organ are highli constrain by the biolog nich within which the organ evolv . while such a thesi ha been defend by sever author elsewher , our primari concern here is to take up the issu of the compat of select repres and realism . we hope to show three thing . first , that the notion of select repres is fulli consist with the realist idea of a mind-independ world . 
second , that not onli are these two consist , but that the latter ( the realist concept of a mind-independ world ) provid the most power perspect from which to motiv and understand the differ perceptu and cognit profil themselv . third , that the ( genuin and import ) sens in which organ and environ may togeth constitut an integr system of scientif interest pose no addit threat to the realist concept","ordered_present_kp":[18,0,106,130,351,476,704],"keyphrases":["selective representing","world-making","mental representations","organisms","realism","mind-independent world","cognitive profiles"],"prmu":["P","P","P","P","P","P","P"]} {"id":"1047","title":"Dynamics and control of initialized fractional-order systems","abstract":"Due to the importance of historical effects in fractional-order systems, this paper presents a general fractional-order system and control theory that includes the time-varying initialization response. Previous studies have not properly accounted for these historical effects. The initialization response, along with the forced response, for fractional-order systems is determined. The scalar fractional-order impulse response is determined, and is a generalization of the exponential function. Stability properties of fractional-order systems are presented in the complex w-plane, which is a transformation of the s-plane. Time responses are discussed with respect to pole positions in the complex w-plane and frequency response behavior is included. A fractional-order vector space representation, which is a generalization of the state space concept, is presented including the initialization response. Control methods for vector representations of initialized fractional-order systems are shown. 
Finally, the fractional-order differintegral is generalized to continuous order-distributions which have the possibility of including all fractional orders in a transfer function","tok_text":"dynam and control of initi fractional-ord system \n due to the import of histor effect in fractional-ord system , thi paper present a gener fractional-ord system and control theori that includ the time-vari initi respons . previou studi have not properli account for these histor effect . the initi respons , along with the forc respons , for fractional-ord system is determin . the scalar fractional-ord impuls respons is determin , and is a gener of the exponenti function . stabil properti of fractional-ord system are present in the complex w-plane , which is a transform of the s-plane . time respons are discuss with respect to pole posit in the complex w-plane and frequenc respons behavior is includ . a fractional-ord vector space represent , which is a gener of the state space concept , is present includ the initi respons . control method for vector represent of initi fractional-ord system are shown . final , the fractional-ord differintegr is gener to continu order-distribut which have the possibl of includ all fraction order in a transfer function","ordered_present_kp":[21,0,10,206,323,404,455,726,775,926,1047],"keyphrases":["dynamics","control","initialized fractional-order systems","initialization response","forced response","impulse response","exponential function","vector space representation","state space concept","fractional-order differintegral","transfer function"],"prmu":["P","P","P","P","P","P","P","P","P","P","P"]} {"id":"880","title":"Computing 2002: democracy, education, and the future","abstract":"Computer scientists, computer engineers, information technologists, and their collective products have grown and changed in quantity, quality, and nature. 
In the first decade of this new century, it should become apparent to everyone that the computing and information fields, broadly defined, will have a profound impact on every element of every person's life. The author considers how women and girls of the world have been neither educated for computing nor served by computing. Globally, women's participation in computer science grew for a while, then dropped precipitously. Computing, science, engineering, and society will suffer if this decline continues, because women have different perspectives on technology, what it is important for, how it should be built, which projects should be funded, and so on. To create a positive future, to assure that women equally influence the future, computing education must change","tok_text":"comput 2002 : democraci , educ , and the futur \n comput scientist , comput engin , inform technologist , and their collect product have grown and chang in quantiti , qualiti , and natur . in the first decad of thi new centuri , it should becom appar to everyon that the comput and inform field , broadli defin , will have a profound impact on everi element of everi person 's life . the author consid how women and girl of the world have been neither educ for comput nor serv by comput . global , women 's particip in comput scienc grew for a while , then drop precipit . comput , scienc , engin , and societi will suffer if thi declin continu , becaus women have differ perspect on technolog , what it is import for , how it should be built , which project should be fund , and so on . 
to creat a posit futur , to assur that women equal influenc the futur , comput educ must chang","ordered_present_kp":[41,405,415,602,14],"keyphrases":["democracy","future","women","girls","society","computer science education","gender issues"],"prmu":["P","P","P","P","P","R","U"]} {"id":"1303","title":"Reply to Carreira-Perpinan and Goodhill [mathematics in biology]","abstract":"In a paper by Carreira-Perpinan and Goodhill (see ibid., vol.14, no.7, p.1545-60, 2002) the authors apply mathematical arguments to biology. Swindale et al. think it is inappropriate to apply the standards of proof required in mathematics to the acceptance or rejection of scientific hypotheses. To give some examples, showing that data are well described by a linear model does not rule out an infinity of other possible models that might give better descriptions of the data. Proving in a mathematical sense that the linear model was correct would require ruling out all other possible models, a hopeless task. Similarly, to demonstrate that two DNA samples come from the same individual, it is sufficient to show a match between only a few regions of the genome, even though there remains a very large number of additional comparisons that could be done, any one of which might potentially disprove the match. This is unacceptable in mathematics, but in the real world, it is a perfectly reasonable basis for belief","tok_text":"repli to carreira-perpinan and goodhil [ mathemat in biolog ] \n in a paper by carreira-perpinan and goodhil ( see ibid . , vol.14 , no.7 , p.1545 - 60 , 2002 ) the author appli mathemat argument to biolog . swindal et al . think it is inappropri to appli the standard of proof requir in mathemat to the accept or reject of scientif hypothes . to give some exampl , show that data are well describ by a linear model doe not rule out an infin of other possibl model that might give better descript of the data . 
prove in a mathemat sens that the linear model wa correct would requir rule out all other possibl model , a hopeless task . similarli , to demonstr that two dna sampl come from the same individu , it is suffici to show a match between onli a few region of the genom , even though there remain a veri larg number of addit comparison that could be done , ani one of which might potenti disprov the match . thi is unaccept in mathemat , but in the real world , it is a perfectli reason basi for belief","ordered_present_kp":[177,53,323,402,667,770],"keyphrases":["biology","mathematical arguments","scientific hypotheses","linear model","DNA","genome","hypothesis testing","cortical maps","neural nets"],"prmu":["P","P","P","P","P","P","U","U","U"]} {"id":"1346","title":"Automatic multilevel thresholding for image segmentation by the growing time adaptive self-organizing map","abstract":"In this paper, a Growing TASOM (Time Adaptive Self-Organizing Map) network called \"GTASOM\" along with a peak finding process is proposed for automatic multilevel thresholding. The proposed GTASOM is tested for image segmentation. Experimental results demonstrate that the GTASOM is a reliable and accurate tool for image segmentation and its results outperform other thresholding methods","tok_text":"automat multilevel threshold for imag segment by the grow time adapt self-organ map \n in thi paper , a grow tasom ( time adapt self-organ map ) network call \" gtasom \" along with a peak find process is propos for automat multilevel threshold . the propos gtasom is test for imag segment . 
experiment result demonstr that the gtasom is a reliabl and accur tool for imag segment and it result outperform other threshold method","ordered_present_kp":[0,33,53,103,159,181],"keyphrases":["automatic multilevel thresholding","image segmentation","growing time adaptive self-organizing map","Growing TASOM","GTASOM","peak finding process"],"prmu":["P","P","P","P","P","P"]} {"id":"713","title":"Efficient feasibility testing for dial-a-ride problems","abstract":"Dial-a-ride systems involve dispatching a vehicle to satisfy demands from a set of customers who call a vehicle-operating agency requesting that an item tie picked up from a specific location and delivered to a specific destination. Dial-a-ride problems differ from other routing and scheduling problems, in that they typically involve service-related constraints. It is common to have maximum wait time constraints and maximum ride time constraints. In the presence of maximum wait time and maximum ride time restrictions, it is not clear how to efficiently determine, given a sequence of pickups and deliveries, whether a feasible schedule exists. We demonstrate that this, in fact, can be done in linear time","tok_text":"effici feasibl test for dial-a-rid problem \n dial-a-rid system involv dispatch a vehicl to satisfi demand from a set of custom who call a vehicle-oper agenc request that an item tie pick up from a specif locat and deliv to a specif destin . dial-a-rid problem differ from other rout and schedul problem , in that they typic involv service-rel constraint . it is common to have maximum wait time constraint and maximum ride time constraint . in the presenc of maximum wait time and maximum ride time restrict , it is not clear how to effici determin , given a sequenc of pickup and deliveri , whether a feasibl schedul exist . 
we demonstr that thi , in fact , can be done in linear time","ordered_present_kp":[7,24,70,138,278,287,331,377,410],"keyphrases":["feasibility testing","dial-a-ride problems","dispatching","vehicle-operating agency","routing","scheduling","service-related constraints","maximum wait time constraints","maximum ride time constraints"],"prmu":["P","P","P","P","P","P","P","P","P"]} {"id":"756","title":"A new high resolution color flow system using an eigendecomposition-based adaptive filter for clutter rejection","abstract":"We present a new signal processing strategy for high frequency color flow mapping in moving tissue environments. A new application of an eigendecomposition-based clutter rejection filter is presented with modifications to deal with high blood-to-clutter ratios (BCR). Additionally, a new method for correcting blood velocity estimates with an estimated tissue motion profile is detailed. The performance of the clutter filter and velocity estimation strategies is quantified using a new swept-scan signal model. In vivo color flow images are presented to illustrate the potential of the system for mapping blood flow in the microcirculation with external tissue motion","tok_text":"a new high resolut color flow system use an eigendecomposition-bas adapt filter for clutter reject \n we present a new signal process strategi for high frequenc color flow map in move tissu environ . a new applic of an eigendecomposition-bas clutter reject filter is present with modif to deal with high blood-to-clutt ratio ( bcr ) . addit , a new method for correct blood veloc estim with an estim tissu motion profil is detail . the perform of the clutter filter and veloc estim strategi is quantifi use a new swept-scan signal model . 
in vivo color flow imag are present to illustr the potenti of the system for map blood flow in the microcircul with extern tissu motion","ordered_present_kp":[44,241,118,146,178,298,393,512,538,637],"keyphrases":["eigendecomposition-based adaptive filter","signal processing strategy","high frequency color flow mapping","moving tissue environments","clutter rejection filter","high blood-to-clutter ratios","estimated tissue motion profile","swept-scan signal model","in vivo color flow images","microcirculation","high resolution colour flow system","HF colour flow mapping","blood velocity estimates correction","blood flow mapping","echoes","clutter suppression performance"],"prmu":["P","P","P","P","P","P","P","P","P","P","M","M","R","R","U","M"]} {"id":"838","title":"Pool halls, chips, and war games: women in the culture of computing","abstract":"Computers are becoming ubiquitous in our society and they offer superb opportunities for people in jobs and everyday life. But there is a noticeable sex difference in use of computers among children. This article asks why computers are more attractive to boys than to girls and offers a cultural framework for explaining the apparent sex differences. Although the data are fragmentary, the world of computing seems to be more consistent with male adolescent culture than with feminine values and goals. Furthermore, both arcade and educational software is designed with boys in mind. These observations lead us to speculate that computing is neither inherently difficult nor uninteresting to girls, but rather that computer games and other software might have to be designed differently for girls. Programs to help teachers instill computer efficacy in all children also need to be developed","tok_text":"pool hall , chip , and war game : women in the cultur of comput \n comput are becom ubiquit in our societi and they offer superb opportun for peopl in job and everyday life . 
but there is a notic sex differ in use of comput among children . thi articl ask whi comput are more attract to boy than to girl and offer a cultur framework for explain the appar sex differ . although the data are fragmentari , the world of comput seem to be more consist with male adolesc cultur than with feminin valu and goal . furthermor , both arcad and educ softwar is design with boy in mind . these observ lead us to specul that comput is neither inher difficult nor uninterest to girl , but rather that comput game and other softwar might have to be design differ for girl . program to help teacher instil comput efficaci in all children also need to be develop","ordered_present_kp":[34,47,195,229,452,534,687,775],"keyphrases":["women","culture of computing","sex difference","children","male adolescent culture","educational software","computer games","teachers"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"71","title":"A study of computer attitudes of non-computing students of technical colleges in Brunei Darussalam","abstract":"The study surveyed 268 non-computing students among three technical colleges in Brunei Darussalam. The study validated an existing instrument to measure computer attitudes of non-computing students, and identified factors that contributed to the formation of their attitudes. The findings show that computer experience and educational qualification are associated with students' computer attitudes. In contrast, variables such as gender, age, ownership of a personal computer (PC), geographical location of institution, and prior computer training appeared to have no impact on computer attitudes","tok_text":"a studi of comput attitud of non-comput student of technic colleg in brunei darussalam \n the studi survey 268 non-comput student among three technic colleg in brunei darussalam . the studi valid an exist instrument to measur comput attitud of non-comput student , and identifi factor that contribut to the format of their attitud . 
the find show that comput experi and educ qualif are associ with student ' comput attitud . in contrast , variabl such as gender , age , ownership of a person comput ( pc ) , geograph locat of institut , and prior comput train appear to have no impact on comput attitud","ordered_present_kp":[11,99,351,369,454,463,546,51],"keyphrases":["computer attitudes","technical colleges","survey","computer experience","educational qualification","gender","age","computer training","noncomputing students","personal computer ownership","educational computing","end user computing"],"prmu":["P","P","P","P","P","P","P","P","M","R","R","M"]} {"id":"925","title":"A fundamental investigation into large strain recovery of one-way shape memory alloy wires embedded in flexible polyurethanes","abstract":"Shape memory alloys (SMAs) are being embedded in or externally attached to smart structures because of the large amount of actuation deformation and force that these materials are capable of producing when they are heated. Previous investigations have focused primarily on using single or opposing SMA wires exhibiting the two-way shape memory effect (SME) because of the simplicity with which the repeatable actuation behavior of the structure can be predicted. This repeatable actuation behavior is achieved at the expense of reduced levels of recoverable deformation. Alternatively, many potential smart structure applications will employ multiple SMA wires exhibiting a permanent one-way SME to simplify fabrication and increase the recoverable strains in the structure. To employ the one-way wires, it is necessary to investigate how they affect the recovery of large strains when they are embedded in a structure. In this investigation, the large strain recovery of a one-way SMA wire embedded in a flexible polyurethane is characterized using the novel deformation measurement technique known as digital image correlation. 
These results are compared with a simple actuation model and a three-dimensional finite element analysis of the structure using the Brinson model for describing the thermomechanical behavior of the SMA. Results indicate that the level of actuation strain in the structure is substantially reduced by the inelastic behavior of the one-way SMA wires, and there are significant differences between the deformations of the matrix material adjacent to the SMA wires and in the region surrounding it. The transformation behavior of the SMA wires was also determined to be volume preserving, which had a significant effect on the transverse strain fields","tok_text":"a fundament investig into larg strain recoveri of one-way shape memori alloy wire embed in flexibl polyurethan \n shape memori alloy ( sma ) are be embed in or extern attach to smart structur becaus of the larg amount of actuat deform and forc that these materi are capabl of produc when they are heat . previou investig have focus primarili on use singl or oppos sma wire exhibit the two-way shape memori effect ( sme ) becaus of the simplic with which the repeat actuat behavior of the structur can be predict . thi repeat actuat behavior is achiev at the expens of reduc level of recover deform . altern , mani potenti smart structur applic will employ multipl sma wire exhibit a perman one-way sme to simplifi fabric and increas the recover strain in the structur . to employ the one-way wire , it is necessari to investig how they affect the recoveri of larg strain when they are embed in a structur . in thi investig , the larg strain recoveri of a one-way sma wire embed in a flexibl polyurethan is character use the novel deform measur techniqu known as digit imag correl . these result are compar with a simpl actuat model and a three-dimension finit element analysi of the structur use the brinson model for describ the thermomechan behavior of the sma . 
result indic that the level of actuat strain in the structur is substanti reduc by the inelast behavior of the one-way sma wire , and there are signific differ between the deform of the matrix materi adjac to the sma wire and in the region surround it . the transform behavior of the sma wire wa also determin to be volum preserv , which had a signific effect on the transvers strain field","ordered_present_kp":[31,50,91,71,363,220,384,736,91,176,1137,1295,227,1450,1631],"keyphrases":["strain recovery","one-way shape memory","alloy wires","flexible polyurethanes","flexible polyurethanes","smart structures","actuation deformation","deformations","SMA wires","two-way shape memory effect","recoverable strains","three-dimensional finite element analysis","actuation strain","matrix material","transverse strain fields","flexible polyurethane","embedded sensor"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","M"]} {"id":"960","title":"Bisimulation minimization and symbolic model checking","abstract":"State space minimization techniques are crucial for combating state explosion. A variety of explicit-state verification tools use bisimulation minimization to check equivalence between systems, to minimize components before composition, or to reduce a state space prior to model checking. Experimental results on bisimulation minimization in symbolic model checking contexts, however, are mixed. We explore bisimulation minimization as an optimization in symbolic model checking of invariance properties. We consider three bisimulation minimization algorithms. From each, we produce a BDD-based model checker for invariant properties and compare this model checker to a conventional one based on backwards reachability. 
Our comparisons, both theoretical and experimental, suggest that bisimulation minimization is not viable in the context of invariance verification, because performing the minimization requires as many, if not more, computational resources as model checking the unminimized system through backwards reachability","tok_text":"bisimul minim and symbol model check \n state space minim techniqu are crucial for combat state explos . a varieti of explicit-st verif tool use bisimul minim to check equival between system , to minim compon befor composit , or to reduc a state space prior to model check . experiment result on bisimul minim in symbol model check context , howev , are mix . we explor bisimul minim as an optim in symbol model check of invari properti . we consid three bisimul minim algorithm . from each , we produc a bdd-base model checker for invari properti and compar thi model checker to a convent one base on backward reachabl . our comparison , both theoret and experiment , suggest that bisimul minim is not viabl in the context of invari verif , becaus perform the minim requir as mani , if not more , comput resourc as model check the unminim system through backward reachabl","ordered_present_kp":[0,18,39,89,117,274,389,420,601,726],"keyphrases":["bisimulation minimization","symbolic model checking","state space minimization techniques","state explosion","explicit-state verification tools","experimental results","optimization","invariance properties","backwards reachability","invariance verification","BDD","binary decision diagram"],"prmu":["P","P","P","P","P","P","P","P","P","P","U","U"]} {"id":"123","title":"A new identification approach for FIR models","abstract":"The identification of stochastic discrete systems disturbed with noise is discussed in this brief. The concept of general prediction error (GPE) criterion is introduced for the time-domain estimate with optimal frequency estimation (OFE) introduced for the frequency-domain estimate. 
The two estimation methods are combined to form a new identification algorithm, which is called the empirical frequency-domain optimal parameter (EFOP) estimate, for the finite impulse response (FIR) model interfered by noise. The algorithm theoretically provides the global optimum of the model frequency-domain estimate. Some simulation examples are given to illustrate the new identification method","tok_text":"a new identif approach for fir model \n the identif of stochast discret system disturb with nois is discuss in thi brief . the concept of gener predict error ( gpe ) criterion is introduc for the time-domain estim with optim frequenc estim ( ofe ) introduc for the frequency-domain estim . the two estim method are combin to form a new identif algorithm , which is call the empir frequency-domain optim paramet ( efop ) estim , for the finit impuls respons ( fir ) model interf by nois . the algorithm theoret provid the global optimum of the model frequency-domain estim . some simul exampl are given to illustr the new identif method","ordered_present_kp":[6,27,54,195,218,264],"keyphrases":["identification approach","FIR models","stochastic discrete systems","time-domain estimate","optimal frequency estimation","frequency-domain estimate","general prediction error criterion","empirical frequency-domain optimal parameter estimate"],"prmu":["P","P","P","P","P","P","R","R"]} {"id":"792","title":"Remember e-commerce? Yeah, well, it's still here","abstract":"Sandy Kemper, the always outspoken CEO of successful e-commerce company eScout, offers his views on the purported demise of \"commerce\" in e-commerce, and what opportunities lie ahead for those bankers bold enough to act in a market turned tentative by early excesses","tok_text":"rememb e-commerc ? 
yeah , well , it 's still here \n sandi kemper , the alway outspoken ceo of success e-commerc compani escout , offer hi view on the purport demis of \" commerc \" in e-commerc , and what opportun lie ahead for those banker bold enough to act in a market turn tent by earli excess","ordered_present_kp":[7,232,120],"keyphrases":["e-commerce","eScout","bankers"],"prmu":["P","P","P"]} {"id":"1382","title":"Loop restructuring for data I\/O minimization on limited on-chip memory embedded processors","abstract":"In this paper, we propose a framework for analyzing the flow of values and their reuse in loop nests to minimize data traffic under the constraints of limited on-chip memory capacity and dependences. Our analysis first undertakes fusion of possible loop nests intra-procedurally and then performs loop distribution. The analysis discovers the closeness factor of two statements which is a quantitative measure of data traffic saved per unit memory occupied if the statements were under the same loop nest over the case where they are under different loop nests. We then develop a greedy algorithm which traverses the program dependence graph to group statements together under the same loop nest legally to promote maximal reuse per unit of memory occupied. We implemented our framework in Petit, a tool for dependence analysis and loop transformations. We compared our method with one based on tiling of fused loop nest and one based on a greedy strategy to purely maximize reuse. We show that our methods work better than both of these strategies in most cases for processors such as TMS320Cxx, which have a very limited amount of on-chip memory. 
The improvements in data I\/O range from 10 to 30 percent over tiling and from 10 to 40 percent over maximal reuse for JPEG loops","tok_text":"loop restructur for data i \/ o minim on limit on-chip memori embed processor \n in thi paper , we propos a framework for analyz the flow of valu and their reus in loop nest to minim data traffic under the constraint of limit on-chip memori capac and depend . our analysi first undertak fusion of possibl loop nest intra-procedur and then perform loop distribut . the analysi discov the close factor of two statement which is a quantit measur of data traffic save per unit memori occupi if the statement were under the same loop nest over the case where they are under differ loop nest . we then develop a greedi algorithm which travers the program depend graph to group statement togeth under the same loop nest legal to promot maxim reus per unit of memori occupi . we implement our framework in petit , a tool for depend analysi and loop transform . we compar our method with one base on tile of fuse loop nest and one base on a greedi strategi to pure maxim reus . we show that our method work better than both of these strategi in most case for processor such as tms320cxx , which have a veri limit amount of on-chip memori . the improv in data i \/ o rang from 10 to 30 percent over tile and from 10 to 40 percent over maxim reus for jpeg loop","ordered_present_kp":[0,20,46,181,61,639,796,897,385],"keyphrases":["loop restructuring","data I\/O minimization","on-chip memory","embedded processors","data traffic","closeness factor","program dependence graph","Petit","fused loop nest","loop fusion","data locality","DSP"],"prmu":["P","P","P","P","P","P","P","P","P","R","M","U"]} {"id":"844","title":"Women in computing history","abstract":"Exciting inventions, innovative technology, human interaction, and intriguing politics fill computing history. 
However, the recorded history is mainly composed of male achievements and involvements, even though women have played substantial roles. This situation is not unusual. Most science fields are notorious for excluding, undervaluing, or overlooking the accomplishments of their female scientists. As Lee points out, it is up to the historians and others to remedy this imbalance. Steps have been taken towards this goal through publishing biographies on women in technology, and through honoring the pioneers with various awards such as the GHC'97 Pioneering Awards, the WITI Hall of Fame, and the AWC Lovelace Award. A few online sites contain biographies of women in technology. However, even with these resources, many women who have contributed significantly to computer science are still to be discovered","tok_text":"women in comput histori \n excit invent , innov technolog , human interact , and intrigu polit fill comput histori . howev , the record histori is mainli compos of male achiev and involv , even though women have play substanti role . thi situat is not unusu . most scienc field are notori for exclud , undervalu , or overlook the accomplish of their femal scientist . as lee point out , it is up to the historian and other to remedi thi imbal . step have been taken toward thi goal through publish biographi on women in technolog , and through honor the pioneer with variou award such as the ghc'97 pioneer award , the witi hall of fame , and the awc lovelac award . a few onlin site contain biographi of women in technolog . 
howev , even with these resourc , mani women who have contribut significantli to comput scienc are still to be discov","ordered_present_kp":[0,9],"keyphrases":["women","computing history"],"prmu":["P","P"]} {"id":"801","title":"International customers, suppliers, and document delivery in a fee-based information service","abstract":"The Purdue University Libraries library fee-based information service, the Technical Information Service (TIS), works with both international customers and international suppliers to meet its customers' needs for difficult and esoteric document requests. Successful completion of these orders requires the ability to verify fragmentary citations; ascertain documents' availability; obtain pricing information; calculate inclusive cost quotes; meet customers' deadlines; accept international payments; and ship across borders. While international orders make tip a small percent of the total workload, these challenging and rewarding orders meet customers' needs and offer continuous improvement opportunities to the staff","tok_text":"intern custom , supplier , and document deliveri in a fee-bas inform servic \n the purdu univers librari librari fee-bas inform servic , the technic inform servic ( ti ) , work with both intern custom and intern supplier to meet it custom ' need for difficult and esoter document request . success complet of these order requir the abil to verifi fragmentari citat ; ascertain document ' avail ; obtain price inform ; calcul inclus cost quot ; meet custom ' deadlin ; accept intern payment ; and ship across border . 
while intern order make tip a small percent of the total workload , these challeng and reward order meet custom ' need and offer continu improv opportun to the staff","ordered_present_kp":[140,204,0,270,31,402,424,474],"keyphrases":["international customers","document delivery","Technical Information Service","international suppliers","document requests","pricing information","inclusive cost quotes","international payments","Purdue University Libraries fee-based information service","fragmentary citation verification","document availability","customer deadline meeting","continuous staff improvement"],"prmu":["P","P","P","P","P","P","P","P","R","M","R","R","R"]} {"id":"1417","title":"Craigslist: virtual community maintains human touch","abstract":"If it works why change it? This might have been the thought on the minds of dot com executives back when Internet businesses were booming, and most of the Web content was free. Web sites were overflowing with advertisements of every kind and size. Now that dot com principals know better, Web ads are no longer the only path to revenue generation. Community portals, however, never seemed to have many ads to begin with, and their content stayed truer to who they served. Many of them started off as simple places for users to list announcements, local events, want ads, real estate, and mingle with other local users. The author saw the need for San Franciscans to have a place to do all of that for free, without any annoying advertising, and ended up offering much more to his community with the creation of craigslist. \"[Polling users] was a good way for us to connect with our members, this is the way to operate successfully in situations like these - your members come first.\"","tok_text":"craigslist : virtual commun maintain human touch \n if it work whi chang it ? thi might have been the thought on the mind of dot com execut back when internet busi were boom , and most of the web content wa free . 
web site were overflow with advertis of everi kind and size . now that dot com princip know better , web ad are no longer the onli path to revenu gener . commun portal , howev , never seem to have mani ad to begin with , and their content stay truer to who they serv . mani of them start off as simpl place for user to list announc , local event , want ad , real estat , and mingl with other local user . the author saw the need for san franciscan to have a place to do all of that for free , without ani annoy advertis , and end up offer much more to hi commun with the creation of craigslist . \" [ poll user ] wa a good way for us to connect with our member , thi is the way to oper success in situat like these - your member come first . \"","ordered_present_kp":[13,0,149,191,352,367,537,547,561,571],"keyphrases":["craigslist","virtual community","Internet businesses","Web content","revenue generation","community portals","announcements","local events","want ads","real estate","San Francisco Bay community"],"prmu":["P","P","P","P","P","P","P","P","P","P","M"]} {"id":"1083","title":"Differential algebraic systems anew","abstract":"It is proposed to figure out the leading term in differential algebraic systems more precisely. Low index linear systems with those properly stated leading terms are considered in detail. In particular, it is asked whether a numerical integration method applied to the original system reaches the inherent regular ODE without conservation, i.e., whether the discretization and the decoupling commute in some sense. In general one cannot expect this commutativity so that additional difficulties like strong stepsize restrictions may arise. Moreover, abstract differential algebraic equations in infinite-dimensional Hilbert spaces are introduced, and the index notion is generalized to those equations. 
In particular, partial differential algebraic equations are considered in this abstract formulation","tok_text":"differenti algebra system anew \n it is propos to figur out the lead term in differenti algebra system more precis . low index linear system with those properli state lead term are consid in detail . in particular , it is ask whether a numer integr method appli to the origin system reach the inher regular ode without conserv , i.e. , whether the discret and the decoupl commut in some sens . in gener one can not expect thi commut so that addit difficulti like strong stepsiz restrict may aris . moreov , abstract differenti algebra equat in infinite-dimension hilbert space are introduc , and the index notion is gener to those equat . in particular , partial differenti algebra equat are consid in thi abstract formul","ordered_present_kp":[0,116,235,292,371,469,506],"keyphrases":["differential algebraic systems","low index linear systems","numerical integration method","inherent regular ODE","commutativity","stepsize restrictions","abstract differential algebraic equations"],"prmu":["P","P","P","P","P","P","P"]} {"id":"1452","title":"Creating Web-based listings of electronic journals without creating extra work","abstract":"Creating up-to-date listings of electronic journals is challenging due to frequent changes in titles available and in URLs for electronic journal titles. However, many library users may want to browse Web pages which contain listings of electronic journals arranged by title and\/or academic disciplines. This case study examines the development of a system which automatically exports data from the online catalog and incorporates it into dynamically-generated Web sites. These sites provide multiple access points for journals, include Web-based interfaces enabling subject specialists to manage the list of titles which appears in their subject area. 
Because data are automatically extracted from the catalog, overlap in updating titles and URLs is avoided. Following the creation of this system, usage of electronic journals dramatically increased and feedback has been positive. Future challenges include developing more frequent updates and motivating subject specialists to more regularly monitor new titles","tok_text":"creat web-bas list of electron journal without creat extra work \n creat up-to-d list of electron journal is challeng due to frequent chang in titl avail and in url for electron journal titl . howev , mani librari user may want to brows web page which contain list of electron journal arrang by titl and\/or academ disciplin . thi case studi examin the develop of a system which automat export data from the onlin catalog and incorpor it into dynamically-gener web site . these site provid multipl access point for journal , includ web-bas interfac enabl subject specialist to manag the list of titl which appear in their subject area . becaus data are automat extract from the catalog , overlap in updat titl and url is avoid . follow the creation of thi system , usag of electron journal dramat increas and feedback ha been posit . futur challeng includ develop more frequent updat and motiv subject specialist to more regularli monitor new titl","ordered_present_kp":[6,22,160,205,236,329,406,459,807],"keyphrases":["Web-based listings","electronic journals","URL","library","Web pages","case study","online catalog","Web sites","feedback","technical services","public services partnerships"],"prmu":["P","P","P","P","P","P","P","P","P","U","U"]} {"id":"637","title":"A digital fountain approach to asynchronous reliable multicast","abstract":"The proliferation of applications that must reliably distribute large, rich content to a vast number of autonomous receivers motivates the design of new multicast and broadcast protocols. 
We describe an ideal, fully scalable protocol for these applications that we call a digital fountain. A digital fountain allows any number of heterogeneous receivers to acquire content with optimal efficiency at times of their choosing. Moreover, no feedback channels are needed to ensure reliable delivery, even in the face of high loss rates. We develop a protocol that closely approximates a digital fountain using two new classes of erasure codes that for large block sizes are orders of magnitude faster than standard erasure codes. We provide performance measurements that demonstrate the feasibility of our approach and discuss the design, implementation, and performance of an experimental system","tok_text":"a digit fountain approach to asynchron reliabl multicast \n the prolifer of applic that must reliabl distribut larg , rich content to a vast number of autonom receiv motiv the design of new multicast and broadcast protocol . we describ an ideal , fulli scalabl protocol for these applic that we call a digit fountain . a digit fountain allow ani number of heterogen receiv to acquir content with optim effici at time of their choos . moreov , no feedback channel are need to ensur reliabl deliveri , even in the face of high loss rate . we develop a protocol that close approxim a digit fountain use two new class of erasur code that for larg block size are order of magnitud faster than standard erasur code . 
we provid perform measur that demonstr the feasibl of our approach and discuss the design , implement , and perform of an experiment system","ordered_present_kp":[2,29,150,203,252,355,395,519,616,637,720],"keyphrases":["digital fountain","asynchronous reliable multicast","autonomous receivers","broadcast protocols","scalable protocol","heterogeneous receivers","optimal efficiency","high loss rates","erasure codes","large block size","performance measurements","multicast protocol","experimental system performance","Internet","FEC codes","forward error correction","RS codes","Tornado codes","Luby transform codes","bulk data distribution","IP multicast","simulation results","interoperability","content distribution methods","Reed-Solomon codes","decoder"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R","R","U","M","U","M","M","M","M","M","U","U","M","M","U"]} {"id":"1262","title":"The development and evaluation of SHOKE2000: the PCI-based FPGA card","abstract":"This paper describes a PCI-based FPGA card, SHOKE2000, which was developed in order to study reconfigurable computing. Since the latest field programmable gate arrays (FPGA) consist of input\/output (I\/O) configurable blocks as well as internal configurable logic blocks, they not only realize various user logic circuits but also connect with popular I\/O standards easily. These features enable FPGA to connect several devices with different interfaces, and thus new reconfigurable systems would be realizable by connecting the FPGA with devices such as digital signal processors (DSP) and analog devices. This paper describes the basic functions of SHOKE2000, which was developed for realizing hybrid reconfigurable systems consisting of FPGA, DSP, and analog devices. 
We also present application examples of SHOKE2000, including a simple image recognition application, a distributed shared memory computer cluster, and teaching materials for computer education","tok_text":"the develop and evalu of shoke2000 : the pci-bas fpga card \n thi paper describ a pci-bas fpga card , shoke2000 , which wa develop in order to studi reconfigur comput . sinc the latest field programm gate array ( fpga ) consist of input \/ output ( i \/ o ) configur block as well as intern configur logic block , they not onli realiz variou user logic circuit but also connect with popular i \/ o standard easili . these featur enabl fpga to connect sever devic with differ interfac , and thu new reconfigur system would be realiz by connect the fpga with devic such as digit signal processor ( dsp ) and analog devic . thi paper describ the basic function of shoke2000 , which wa develop for realiz hybrid reconfigur system consist of fpga , dsp , and analog devic . we also present applic exampl of shoke2000 , includ a simpl imag recognit applic , a distribut share memori comput cluster , and teach materi for comput educ","ordered_present_kp":[49,25,148,184,49,388,471,567,592,602,697,825,850,894,911,339],"keyphrases":["SHOKE2000","FPGA card","FPGA","reconfigurable computing","field programmable gate arrays","user logic circuits","I\/O standard","interfaces","digital signal processors","DSP","analog devices","hybrid reconfigurable systems","image recognition application","distributed shared memory computer cluster","teaching materials","computer education","PCI","intellectual property"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","U","U"]} {"id":"1227","title":"Will new Palms win laurels.?","abstract":"PalmSource's latest operating system for mobile devices harnesses the ARM architecture to support more powerful business software, but there are concerns over compatibility with older applications","tok_text":"will new palm win laurel . ? 
\n palmsourc 's latest oper system for mobil devic har the arm architectur to support more power busi softwar , but there are concern over compat with older applic","ordered_present_kp":[31,51,67,87,167],"keyphrases":["PalmSource","operating system","mobile devices","ARM architecture","compatibility","Palm OS 5.0"],"prmu":["P","P","P","P","P","M"]} {"id":"959","title":"Silicon debug of a PowerPC TM microprocessor using model checking","abstract":"When silicon is available, newly designed microprocessors are tested in specially equipped hardware laboratories, where real applications can be run at hardware speeds. However, the large volumes of code being run, plus the limited access to the internal nodes of the chip, make it very difficult to characterize the nature of any failures that occur. We describe how temporal logic model checking was used to quickly characterize a design error exhibited during hardware testing of a PowerPC microprocessor. We outline the conditions under which model checking can efficiently characterize such failures, and show how the particular error we detected could have been revealed early in the design cycle, by model checking a short and simple correctness specification. We discuss the implications of this for verification methodologies over the full design cycle","tok_text":"silicon debug of a powerpc tm microprocessor use model check \n when silicon is avail , newli design microprocessor are test in special equip hardwar laboratori , where real applic can be run at hardwar speed . howev , the larg volum of code be run , plu the limit access to the intern node of the chip , make it veri difficult to character the natur of ani failur that occur . we describ how tempor logic model check wa use to quickli character a design error exhibit dure hardwar test of a powerpc microprocessor . 
we outlin the condit under which model check can effici character such failur , and show how the particular error we detect could have been reveal earli in the design cycl , by model check a short and simpl correct specif . we discuss the implic of thi for verif methodolog over the full design cycl","ordered_present_kp":[491,49,392,473,723,773],"keyphrases":["model checking","temporal logic","hardware testing","PowerPC microprocessor","correctness specification","verification methodologies","circuit design error","Computation Tree Logic","circuit debugging"],"prmu":["P","P","P","P","P","P","M","M","M"]} {"id":"573","title":"ECG-gated \/sup 18\/F-FDG positron emission tomography. Single test evaluation of segmental metabolism, function and contractile reserve in patients with coronary artery disease and regional dysfunction","abstract":"\/sup 18\/F-fluorodeoxyglucose (\/sup 18\/F-FDG)-positron emission tomography (PET) provides information about myocardial glucose metabolism to diagnose myocardial viability. Additional information about the functional status is necessary. Comparison of tomographic metabolic PET with data from other imaging techniques is always hampered by some transfer uncertainty and scatter. We wanted to evaluate a new Fourier-based ECG-gated PET technique using a high resolution scanner providing both metabolic and functional data with respect to feasibility in patients with diseased left ventricles. Forty-five patients with coronary artery disease and at least one left ventricular segment with severe hypokinesis or akinesis at biplane cineventriculography were included. A new Fourier-based ECG-gated metabolic \/sup 18\/F-FDG-PET was performed in these patients. Function at rest and \/sup 18\/F-FDG uptake were examined in the PET study using a 36-segment model. Segmental comparison with ventriculography revealed a high reliability in identifying dysfunctional segments (>96%). 
\/sup 18\/F-FDG uptake of normokinetic\/hypokinetic\/akinetic segments was 75.4+or-7.5, 65.3+or-10.5, and 35.9+or-15.2% (p<0.001). In segments >or=70% \/sup 18\/F-FDG uptake no akinesia was observed. No residual function was found below 40% \/sup 18\/F-FDG uptake. An additional dobutamine test was performed and revealed inotropic reserve (viability) in 42 akinetic segments and 45 hypokinetic segments. ECG-gated metabolic PET with pixel-based Fourier smoothing provides reliable data on regional function. Assessment of metabolism and function makes complete judgement of segmental status feasible within a single study without any transfer artefacts or test-to-test variability. The results indicate the presence of considerable amounts of viable myocardium in regions with an uptake of 40-50% \/sup 18\/F-FDG","tok_text":"ecg-gat \/sup 18 \/ f-fdg positron emiss tomographi . singl test evalu of segment metabol , function and contractil reserv in patient with coronari arteri diseas and region dysfunct \n \/sup 18 \/ f-fluorodeoxyglucos ( \/sup 18 \/ f-fdg)-positron emiss tomographi ( pet ) provid inform about myocardi glucos metabol to diagnos myocardi viabil . addit inform about the function statu is necessari . comparison of tomograph metabol pet with data from other imag techniqu is alway hamper by some transfer uncertainti and scatter . we want to evalu a new fourier-bas ecg-gat pet techniqu use a high resolut scanner provid both metabol and function data with respect to feasibl in patient with diseas left ventricl . forty-f patient with coronari arteri diseas and at least one left ventricular segment with sever hypokinesi or akinesi at biplan cineventriculographi were includ . a new fourier-bas ecg-gat metabol \/sup 18 \/ f-fdg-pet wa perform in these patient . function at rest and \/sup 18 \/ f-fdg uptak were examin in the pet studi use a 36-segment model . segment comparison with ventriculographi reveal a high reliabl in identifi dysfunct segment ( > 96 % ) . 
\/sup 18 \/ f-fdg uptak of normokinet \/ hypokinet \/ akinet segment wa 75.4+or-7.5 , 65.3+or-10.5 , and 35.9+or-15.2 % ( p<0.001 ) . in segment > or=70 % \/sup 18 \/ f-fdg uptak no akinesia wa observ . no residu function wa found below 40 % \/sup 18 \/ f-fdg uptak . an addit dobutamin test wa perform and reveal inotrop reserv ( viabil ) in 42 akinet segment and 45 hypokinet segment . ecg-gat metabol pet with pixel-bas fourier smooth provid reliabl data on region function . assess of metabol and function make complet judgement of segment statu feasibl within a singl studi without ani transfer artefact or test-to-test variabl . the result indic the presenc of consider amount of viabl myocardium in region with an uptak of 40 - 50 % \/sup 18 \/ f-fdg","ordered_present_kp":[838,1125,1180,1355,1424,1461,1205,1515,1560,1608,1683,1738,1833,164,285,320,90,486,544,583,124,682,137,766,796,816,827],"keyphrases":["functional","patients","coronary artery disease","regional dysfunction","myocardial glucose metabolism","myocardial viability","transfer uncertainty","Fourier-based ECG-gated PET technique","high resolution scanner","diseased left ventricles","left ventricular segment","severe hypokinesis","akinesis","biplane cineventriculography","ventriculography","dysfunctional segments","normokinetic\/hypokinetic\/akinetic segments","akinetic segments","residual function","dobutamine test","inotropic reserve","hypokinetic segments","pixel-based Fourier smoothing","regional function","segmental status","transfer artefacts","viable myocardium","Fourier-based ECG-gated metabolic \/sup 18\/F-fluorodeoxyglucose-positron emission tomography","\/sup 18\/F-fluorodeoxyglucose uptake","thirty six-segment model"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","M","R","M"]} {"id":"1163","title":"Evaluating the complexity of index sets for families of general recursive functions in the arithmetic hierarchy","abstract":"The 
complexity of index sets of families of general recursive functions is evaluated in the Kleene-Mostowski arithmetic hierarchy","tok_text":"evalu the complex of index set for famili of gener recurs function in the arithmet hierarchi \n the complex of index set of famili of gener recurs function is evalu in the kleene-mostowski arithmet hierarchi","ordered_present_kp":[45,74,171],"keyphrases":["general recursive functions","arithmetic hierarchy","Kleene-Mostowski arithmetic hierarchy","index sets complexity"],"prmu":["P","P","P","R"]} {"id":"1126","title":"A note on an axiomatization of the core of market games","abstract":"As shown by Peleg (1993), the core of market games is characterized by nonemptiness, individual rationality, superadditivity, the weak reduced game property, the converse reduced game property, and weak symmetry. It was not known whether weak symmetry was logically independent. With the help of a certain transitive 4-person TU game, it is shown that weak symmetry is redundant in this result. Hence, the core on market games is axiomatized by the remaining five properties, if the universe of players contains at least four members","tok_text":"a note on an axiomat of the core of market game \n as shown by peleg ( 1993 ) , the core of market game is character by nonempti , individu ration , superaddit , the weak reduc game properti , the convers reduc game properti , and weak symmetri . it wa not known whether weak symmetri wa logic independ . with the help of a certain transit 4-person tu game , it is shown that weak symmetri is redund in thi result . 
henc , the core on market game is axiomat by the remain five properti , if the univers of player contain at least four member","ordered_present_kp":[130,165,196,230,331,392],"keyphrases":["individual rationality","weak reduced game property","converse reduced game property","weak symmetry","transitive 4-person TU game","redundant","market game core axiomatization","nonempty games","superadditive games"],"prmu":["P","P","P","P","P","P","R","R","R"]} {"id":"999","title":"The importance of continuity: a reply to Chris Eliasmith","abstract":"In his reply to Eliasmith (see ibid., vol.11, p.417-26, 2001) Poznanski considers how the notion of continuity of dynamic representations serves as a beacon for an integrative neuroscience to emerge. He considers how the importance of continuity has come under attack from Eliasmith (2001) who claims: (i) continuous nature of neurons is not relevant to the information they process, and (ii) continuity is not important for understanding cognition because the various sources of noise introduce uncertainty into spike arrival times, so encoding and decoding spike trains must be discrete at some level","tok_text":"the import of continu : a repli to chri eliasmith \n in hi repli to eliasmith ( see ibid . , vol.11 , p.417 - 26 , 2001 ) poznanski consid how the notion of continu of dynam represent serv as a beacon for an integr neurosci to emerg . 
he consid how the import of continu ha come under attack from eliasmith ( 2001 ) who claim : ( i ) continu natur of neuron is not relev to the inform they process , and ( ii ) continu is not import for understand cognit becaus the variou sourc of nois introduc uncertainti into spike arriv time , so encod and decod spike train must be discret at some level","ordered_present_kp":[14,167,207,350,447,495,512,550],"keyphrases":["continuity","dynamic representations","integrative neuroscience","neurons","cognition","uncertainty","spike arrival times","spike trains","cognitive systems","neural nets"],"prmu":["P","P","P","P","P","P","P","P","M","U"]} {"id":"88","title":"Planning linear construction projects: automated method for the generation of earthwork activities","abstract":"Earthworks planning for road construction projects is a complex operation and the planning rules used are usually intuitive and not well defined. An approach to automate the earthworks planning process is described and the basic techniques that are used are outlined. A computer-based system has been developed, initially to help planners use existing techniques more efficiently. With their input, the system has been extended to incorporate a knowledge base and a simulation of the earthworks processes. As well as creating activity sets in a much shorter time, the system has shown that for a real project, the model is able to generate activity sets that are comparable to those generated by a project planner","tok_text":"plan linear construct project : autom method for the gener of earthwork activ \n earthwork plan for road construct project is a complex oper and the plan rule use are usual intuit and not well defin . an approach to autom the earthwork plan process is describ and the basic techniqu that are use are outlin . a computer-bas system ha been develop , initi to help planner use exist techniqu more effici . 
with their input , the system ha been extend to incorpor a knowledg base and a simul of the earthwork process . as well as creat activ set in a much shorter time , the system ha shown that for a real project , the model is abl to gener activ set that are compar to those gener by a project planner","ordered_present_kp":[5,62,99,148,225,310,462],"keyphrases":["linear construction projects","earthwork activities","road construction projects","planning rules","earthworks planning process","computer-based system","knowledge base"],"prmu":["P","P","P","P","P","P","P"]} {"id":"75","title":"A portable Auto Attendant System with sophisticated dialog structure","abstract":"An attendant system connects the caller to the party he\/she wants to talk to. Traditional systems require the caller to know the full name of the party. If the caller forgets the name, the system fails to provide service for the caller. In this paper we propose a portable Auto Attendant System (AAS) with sophisticated dialog structure that gives a caller more flexibility while calling. The caller may interact with the system to request a phone number by providing just a work area, specialty, surname, or title, etc. If the party is absent, the system may provide extra information such as where he went, when he will be back, and what he is doing. The system is built modularly, with components such as speech recognizer, language model, dialog manager and text-to-speech that can be replaced if necessary. By simply changing the personnel record database, the system can easily be ported to other companies. The sophisticated dialog manager applies many strategies to allow natural interaction between user and system. Functions such as fuzzy request, user repairing, and extra information query, which are not provided by other systems, are integrated into our system. 
Experimental results and comparisons to other systems show that our approach provides a more user-friendly and natural interaction for the auto attendant system","tok_text":"a portabl auto attend system with sophist dialog structur \n an attend system connect the caller to the parti he \/ she want to talk to . tradit system requir the caller to know the full name of the parti . if the caller forget the name , the system fail to provid servic for the caller . in thi paper we propos a portabl auto attend system ( aa ) with sophist dialog structur that give a caller more flexibl while call . the caller may interact with the system to request a phone number by provid just a work area , specialti , surnam , or titl , etc . if the parti is absent , the system may provid extra inform such as where he went , when he will be back , and what he is do . the system is built modularli , with compon such as speech recogn , languag model , dialog manag and text-to-speech that can be replac if necessari . by simpli chang the personnel record databas , the system can easili be port to other compani . the sophist dialog manag appli mani strategi to allow natur interact between user and system . function such as fuzzi request , user repair , and extra inform queri , which are not provid by other system , are integr into our system .
experiment result and comparison to other system show that our approach provid a more user friendli and natur interact for auto attend system","ordered_present_kp":[15,10,1037,763,731],"keyphrases":["Auto Attendant System","attendant system","speech recognizer","dialog manager","fuzzy request","clear request","semantic frame","spoken dialog systems","telephone","telephone-based system"],"prmu":["P","P","P","P","P","M","U","M","U","M"]} {"id":"921","title":"Processing of complexly shaped multiply connected domains in finite element mesh generation","abstract":"Large number of finite element models in modern materials science and engineering is defined on complexly shaped domains, quite often multiply connected. Generation of quality finite element meshes on such domains, especially in cases when the mesh must be 100% quadrilateral, is highly problematic. This paper describes mathematical fundamentals and practical -implementation of a powerful method and algorithm allowing transformation of multiply connected domains of arbitrary geometrical complexity into a set of simple domains; the latter can then be processed by broadly available finite element mesh generators. The developed method was applied to a number of complex geometries, including those arising in analysis of parasitic inductances and capacitances in printed circuit boards. The quality of practical results produced by the method and its programming implementation provide evidence that the algorithm can be applied to other finite element models with various physical backgrounds","tok_text":"process of complexli shape multipli connect domain in finit element mesh gener \n larg number of finit element model in modern materi scienc and engin is defin on complexli shape domain , quit often multipli connect . gener of qualiti finit element mesh on such domain , especi in case when the mesh must be 100 % quadrilater , is highli problemat . 
thi paper describ mathemat fundament and practic -implement of a power method and algorithm allow transform of multipli connect domain of arbitrari geometr complex into a set of simpl domain ; the latter can then be process by broadli avail finit element mesh gener . the develop method wa appli to a number of complex geometri , includ those aris in analysi of parasit induct and capacit in print circuit board . the qualiti of practic result produc by the method and it program implement provid evid that the algorithm can be appli to other finit element model with variou physic background","ordered_present_kp":[54,11,96,520,711,741,821,487],"keyphrases":["complexly shaped multiply connected domains","finite element mesh generation","finite element models","arbitrary geometrical complexity","set of simple domains","parasitic inductances","printed circuit boards","programming implementation","quadrilateral mesh","domains transformation","parasitic capacitances","metal forming processes","structural engineering models","iterative basis","general domain subdivision algorithm","artificial cut","automatic step calculation"],"prmu":["P","P","P","P","P","P","P","P","R","R","R","M","M","U","M","U","U"]} {"id":"964","title":"Modeling group foraging: individual suboptimality, interference, and a kind of matching","abstract":"A series of agent-based models support the hypothesis that behaviors adapted to a group situation may be suboptimal (or \"irrational\") when expressed by an isolated individual. These models focus on two areas of current concern in behavioral ecology and experimental psychology: the \"interference function\" (which relates the intake rate of a focal forager to the density of conspecifics) and the \"matching law\" (which formalizes the observation that many animals match the frequency of their response to different stimuli in proportion to the reward obtained from each stimulus type). 
Each model employs genetic algorithms to evolve foraging behaviors for multiple agents in spatially explicit environments, structured at the level of situated perception and action. A second concern of the article is to extend the understanding of both matching and interference per se by modeling at this level","tok_text":"model group forag : individu suboptim , interfer , and a kind of match \n a seri of agent-bas model support the hypothesi that behavior adapt to a group situat may be suboptim ( or \" irrat \" ) when express by an isol individu . these model focu on two area of current concern in behavior ecolog and experiment psycholog : the \" interfer function \" ( which relat the intak rate of a focal forag to the densiti of conspecif ) and the \" match law \" ( which formal the observ that mani anim match the frequenc of their respons to differ stimuli in proport to the reward obtain from each stimulu type ) . each model employ genet algorithm to evolv forag behavior for multipl agent in spatial explicit environ , structur at the level of situat percept and action . 
a second concern of the articl is to extend the understand of both match and interfer per se by model at thi level","ordered_present_kp":[6,20,83,146,211,278,298,327,381,433,617,661,678,730],"keyphrases":["group foraging","individual suboptimality","agent-based models","group situation","isolated individual","behavioral ecology","experimental psychology","interference function","focal forager","matching law","genetic algorithms","multiple agents","spatially explicit environments","situated perception","suboptimal behavior","situated action"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R"]} {"id":"127","title":"Asymptotical stability in discrete-time neural networks","abstract":"In this work, we present a proof of the existence of a fixed point and a generalized sufficient condition that guarantees the stability of it in discrete-time neural networks by using the Lyapunov function method. We also show that for both symmetric and asymmetric connections, the unique attractor is a fixed point when several conditions are satisfied. This is an extended result of Chen and Aihara (see Physica D, vol. 104, no. 3\/4, p. 286-325, 1997). In particular, we further study the stability of equilibrium in discrete-time neural networks with the connection weight matrix in form of an interval matrix. Finally, several examples are shown to illustrate and reinforce our theory","tok_text":"asymptot stabil in discrete-tim neural network \n in thi work , we present a proof of the exist of a fix point and a gener suffici condit that guarante the stabil of it in discrete-tim neural network by use the lyapunov function method . we also show that for both symmetr and asymmetr connect , the uniqu attractor is a fix point when sever condit are satisfi . thi is an extend result of chen and aihara ( see physica d , vol . 104 , no . 3\/4 , p. 286 - 325 , 1997 ) . 
in particular , we further studi the stabil of equilibrium in discrete-tim neural network with the connect weight matrix in form of an interv matrix . final , sever exampl are shown to illustr and reinforc our theori","ordered_present_kp":[0,100,210,276,299,9,569,605,116,19],"keyphrases":["asymptotical stability","stability","discrete-time neural networks","fixed point","generalized sufficient condition","Lyapunov function method","asymmetric connections","unique attractor","connection weight matrix","interval matrix","symmetric connections","equilibrium stability"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","R"]} {"id":"1307","title":"Law librarians' survey: are academic law librarians in decline?","abstract":"The author reports on the results of one extra element in the BIALL\/SPTL survey, designed to acquire further information about academic law librarians. The survey has fulfilled the aim of providing a snapshot of the academic law library profession and has examined the concerns that have been raised. Perhaps most importantly, it has shown that more long-term work needs to be done to monitor the situation effectively. We hope that BIALL will take on this challenge and help to maintain the status of academic law librarians and aid them in their work","tok_text":"law librarian ' survey : are academ law librarian in declin ? \n the author report on the result of one extra element in the biall \/ sptl survey , design to acquir further inform about academ law librarian . the survey ha fulfil the aim of provid a snapshot of the academ law librari profess and ha examin the concern that have been rais . perhap most importantli , it ha shown that more long-term work need to be done to monitor the situat effect . 
we hope that biall will take on thi challeng and help to maintain the statu of academ law librarian and aid them in their work","ordered_present_kp":[16,124,29,29],"keyphrases":["survey","academic law library","academic law librarians","BIALL\/SPTL"],"prmu":["P","P","P","P"]} {"id":"1342","title":"Defending against flooding-based distributed denial-of-service attacks: a tutorial","abstract":"Flooding-based distributed denial-of-service (DDoS) attack presents a very serious threat to the stability of the Internet. In a typical DDoS attack, a large number of compromised hosts are amassed to send useless packets to jam a victim, or its Internet connection, or both. In the last two years, it was discovered that DDoS attack methods and tools are becoming more sophisticated, effective, and also more difficult to trace to the real attackers. On the defense side, current technologies are still unable to withstand large-scale attacks. The main purpose of this article is therefore twofold. The first one is to describe various DDoS attack methods, and to present a systematic review and evaluation of the existing defense mechanisms. The second is to discuss a longer-term solution, dubbed the Internet-firewall approach, that attempts to intercept attack packets in the Internet core, well before reaching the victim","tok_text":"defend against flooding-bas distribut denial-of-servic attack : a tutori \n flooding-bas distribut denial-of-servic ( ddo ) attack present a veri seriou threat to the stabil of the internet . in a typic ddo attack , a larg number of compromis host are amass to send useless packet to jam a victim , or it internet connect , or both . in the last two year , it wa discov that ddo attack method and tool are becom more sophist , effect , and also more difficult to trace to the real attack . on the defens side , current technolog are still unabl to withstand large-scal attack . the main purpos of thi articl is therefor twofold . 
the first one is to describ variou ddo attack method , and to present a systemat review and evalu of the exist defens mechan . the second is to discuss a longer-term solut , dub the internet-firewal approach , that attempt to intercept attack packet in the internet core , well befor reach the victim","ordered_present_kp":[15,66,374,557],"keyphrases":["flooding-based distributed denial-of-service attacks","tutorial","DDoS attack methods","large-scale attacks","Internet stability","DDoS attack tools","Internet firewall","attack packets interception","reflector attacks","distributed attack detection"],"prmu":["P","P","P","P","R","R","M","R","M","M"]} {"id":"717","title":"A network simplex algorithm with O(n) consecutive degenerate pivots","abstract":"We suggest a pivot rule for the primal simplex algorithm for the minimum cost flow problem, known as the network simplex algorithm. Due to degeneracy, cycling may occur in the network simplex algorithm. The cycling can be prevented by maintaining strongly feasible bases proposed by Cunningham (1976); however, if we do not impose any restrictions on the entering variables, the algorithm can still perform an exponentially long sequence of degenerate pivots. This phenomenon is known as stalling. Researchers have suggested several pivot rules with the following bounds on the number of consecutive degenerate pivots: m, n\/sup 2\/, k(k + 1)\/2, where n is the number of nodes in the network, m is the number of arcs in the network, and k is the number of degenerate arcs in the basis. (Observe that k 0 (a in A) where any arc-flow is bounded by a fixed proportion of the total flow value, where gamma (a)f(a) units arrive at the vertex w for each arc-flow f(a) (a identical to ( upsilon , w) in A) entering vertex upsilon in a generalized flow. Our main results are to propose two polynomial algorithms for this problem. 
The first algorithm runs in O(mM(n, m, B') log B) time, where B is the maximum absolute value among integral values used by an instance of the problem, and M(n, m, B') denotes the complexity of solving a generalized maximum flow problem in a network with n vertices, and m arcs, and a rational instance expressed with integers between 1 and B'. The second algorithm, using a parameterized technique, runs in O({M(n, m, B')}\/sup 2\/) time","tok_text":"two effici algorithm for the gener maximum balanc flow problem \n minoux ( 1976 ) consid the maximum balanc flow problem , i.e. the problem of find a maximum flow in a two-termin network n = ( v , a ) with sourc s and sink t satisfi the constraint that ani arc-flow of n is bound by a fix proport of the total flow valu from s to t , where v is vertex set and a is arc set . as a gener , we focu on the problem of maxim the total flow valu of a gener flow in n with gain gamma ( a ) > 0 ( a in a ) where ani arc-flow is bound by a fix proport of the total flow valu , where gamma ( a)f(a ) unit arriv at the vertex w for each arc-flow f(a ) ( a ident to ( upsilon , w ) in a ) enter vertex upsilon in a gener flow . our main result are to propos two polynomi algorithm for thi problem . the first algorithm run in o(mm(n , m , b ' ) log b ) time , where b is the maximum absolut valu among integr valu use by an instanc of the problem , and m(n , m , b ' ) denot the complex of solv a gener maximum flow problem in a network with n vertic , and m arc , and a ration instanc express with integ between 1 and b ' . in the second algorithm , use a parameter techniqu , run in o({m(n , m , b')}\/sup 2\/ ) time","ordered_present_kp":[29,167,749,1144],"keyphrases":["generalized maximum balanced flow problem","two-terminal network","polynomial algorithms","parameterized technique"],"prmu":["P","P","P","P"]} {"id":"827","title":"Williams nears end of Chapter 11 [telecom]","abstract":"Leucadia National Corp.
comes through with a $330 million boost for Williams Communications, which should keep the carrier afloat through the remainder of its bankruptcy","tok_text":"william near end of chapter 11 [ telecom ] \n leucadia nation corp. come through with a $ 330 million boost for william commun , which should keep the carrier afloat through the remaind of it bankruptci","ordered_present_kp":[111,191],"keyphrases":["Williams Communications","bankruptcy","Leucadia National Corp"],"prmu":["P","P","M"]} {"id":"1431","title":"Cataloguing to help law library users","abstract":"The author takes a broader view of the catalogue than is usual; we can include within it items that have locations other than the office\/library itself. This may well start with Internet resources, but can perfectly appropriately continue with standard works not held in the immediate collection but available in some other accessible collection, such as the local reference library. The essential feature is to include entries for the kind of material sought by users, with the addition of a location mark indicating where they can find it","tok_text":"catalogu to help law librari user \n the author take a broader view of the catalogu than is usual ; we can includ within it item that have locat other than the offic \/ librari itself . thi may well start with internet resourc , but can perfectli appropri continu with standard work not held in the immedi collect but avail in some other access collect , such as the local refer librari . 
the essenti featur is to includ entri for the kind of materi sought by user , with the addit of a locat mark indic where they can find it","ordered_present_kp":[17,0,208,371,485],"keyphrases":["cataloguing","law library users","Internet resources","reference library","location mark"],"prmu":["P","P","P","P","P"]} {"id":"654","title":"A question of perspective: assigning Library of Congress subject headings to classical literature and ancient history","abstract":"This article explains the concept of world view and shows how the world view of cataloguers influences the development and assignment of subject headings to works about other cultures and civilizations, using works from classical literature and ancient history as examples. Cataloguers are encouraged to evaluate the headings they assign to works in classical literature and ancient history in terms of the world views of Ancient Greece and Rome so that headings reflect the contents of the works they describe and give fuller expression to the diversity of thoughts and themes that characterize these ancient civilizations","tok_text":"a question of perspect : assign librari of congress subject head to classic literatur and ancient histori \n thi articl explain the concept of world view and show how the world view of catalogu influenc the develop and assign of subject head to work about other cultur and civil , use work from classic literatur and ancient histori as exampl . 
catalogu are encourag to evalu the head they assign to work in classic literatur and ancient histori in term of the world view of ancient greec and rome so that head reflect the content of the work they describ and give fuller express to the divers of thought and theme that character these ancient civil","ordered_present_kp":[142,261,272,68,90,474],"keyphrases":["classical literature","ancient history","world view","cultures","civilizations","Ancient Greece","Library of Congress subject heading assignment","Ancient Rome"],"prmu":["P","P","P","P","P","P","R","R"]} {"id":"611","title":"Intelligent optimal sieving method for FACTS device control in multi-machine systems","abstract":"A multi-target oriented optimal control strategy for FACTS devices installed in multi-machine power systems is presented in this paper, which is named the intelligent optimal sieving control (IOSC) method. This new method divides the FACTS device output region into several parts and selects one typical value from each part, which is called output candidate. Then, an intelligent optimal sieve is constructed, which predicts the impacts of each output candidate on a power system and sieves out an optimal output from all of the candidates. The artificial neural network technologies and fuzzy methods are applied to build the intelligent sieve. Finally, the real control signal of FACTS devices is calculated according to the selected optimal output through inverse system method. Simulation has been done on a three-machine power system and the results show that the proposed IOSC controller can effectively attenuate system oscillations and enhance the power system transient stability","tok_text":"intellig optim siev method for fact devic control in multi-machin system \n a multi-target orient optim control strategi for fact devic instal in multi-machin power system is present in thi paper , which is name the intellig optim siev control ( iosc ) method . 
thi new method divid the fact devic output region into sever part and select one typic valu from each part , which is call output candid . then , an intellig optim siev is construct , which predict the impact of each output candid on a power system and siev out an optim output from all of the candid . the artifici neural network technolog and fuzzi method are appli to build the intellig siev . final , the real control signal of fact devic is calcul accord to the select optim output through invers system method . simul ha been done on a three-machin power system and the result show that the propos iosc control can effect attenu system oscil and enhanc the power system transient stabil","ordered_present_kp":[31,0,31,53,77,0,568,606,675,728,756,803],"keyphrases":["intelligent optimal sieving method","intelligent optimal sieve","FACTS","FACTS device control","multi-machine systems","multi-target oriented optimal control strategy","artificial neural network technologies","fuzzy methods","control signal","selected optimal output","inverse system method","three-machine power system","intelligent control","system oscillations attenuation","power system transient stability enhancement"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R","R","R"]} {"id":"1244","title":"Applied ethics in business information units","abstract":"The primary thesis of this paper is that business information professionals commonly overlook ethical dilemmas in the workplace. Although the thesis remains unproven, the author highlights, by way of real and hypothetical case studies, a number of situations in which ethical tensions can be identified, and suggests that information professionals need to be more aware of the moral context of their actions. Resolving ethical dilemmas should be one of the aims of competent information professionals and their managers, although it is recognized that dilemmas often cannot easily be resolved. 
A background to the main theories of applied ethics forms the framework for later discussion","tok_text":"appli ethic in busi inform unit \n the primari thesi of thi paper is that busi inform profession commonli overlook ethic dilemma in the workplac . although the thesi remain unproven , the author highlight , by way of real and hypothet case studi , a number of situat in which ethic tension can be identifi , and suggest that inform profession need to be more awar of the moral context of their action . resolv ethic dilemma should be one of the aim of compet inform profession and their manag , although it is recogn that dilemma often can not easili be resolv . a background to the main theori of appli ethic form the framework for later discuss","ordered_present_kp":[73,114,370,0,15],"keyphrases":["applied ethics","business information units","business information professionals","ethical dilemmas","moral context"],"prmu":["P","P","P","P","P"]} {"id":"1201","title":"Moving into the mainstream [product lifecycle management]","abstract":"Product lifecycle management (PLM) is widely recognised by most manufacturing companies, as manufacturers begin to identify and implement targeted projects intended to deliver return-on investment in a timely fashion. Vendors are also releasing second-generation PLM products that are packaged, out-of-the-box solutions","tok_text":"move into the mainstream [ product lifecycl manag ] \n product lifecycl manag ( plm ) is wide recognis by most manufactur compani , as manufactur begin to identifi and implement target project intend to deliv return-on invest in a time fashion . 
vendor are also releas second-gener plm product that are packag , out-of-the-box solut","ordered_present_kp":[27,110],"keyphrases":["product lifecycle management","manufacturing companies","product data management","product development","enterprise resource planning"],"prmu":["P","P","M","M","U"]} {"id":"555","title":"Computing transient gating charge movement of voltage-dependent ion channels","abstract":"The opening of voltage-gated sodium, potassium, and calcium ion channels has a steep relationship with voltage. In response to changes in the transmembrane voltage, structural movements of an ion channel that precede channel opening generate a capacitative gating current. The net gating charge displacement due to membrane depolarization is an index of the voltage sensitivity of the ion channel activation process. Understanding the molecular basis of voltage-dependent gating of ion channels requires the measurement and computation of the gating charge, Q. We derive a simple and accurate semianalytic approach to computing the voltage dependence of transient gating charge movement (Q-V relationship) of discrete Markov state models of ion channels using matrix methods. This approach allows rapid computation of Q-V curves for finite and infinite length step depolarizations and is consistent with experimentally measured transient gating charge. This computational approach was applied to Shaker potassium channel gating, including the impact of inactivating particles on potassium channel gating currents","tok_text":"comput transient gate charg movement of voltage-depend ion channel \n the open of voltage-g sodium , potassium , and calcium ion channel ha a steep relationship with voltag . in respons to chang in the transmembran voltag , structur movement of an ion channel that preced channel open gener a capacit gate current . the net gate charg displac due to membran depolar is an index of the voltag sensit of the ion channel activ process . 
understand the molecular basi of voltage-depend gate of ion channel requir the measur and comput of the gate charg , q. we deriv a simpl and accur semianalyt approach to comput the voltag depend of transient gate charg movement ( q-v relationship ) of discret markov state model of ion channel use matrix method . thi approach allow rapid comput of q-v curv for finit and infinit length step depolar and is consist with experiment measur transient gate charg . thi comput approach wa appli to shaker potassium channel gate , includ the impact of inactiv particl on potassium channel gate current","ordered_present_kp":[7,201,300,55,22,979,693],"keyphrases":["transient gating charge movement","charge movement","ion channels","transmembrane voltage","gating current","Markov state model","inactivation","action potentials","immobilization"],"prmu":["P","P","P","P","P","P","P","U","U"]} {"id":"982","title":"Abundance of mosaic patterns for CNN with spatially variant templates","abstract":"This work investigates the complexity of one-dimensional cellular neural network mosaic patterns with spatially variant templates on finite and infinite lattices. Various boundary conditions are considered for finite lattices and the exact number of mosaic patterns is computed precisely. The entropy of mosaic patterns with periodic templates can also be calculated for infinite lattices. Furthermore, we show the abundance of mosaic patterns with respect to template periods and, which differ greatly from cases with spatially invariant templates","tok_text":"abund of mosaic pattern for cnn with spatial variant templat \n thi work investig the complex of one-dimension cellular neural network mosaic pattern with spatial variant templat on finit and infinit lattic . variou boundari condit are consid for finit lattic and the exact number of mosaic pattern is comput precis . the entropi of mosaic pattern with period templat can also be calcul for infinit lattic . 
furthermor , we show the abund of mosaic pattern with respect to templat period and , which differ greatli from case with spatial invari templat","ordered_present_kp":[9,28,37,96,191,193,215],"keyphrases":["mosaic patterns","CNN","spatially variant templates","one-dimensional cellular neural network","infinite lattices","finite lattices","boundary conditions","spatial entropy","transition matrix"],"prmu":["P","P","P","P","P","P","P","R","U"]} {"id":"1145","title":"Mammogram synthesis using a 3D simulation. II. Evaluation of synthetic mammogram texture","abstract":"We have evaluated a method for synthesizing mammograms by comparing the texture of clinical and synthetic mammograms. The synthesis algorithm is based upon simulations of breast tissue and the mammographic imaging process. Mammogram texture was synthesized by projections of simulated adipose tissue compartments. It was hypothesized that the synthetic and clinical texture have similar properties, assuming that the mammogram texture reflects the 3D tissue distribution. The size of the projected compartments was computed by mathematical morphology. The texture energy and fractal dimension were also computed and analyzed in terms of the distribution of texture features within four different tissue regions in clinical and synthetic mammograms. Comparison of the cumulative distributions of the mean features computed from 95 mammograms showed that the synthetic images simulate the mean features of the texture of clinical mammograms. Correlation of clinical and synthetic texture feature histograms, averaged over all images, showed that the synthetic images can simulate the range of features seen over a large group of mammograms. 
The best agreement with clinical texture was achieved for simulated compartments with radii of 4-13.3 mm in predominantly adipose tissue regions, and radii of 2.7-5.33 and 1.3-2.7 mm in retroareolar and dense fibroglandular tissue regions, respectively","tok_text":"mammogram synthesi use a 3d simul . ii . evalu of synthet mammogram textur \n we have evalu a method for synthes mammogram by compar the textur of clinic and synthet mammogram . the synthesi algorithm is base upon simul of breast tissu and the mammograph imag process . mammogram textur wa synthes by project of simul adipos tissu compart . it wa hypothes that the synthet and clinic textur have similar properti , assum that the mammogram textur reflect the 3d tissu distribut . the size of the project compart wa comput by mathemat morpholog . the textur energi and fractal dimens were also comput and analyz in term of the distribut of textur featur within four differ tissu region in clinic and synthet mammogram . comparison of the cumul distribut of the mean featur comput from 95 mammogram show that the synthet imag simul the mean featur of the textur of clinic mammogram . correl of clinic and synthet textur featur histogram , averag over all imag , show that the synthet imag can simul the rang of featur seen over a larg group of mammogram . 
the best agreement with clinic textur wa achiev for simul compart with radii of 4 - 13.3 mm in predominantli adipos tissu region , and radii of 2.7 - 5.33 and 1.3 - 2.7 mm in retroareolar and dens fibroglandular tissu region , respect","ordered_present_kp":[0,25,50,317,458,524,567,736,810,1245],"keyphrases":["mammogram synthesis","3D simulation","synthetic mammogram texture","adipose tissue compartments","3D tissue distribution","mathematical morphology","fractal dimension","cumulative distributions","synthetic images","dense fibroglandular tissue regions","breast tissue simulation","retroareolar tissue regions","X-ray image acquisition","computationally compressed phantom"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","R","M","M"]} {"id":"1100","title":"Evaluation of existing and new feature recognition algorithms. 2. Experimental results","abstract":"For pt.1 see ibid., p.839-851. This is the second of two papers investigating the performance of general-purpose feature detection techniques. The first paper describes the development of a methodology to synthesize possible general feature detection face sets. Six algorithms resulting from the synthesis have been designed and implemented on a SUN Workstation in C++ using ACIS as the geometric modelling system. In this paper, extensive tests and comparative analysis are conducted on the feature detection algorithms, using carefully selected components from the public domain, mostly from the National Design Repository. The results show that the new and enhanced algorithms identify face sets that previously published algorithms cannot detect. The tests also show that each algorithm can detect, among other types, a certain type of feature that is unique to it. Hence, most of the algorithms discussed in this paper would have to be combined to obtain complete coverage","tok_text":"evalu of exist and new featur recognit algorithm . 2 . experiment result \n for pt.1 see ibid . , p.839 - 851 . 
thi is the second of two paper investig the perform of general-purpos featur detect techniqu . the first paper describ the develop of a methodolog to synthes possibl gener featur detect face set . six algorithm result from the synthesi have been design and implement on a sun workstat in c++ use aci as the geometr model system . in thi paper , extens test and compar analysi are conduct on the featur detect algorithm , use care select compon from the public domain , mostli from the nation design repositori . the result show that the new and enhanc algorithm identifi face set that previous publish algorithm can not detect . the test also show that each algorithm can detect , among other type , a certain type of featur that is uniqu to it . henc , most of the algorithm discuss in thi paper would have to be combin to obtain complet coverag","ordered_present_kp":[23,166,596,297],"keyphrases":["feature recognition algorithms","general-purpose feature detection techniques","face sets","National Design Repository","convex hull","concavity"],"prmu":["P","P","P","P","U","U"]} {"id":"93","title":"Help-desk support is key to wireless success [finance]","abstract":"A well thought out help desk can make or break an institution's mobile play. Schwab, Ameritrade and RBC are taking their support function seriously","tok_text":"help-desk support is key to wireless success [ financ ] \n a well thought out help desk can make or break an institut 's mobil play . schwab , ameritrad and rbc are take their support function serious","ordered_present_kp":[47,77,133,142,156,28],"keyphrases":["wireless","finance","help desk","Schwab","Ameritrade","RBC"],"prmu":["P","P","P","P","P","P"]} {"id":"568","title":"Modeling cutting temperatures for turning inserts with various tool geometries and materials","abstract":"Temperatures are of interest in machining because cutting tools often fail by thermal softening or temperature-activated wear. 
Many models for cutting temperatures have been developed, but these models consider only simple tool geometries such as a rectangular slab with a sharp corner. This report describes a finite element study of tool temperatures in cutting that accounts for tool nose radius and included angle effects. A temperature correction factor model that can be used in the design and selection of inserts is developed to account for these effects. A parametric mesh generator is used to generate the finite element models of tool and inserts of varying geometries. The steady-state temperature response is calculated using NASTRAN solver. Several finite element analysis (FEA) runs are performed to quantify the effects of inserts included angle, nose radius, and materials for the insert and the tool holder on the cutting temperature at the insert rake face. The FEA results are then utilized to develop a temperature correction factor model that accounts for these effects. The temperature correction factor model is integrated with an analytical temperature model for rectangular inserts to predict cutting temperatures for contour turning with inserts of various shapes and nose radii. Finally, experimental measurements of cutting temperature using the tool-work thermocouple technique are performed and compared with the predictions of the new temperature model. The comparisons show good agreement","tok_text":"model cut temperatur for turn insert with variou tool geometri and materi \n temperatur are of interest in machin becaus cut tool often fail by thermal soften or temperature-activ wear . mani model for cut temperatur have been develop , but these model consid onli simpl tool geometri such as a rectangular slab with a sharp corner . thi report describ a finit element studi of tool temperatur in cut that account for tool nose radiu and includ angl effect . a temperatur correct factor model that can be use in the design and select of insert is develop to account for these effect . 
a parametr mesh gener is use to gener the finit element model of tool and insert of vari geometri . the steady-st temperatur respons is calcul use nastran solver . sever finit element analysi ( fea ) run are perform to quantifi the effect of insert includ angl , nose radiu , and materi for the insert and the tool holder on the cut temperatur at the insert rake face . the fea result are then util to develop a temperatur correct factor model that account for these effect . the temperatur correct factor model is integr with an analyt temperatur model for rectangular insert to predict cut temperatur for contour turn with insert of variou shape and nose radii . final , experiment measur of cut temperatur use the tool-work thermocoupl techniqu are perform and compar with the predict of the new temperatur model . the comparison show good agreement","ordered_present_kp":[25,106,417,586,626,460,49],"keyphrases":["turning inserts","tool geometries","machining","tool nose radius","temperature correction factor","parametric mesh generator","finite element models","cutting temperature model","insert shape effects"],"prmu":["P","P","P","P","P","P","P","R","R"]} {"id":"1178","title":"Network-centric systems","abstract":"The author describes a graduate-level course that addresses cutting-edge issues in network-centric systems while following a more traditional graduate seminar format","tok_text":"network-centr system \n the author describ a graduate-level cours that address cutting-edg issu in network-centr system while follow a more tradit graduat seminar format","ordered_present_kp":[0],"keyphrases":["network-centric systems","graduate level course"],"prmu":["P","M"]} {"id":"1284","title":"A linear time special case for MC games","abstract":"MC games are infinite duration two-player games played on graphs. Deciding the winner in MC games is equivalent to the the modal mu-calculus model checking. In this article we provide a linear time algorithm for a class of MC games. 
We show that, if all cycles in each strongly connected component of the game graph have at least one common vertex, the winner can be found in linear time. Our results hold also for parity games, which are equivalent to MC games","tok_text":"a linear time special case for mc game \n mc game are infinit durat two-play game play on graph . decid the winner in mc game is equival to the the modal mu-calculu model check . in thi articl we provid a linear time algorithm for a class of mc game . we show that , if all cycl in each strongli connect compon of the game graph have at least one common vertex , the winner can be found in linear time . our result hold also for pariti game , which are equival to mc game","ordered_present_kp":[2,31,67,147,204],"keyphrases":["linear time special case","MC games","two-player games","modal mu-calculus model checking","linear time algorithm"],"prmu":["P","P","P","P","P"]} {"id":"694","title":"A novel genetic algorithm for the design of a signed power-of-two coefficient quadrature mirror filter lattice filter bank","abstract":"A novel genetic algorithm (GA) for the design of a canonical signed power-of-two (SPT) coefficient lattice structure quadrature mirror filter bank is presented. Genetic operations may render the SPT representation of a value noncanonical. A new encoding scheme is introduced to encode the SPT values. In this new scheme, the canonical property of the SPT values is preserved under genetic operations. Additionally, two new features that drastically improve the performance of our GA are introduced. (1) An additional level of natural selection is introduced to simulate the effect of natural selection when sperm cells compete to fertilize an ovule; this dramatically improves the offspring survival rate. A conventional GA is analogous to intracytoplasmic sperm injection and has an extremely low offspring survival rate, resulting in very slow convergence. 
(2) The probability of mutation for each codon of a chromosome is weighted by the reciprocal of its effect. Because of these new features, the performance of our new GA outperforms conventional GAs","tok_text":"a novel genet algorithm for the design of a sign power-of-two coeffici quadratur mirror filter lattic filter bank \n a novel genet algorithm ( ga ) for the design of a canon sign power-of-two ( spt ) coeffici lattic structur quadratur mirror filter bank is present . genet oper may render the spt represent of a valu noncanon . a new encod scheme is introduc to encod the spt valu . in thi new scheme , the canon properti of the spt valu is preserv under genet oper . addit , two new featur that drastic improv the perform of our ga are introduc . ( 1 ) an addit level of natur select is introduc to simul the effect of natur select when sperm cell compet to fertil an ovul ; thi dramat improv the offspr surviv rate . a convent ga is analog to intracytoplasm sperm inject and ha an extrem low offspr surviv rate , result in veri slow converg . ( 2 ) the probabl of mutat for each codon of a chromosom is weight by the reciproc of it effect . becaus of these new featur , the perform of our new ga outperform convent ga","ordered_present_kp":[8,71,95,333,571,697],"keyphrases":["genetic algorithm","quadrature mirror filter","lattice filter bank","encoding scheme","natural selection","offspring survival rate","signed power-of-two coefficient lattice structure","QMF","chromosome codon","signal processing","perfect reconstruction"],"prmu":["P","P","P","P","P","P","R","U","R","U","U"]} {"id":"1279","title":"Place\/Transition Petri net evolutions: recording ways, analysis and synthesis","abstract":"Four semantic domains for Place\/Transition Petri nets and their relationships are considered. They are monoids of respectively: firing sequences, processes, traces and dependence graphs. For each of them the analysis and synthesis problem is stated and solved. 
The monoid of processes is defined in a non-standard way. Nets under consideration involve weights of arrows and capacities (finite or infinite) of places. However, the analysis and synthesis tasks require nets to be pure, i.e. each of their transitions must have the pre-set and post-set disjoint","tok_text":"place \/ transit petri net evolut : record way , analysi and synthesi \n four semant domain for place \/ transit petri net and their relationship are consid . they are monoid of respect : fire sequenc , process , trace and depend graph . for each of them the analysi and synthesi problem is state and solv . the monoid of process is defin in a non-standard way , net under consider involv weight of arrow and capac ( finit or infinit ) of place . howev , the analysi and synthesi task requir net to be pure , i.e. each of their transit must have the pre-set and post-set disjoint","ordered_present_kp":[0,76,165,185,220,559],"keyphrases":["place\/transition Petri net evolutions","semantic domains","monoids","firing sequences","dependence graphs","post-set disjoint","pre-set disjoint"],"prmu":["P","P","P","P","P","P","R"]} {"id":"1185","title":"Trading exchanges: online marketplaces evolve","abstract":"Looks at how trading exchanges are evolving rapidly to help manufacturers keep up with customer demand","tok_text":"trade exchang : onlin marketplac evolv \n look at how trade exchang are evolv rapidli to help manufactur keep up with custom demand","ordered_present_kp":[16,0,93,117],"keyphrases":["trading exchanges","online marketplaces","manufacturers","customer demand","enterprise platforms","supply chain management","enterprise resource planning","core software platform","private exchanges","integration technology","middleware","XML standards","content management capabilities"],"prmu":["P","P","P","P","U","U","U","U","M","U","U","U","U"]} {"id":"907","title":"Development of an integrated and open-architecture precision motion control system","abstract":"In this paper, 
the development of an integrated and open-architecture precision motion control system is presented. The control system is generally applicable, but it is developed with a particular focus on direct drive servo systems based on linear motors. The overall control system is comprehensive, comprising of various selected control and instrumentation components, integrated within a configuration of hardware architecture centred around a dSPACE DS1004 DSP processor board. These components include a precision composite controller (comprising of feedforward and feedback control), a disturbance observer, an adaptive notch filter, and a geometrical error compensator. The hardware architecture, software development platform, user interface, and all constituent control components are described","tok_text":"develop of an integr and open-architectur precis motion control system \n in thi paper , the develop of an integr and open-architectur precis motion control system is present . the control system is gener applic , but it is develop with a particular focu on direct drive servo system base on linear motor . the overal control system is comprehens , compris of variou select control and instrument compon , integr within a configur of hardwar architectur centr around a dspace ds1004 dsp processor board . these compon includ a precis composit control ( compris of feedforward and feedback control ) , a disturb observ , an adapt notch filter , and a geometr error compens . 
the hardwar architectur , softwar develop platform , user interfac , and all constitu control compon are describ","ordered_present_kp":[49,257,291,533,563,579,622,25,42,649],"keyphrases":["open-architecture","precision","motion control","direct drive servo systems","linear motors","composite controller","feedforward","feedback","adaptive notch filter","geometrical error compensation","dSPACE DS1004 processor"],"prmu":["P","P","P","P","P","P","P","P","P","P","R"]} {"id":"144","title":"Development of a 3.5 inch magneto-optical disk with a capacity of 2.3 GB","abstract":"The recording capacity of GIGAMO media was enlarged from 1.3 GB to 2.3 GB for 3.5 inch magneto-optical (MO) disks while maintaining downward compatibility. For the new GIGAMO technology, a land and groove recording method was applied in addition to magnetically induced super resolution (MSR) media. Furthermore, a novel address format suitable for the land and groove recording method was adopted. The specifications of the new GIGAMO media were examined to satisfy requirements for practical use with respect to margins. Durability of more than 10\/sup 6\/ rewritings and a sufficient lifetime were confirmed","tok_text":"develop of a 3.5 inch magneto-opt disk with a capac of 2.3 gb \n the record capac of gigamo media wa enlarg from 1.3 gb to 2.3 gb for 3.5 inch magneto-opt ( mo ) disk while maintain downward compat . for the new gigamo technolog , a land and groov record method wa appli in addit to magnet induc super resolut ( msr ) media . furthermor , a novel address format suitabl for the land and groov record method wa adopt . the specif of the new gigamo media were examin to satisfi requir for practic use with respect to margin . 
durabl of more than 10 \/ sup 6\/ rewrit and an enough lifetim were confirm","ordered_present_kp":[22,68,84,282,311,346,576,13,55],"keyphrases":["3.5 inch","magneto-optical disk","2.3 GB","recording capacity","GIGAMO media","magnetically induced super resolution","MSR","address format","lifetime","MO disks","land-groove recording method","rewriting durability","crosstalk","SiN-GdFeCo-GdFe-TbFeCo-SiN-Al"],"prmu":["P","P","P","P","P","P","P","P","P","R","M","R","U","U"]} {"id":"595","title":"Six common enterprise programming mistakes","abstract":"Instead of giving you tips to use in your programming (at least directly), I want to look at some common mistakes made in enterprise programming. Instead of focusing on what to do, I want to look at what you should not do. Most programmers take books like mine and add in the good things, but they leave their mistakes in the very same programs! So I touch on several common errors I see in enterprise programming, and then briefly mention how to avoid those mistakes","tok_text":"six common enterpris program mistak \n instead of give you tip to use in your program ( at least directli ) , i want to look at some common mistak made in enterpris program . instead of focus on what to do , i want to look at what you should not do . most programm take book like mine and add in the good thing , but they leav their mistak in the veri same program ! 
so i touch on sever common error i see in enterpris program , and then briefli mention how to avoid those mistak","ordered_present_kp":[11,386],"keyphrases":["enterprise programming mistakes","common errors","data store","database","XML","Enterprise JavaBeans","vendor-specific programming"],"prmu":["P","P","U","U","U","M","M"]} {"id":"942","title":"Micro-optical realization of arrays of selectively addressable dipole traps: a scalable configuration for quantum computation with atomic qubits","abstract":"We experimentally demonstrate novel structures for the realization of registers of atomic qubits: We trap neutral atoms in one- and two-dimensional arrays of far-detuned dipole traps obtained by focusing a red-detuned laser beam with a microfabricated array of microlenses. We are able to selectively address individual trap sites due to their large lateral separation of 125 mu m. We initialize and read out different internal states for the individual sites. We also create two interleaved sets of trap arrays with adjustable separation, as required for many proposed implementations of quantum gate operations","tok_text":"micro-opt realiz of array of select address dipol trap : a scalabl configur for quantum comput with atom qubit \n we experiment demonstr novel structur for the realiz of regist of atom qubit : we trap neutral atom in one- and two-dimension array of far-detun dipol trap obtain by focus a red-detun laser beam with a microfabr array of microlens . we are abl to select address individu trap site due to their larg later separ of 125 mu m. we initi and read out differ intern state for the individu site . 
we also creat two interleav set of trap array with adjust separ , as requir for mani propos implement of quantum gate oper","ordered_present_kp":[100,169,200,248,287,315,334,466,608,80,59],"keyphrases":["scalable configuration","quantum computation","atomic qubits","registers","neutral atoms","far-detuned dipole traps","red-detuned laser beam","microfabricated array","microlenses","internal states","quantum gate operations"],"prmu":["P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1364","title":"An adaptive sphere-fitting method for sequential tolerance control","abstract":"The machining of complex parts typically involves a logical and chronological sequence of n operations on m machine tools. Because manufacturing datums cannot always match design constraints, some of the design specifications imposed on the part are usually satisfied by distinct subsets of the n operations prescribed in the process plan. Conventional tolerance control specifies a fixed set point for each operation and a permissible variation about this set point to insure compliance with the specifications, whereas sequential tolerance control (STC) uses real-time measurement information at the completion of one stage to reposition the set point for subsequent operations. However, it has been shown that earlier sphere-fitting methods for STC can lead to inferior solutions when the process distributions are skewed. This paper introduces an extension of STC that uses an adaptive sphere-fitting method that significantly improves the yield in the presence of skewed distributions as well as significantly reducing the computational effort required by earlier probabilistic search methods","tok_text":"an adapt sphere-fit method for sequenti toler control \n the machin of complex part typic involv a logic and chronolog sequenc of n oper on m machin tool . 
becaus manufactur datum can not alway match design constraint , some of the design specif impos on the part are usual satisfi by distinct subset of the n oper prescrib in the process plan . convent toler control specifi a fix set point for each oper and a permiss variat about thi set point to insur complianc with the specif , wherea sequenti toler control ( stc ) use real-tim measur inform at the complet of one stage to reposit the set point for subsequ oper . howev , it ha been shown that earlier sphere-fit method for stc can lead to inferior solut when the process distribut are skew . thi paper introduc an extens of stc that use an adapt sphere-fit method that significantli improv the yield in the presenc of skew distribut as well as significantli reduc the comput effort requir by earlier probabilist search method","ordered_present_kp":[141,31,3,199,455,525,875,925],"keyphrases":["adaptive sphere-fitting method","sequential tolerance control","machine tools","design constraints","compliance","real-time measurement information","skewed distributions","computational effort","yield improvement"],"prmu":["P","P","P","P","P","P","P","P","R"]} {"id":"731","title":"Aggregate bandwidth estimation in stored video distribution systems","abstract":"Multimedia applications like video on demand, distance learning, Internet video broadcast, etc. will play a fundamental role in future broadband networks. A common aspect of such applications is the transmission of video streams that require a sustained relatively high bandwidth with stringent requirements of quality of service. In this paper various original algorithms for evaluating, in a video distribution system, a statistical estimation of aggregate bandwidth needed by a given number of smoothed video streams are proposed and discussed. The variable bit rate traffic generated by each video stream is characterized by its marginal distribution and by conditional probabilities between rates of temporary closed streams. 
The developed iterative algorithms evaluate an upper and lower bound of needed bandwidth for guaranteeing a given loss probability. The obtained results are compared with simulations and with other results, based on similar assumptions, already presented in the literature. Some considerations on the developed algorithms are made, in order to evaluate the effectiveness of the proposed methods","tok_text":"aggreg bandwidth estim in store video distribut system \n multimedia applic like video on demand , distanc learn , internet video broadcast , etc . will play a fundament role in futur broadband network . a common aspect of such applic is the transmiss of video stream that requir a sustain rel high bandwidth with stringent requir of qualiti of servic . in thi paper variou origin algorithm for evalu , in a video distribut system , a statist estim of aggreg bandwidth need by a given number of smooth video stream are propos and discuss . the variabl bit rate traffic gener by each video stream is character by it margin distribut and by condit probabl between rate of temporari close stream . the develop iter algorithm evalu an upper and lower bound of need bandwidth for guarante a given loss probabl . the obtain result are compar with simul and with other result , base on similar assumpt , alreadi present in the literatur . 
some consider on the develop algorithm are made , in order to evalu the effect of the propos method","ordered_present_kp":[0,26,57,80,98,114,183,333,434,543,614,638,669,706,740,791,840],"keyphrases":["aggregate bandwidth estimation","stored video distribution systems","multimedia applications","video on demand","distance learning","Internet video broadcast","broadband networks","quality of service","statistical estimation","variable bit rate traffic","marginal distribution","conditional probabilities","temporary closed streams","iterative algorithms","lower bound","loss probability","simulations","video streams transmission","upper bound","VoD","video coding","QoS"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","U","M","U"]} {"id":"774","title":"Keeping Web accessibility in mind: I&R services for all","abstract":"After presenting three compelling reasons for making Web sites accessible to persons with a broad range of disabilities (it's the morally right thing to do, it's the smart thing to do from an economic perspective, and it's required by law), the author discusses design issues that impact persons with particular types of disabilities. She presents practical advice for assessing and addressing accessibility problems. An extensive list of resources for further information is appended, as is a list of sites which simulate the impact of specific accessibility problems on persons with disabilities","tok_text":"keep web access in mind : i&r servic for all \n after present three compel reason for make web site access to person with a broad rang of disabl ( it 's the moral right thing to do , it 's the smart thing to do from an econom perspect , and it 's requir by law ) , the author discuss design issu that impact person with particular type of disabl . she present practic advic for assess and address access problem . 
an extens list of resourc for further inform is append , as is a list of site which simul the impact of specif access problem on person with disabl","ordered_present_kp":[90,137],"keyphrases":["Web site accessibility","disabilities","information and referral services"],"prmu":["P","P","M"]} {"id":"1449","title":"Raising the standard of management education for electronic commerce professionals","abstract":"The teaching of electronic commerce in universities has become a growth industry in itself. The rapid expansion of electronic commerce programmes raises the question of what actually is being taught. The association of electronic commerce as primarily a technical or information technology (IT) phenomenon has not been sufficient to constrain it to IT and information systems departments. Business schools have been keen entrants into the electronic commerce coursework race and they are developing electronic commerce programmes in an environment where there is no agreed definition of the term. This paper draws on the work of Kenneth Boulding who argued that the dynamics of change in society are largely a product of changing skills and the way these skills are arranged into roles at the organizational level. It is argued that an overly technical interpretation of electronic commerce narrows the skills being acquired as part of formal education. Universities, under pressure from the market and technological change, are changing their roles resulting in a further narrowing of the breadth of issues that is seen as legitimate to be included as electronic commerce. The outcome is that aspiring electronic commerce professionals are not being exposed to a wide enough agenda of ideas and concepts that will assist them to make better business decisions","tok_text":"rais the standard of manag educ for electron commerc profession \n the teach of electron commerc in univers ha becom a growth industri in itself . 
the rapid expans of electron commerc programm rais the question of what actual is be taught . the associ of electron commerc as primarili a technic or inform technolog ( it ) phenomenon ha not been suffici to constrain it to it and inform system depart . busi school have been keen entrant into the electron commerc coursework race and they are develop electron commerc programm in an environ where there is no agre definit of the term . thi paper draw on the work of kenneth bould who argu that the dynam of chang in societi are larg a product of chang skill and the way these skill are arrang into role at the organiz level . it is argu that an overli technic interpret of electron commerc narrow the skill be acquir as part of formal educ . univers , under pressur from the market and technolog chang , are chang their role result in a further narrow of the breadth of issu that is seen as legitim to be includ as electron commerc . the outcom is that aspir electron commerc profession are not be expos to a wide enough agenda of idea and concept that will assist them to make better busi decis","ordered_present_kp":[36,99,297,137,378,401,758,876,614],"keyphrases":["electronic commerce professionals","universities","IT","information technology","information systems","business schools","Kenneth Boulding","organizational level","formal education","management education standards improvement"],"prmu":["P","P","P","P","P","P","P","P","P","M"]} {"id":"1098","title":"Instability phenomena in the gas-metal arc welding self-regulation process","abstract":"Arc instability is a very important determinant of weld quality. The instability behaviour of the gas-metal arc welding (GMAW) process is characterized by strong oscillations in arc length and current. In the paper, a model of the GMAW process is developed using an exact arc voltage characteristic. 
This model is used to study stability of the self-regulation process and to develop a simulation program that helps to understand the transient or dynamic nature of the GMAW process and relationships among current, electrode extension and contact tube-work distance. The process is shown to exhibit instabilities at both long electrode extension and normal extension. Results obtained from simulation runs of the model were also experimentally confirmed by the present author, as reported in this study. In order to explain the concept of the instability phenomena, the metal transfer mode and the arc voltage-current characteristic were examined. Based on this examination, the conclusion of this study is that their combined effects lead to the oscillations in arc current and length","tok_text":"instabl phenomena in the gas-met arc weld self-regul process \n arc instabl is a veri import determin of weld qualiti . the instabl behaviour of the gas-met arc weld ( gmaw ) process is character by strong oscil in arc length and current . in the paper , a model of the gmaw process is develop use an exact arc voltag characterist . thi model is use to studi stabil of the self-regul process and to develop a simul program that help to understand the transient or dynam natur of the gmaw process and relationship among current , electrod extens and contact tube-work distanc . the process is shown to exhibit instabl at both long electrod extens and normal extens . result obtain from simul run of the model were also experiment confirm by the present author , as report in thi studi . in order to explain the concept of the instabl phenomena , the metal transfer mode and the arc voltage-curr characterist were examin . 
base on thi examin , the conclus of thi studi is that their combin effect lead to the oscil in arc current and length","ordered_present_kp":[0,25,42,63,104,269,300,848],"keyphrases":["instability phenomena","gas-metal arc welding","self-regulation process","arc instability","weld quality","GMAW process","exact arc voltage characteristic","metal transfer mode"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"1020","title":"Supersampling multiframe blind deconvolution resolution enhancement of adaptive optics compensated imagery of low earth orbit satellites","abstract":"We describe a postprocessing methodology for reconstructing undersampled image sequences with randomly varying blur that can provide image enhancement beyond the sampling resolution of the sensor. This method is demonstrated on simulated imagery and on adaptive-optics-(AO)-compensated imagery taken by the Starfire Optical Range 3.5-m telescope that has been artificially undersampled. Also shown are the results of multiframe blind deconvolution of some of the highest quality optical imagery of low earth orbit satellites collected with a ground-based telescope to date. The algorithm used is a generalization of multiframe blind deconvolution techniques that include a representation of spatial sampling by the focal plane array elements based on a forward stochastic model. This generalization enables the random shifts and shape of the AO-compensated point spread function (PSF) to be used to partially eliminate the aliasing effects associated with sub-Nyquist sampling of the image by the focal plane array. The method could be used to reduce resolution loss that occurs when imaging in wide-field-of-view (FOV) modes","tok_text":"supersampl multifram blind deconvolut resolut enhanc of adapt optic compens imageri of low earth orbit satellit \n we describ a postprocess methodolog for reconstruct undersampl imag sequenc with randomli vari blur that can provid imag enhanc beyond the sampl resolut of the sensor . 
thi method is demonstr on simul imageri and on adaptive-optics-(ao)-compens imageri taken by the starfir optic rang 3.5-m telescop that ha been artifici undersampl . also shown are the result of multifram blind deconvolut of some of the highest qualiti optic imageri of low earth orbit satellit collect with a ground-bas telescop to date . the algorithm use is a gener of multifram blind deconvolut techniqu that includ a represent of spatial sampl by the focal plane array element base on a forward stochast model . thi gener enabl the random shift and shape of the ao-compens point spread function ( psf ) to be use to partial elimin the alias effect associ with sub-nyquist sampl of the imag by the focal plane array . the method could be use to reduc resolut loss that occur when imag in wide-field-of-view ( fov ) mode","ordered_present_kp":[0,56,87,127,195,230,309,11,593,718,739,775,820,850,923,948,1038],"keyphrases":["supersampling multiframe blind deconvolution resolution enhancement","multiframe blind deconvolution","adaptive optics compensated imagery","low earth orbit satellites","postprocessing methodology","randomly varying blur","image enhancement","simulated imagery","ground-based telescope","spatial sampling","focal plane array elements","forward stochastic model","random shifts","AO-compensated point spread function","aliasing effects","sub-Nyquist sampling","resolution loss","undersampled image sequence reconstruction","sensor sampling resolution","Starfire Optical Range telescope","wide-field-of-view modes","3.5 m"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R","R","U"]} {"id":"1065","title":"Quantum universal variable-length source coding","abstract":"We construct an optimal quantum universal variable-length code that achieves the admissible minimum rate, i.e., our code is used for any probability distribution of quantum states. Its probability of exceeding the admissible minimum rate exponentially goes to 0. 
Our code is optimal in the sense of its exponent. In addition, its average error asymptotically tends to 0","tok_text":"quantum univers variable-length sourc code \n we construct an optim quantum univers variable-length code that achiev the admiss minimum rate , i.e. , our code is use for ani probabl distribut of quantum state . it probabl of exceed the admiss minimum rate exponenti goe to 0 . our code is optim in the sens of it expon . in addit , it averag error asymptot tend to 0","ordered_present_kp":[0,61,120,173,194,255,334],"keyphrases":["quantum universal variable-length source coding","optimal quantum universal variable-length code","admissible minimum rate","probability distribution","quantum states","exponent","average error","quantum information theory","quantum cryptography","optimal code"],"prmu":["P","P","P","P","P","P","P","M","M","R"]} {"id":"8","title":"New investors get steal of a deal [Global Crossing]","abstract":"Hutchison Telecommunications and Singapore Technologies take control of Global Crossing for a lot less money than they originally offered. The deal leaves the bankrupt carrier intact, but doesn't put it in the clear just yet","tok_text":"new investor get steal of a deal [ global cross ] \n hutchison telecommun and singapor technolog take control of global cross for a lot less money than they origin offer . 
the deal leav the bankrupt carrier intact , but doe n't put it in the clear just yet","ordered_present_kp":[52,77,35,189],"keyphrases":["Global Crossing","Hutchison Telecommunications","Singapore Technologies","bankrupt"],"prmu":["P","P","P","P"]} {"id":"923","title":"Design and manufacture of a lightweight piezo-composite curved actuator","abstract":"In this paper we are concerned with the design, manufacture and performance test of a lightweight piezo-composite curved actuator (called LIPCA) using a top carbon fiber composite layer with near-zero coefficient of thermal expansion (CTE), a middle PZT ceramic wafer, and a bottom glass\/epoxy layer with a high CTE. The main point of the design for LIPCA is to replace the heavy metal layers of THUNDER TM by lightweight fiber reinforced plastic layers without losing the capabilities for generating high force and large displacement. It is possible to save up to about 40% of the weight if we replace the metallic backing material by the light fiber composite layer. We can also have design flexibility by selecting the fiber direction and the size of prepreg layers. In addition to the lightweight advantage and design flexibility, the proposed device can be manufactured without adhesive layers when we use an epoxy resin prepreg system. Glass\/epoxy prepregs, a ceramic wafer with electrode surfaces, and a carbon prepreg were simply stacked and cured at an elevated temperature (177 degrees C) after following an autoclave bagging process. We found that the manufactured composite laminate device had a sufficient curvature after being detached from a flat mould. An analysis method using the classical lamination theory is presented to predict the curvature of LIPCA after curing at an elevated temperature. The predicted curvatures are in quite good agreement with the experimental values. 
In order to investigate the merits of LIPCA, performance tests of both LIPCA and THUNDER TM have been conducted under the same boundary conditions. From the experimental actuation tests, it was observed that the developed actuator could generate larger actuation displacement than THUNDER TM","tok_text":"design and manufactur of a lightweight piezo-composit curv actuat \n in thi paper we are concern with the design , manufactur and perform test of a lightweight piezo-composit curv actuat ( call lipca ) use a top carbon fiber composit layer with near-zero coeffici of thermal expans ( cte ) , a middl pzt ceram wafer , and a bottom glass \/ epoxi layer with a high cte . the main point of the design for lipca is to replac the heavi metal layer of thunder tm by lightweight fiber reinforc plastic layer without lose the capabl for gener high forc and larg displac . it is possibl to save up to about 40 % of the weight if we replac the metal back materi by the light fiber composit layer . we can also have design flexibl by select the fiber direct and the size of prepreg layer . in addit to the lightweight advantag and design flexibl , the propos devic can be manufactur without adhes layer when we use an epoxi resin prepreg system . glass \/ epoxi prepreg , a ceram wafer with electrod surfac , and a carbon prepreg were simpli stack and cure at an elev temperatur ( 177 degre c ) after follow an autoclav bag process . we found that the manufactur composit lamin devic had a suffici curvatur after be detach from a flat mould . an analysi method use the classic lamin theori is present to predict the curvatur of lipca after cure at an elev temperatur . the predict curvatur are in quit good agreement with the experiment valu . in order to investig the merit of lipca , perform test of both lipca and thunder tm have been conduct under the same boundari condit . 
from the experiment actuat test , it wa observ that the develop actuat could gener larger actuat displac than thunder tm","ordered_present_kp":[129,27,193,211,244,299,330,471,1360,129,1548,445,1068],"keyphrases":["lightweight piezo-composite curved actuator","performance test","performance test","LIPCA","carbon fiber composite layer","near-zero coefficient of thermal expansion","PZT ceramic wafer","glass\/epoxy layer","THUNDER","fiber reinforced plastic layers","177 degrees C","predicted curvatures","boundary conditions","performance tests","177 degC"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","M"]} {"id":"966","title":"Controlling in between the Lorenz and the Chen systems","abstract":"This letter investigates a new chaotic system and its role as a joint function between two complex chaotic systems, the Lorenz and the Chen systems, using a simple variable constant controller. With the gradual tuning of the controller, the controlled system evolves from the canonical Lorenz attractor to the Chen attractor through the new transition chaotic attractor. This evolving procedure reveals the forming mechanisms of all similar and closely related chaotic systems, and demonstrates that a simple control technique can be very useful in generating and analyzing some complex chaotic dynamical phenomena","tok_text":"control in between the lorenz and the chen system \n thi letter investig a new chaotic system and it role as a joint function between two complex chaotic system , the lorenz and the chen system , use a simpl variabl constant control . with the gradual tune of the control , the control system evolv from the canon lorenz attractor to the chen attractor through the new transit chaotic attractor . 
thi evolv procedur reveal the form mechan of all similar and close relat chaotic system , and demonstr that a simpl control techniqu can be veri use in gener and analyz some complex chaotic dynam phenomena","ordered_present_kp":[337,38,251,313,368],"keyphrases":["Chen system","tuning","Lorenz attractor","Chen attractors","transition chaotic attractor","Lorenz system"],"prmu":["P","P","P","P","P","R"]} {"id":"125","title":"A fast implementation of correlation of long data sequences for coherent receivers","abstract":"Coherent reception depends upon matching of phase between the transmitted and received signal. Fast convolution techniques based on fast Fourier transform (FFT) are widely used for extracting time delay information from such matching. The latency in processing a large data window of the received signal is a serious overhead for mission critical real time applications. The implementation of a parallel algorithm for correlation of long data sequences in multiprocessor environment is demonstrated here. The algorithm does processing while acquiring the received signal and reduces the computation overhead considerably because of inherent parallelism","tok_text":"a fast implement of correl of long data sequenc for coher receiv \n coher recept depend upon match of phase between the transmit and receiv signal . fast convolut techniqu base on fast fourier transform ( fft ) are wide use for extract time delay inform from such match . the latenc in process a larg data window of the receiv signal is a seriou overhead for mission critic real time applic . the implement of a parallel algorithm for correl of long data sequenc in multiprocessor environ is demonstr here . 
the algorithm doe process while acquir the receiv signal and reduc the comput overhead consider becaus of inher parallel","ordered_present_kp":[20,30,52,132,179,235,275,358,411,465,578],"keyphrases":["correlation","long data sequences","coherent receivers","received signal","fast Fourier transform","time delay information","latency","mission critical real time applications","parallel algorithm","multiprocessor environment","computation","transmitted signal"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"77","title":"Modeling frequently accessed wireless data with weak consistency","abstract":"To reduce the response times of wireless data access in a mobile network, caches are utilized in wireless handheld devices. If the original data entry has been updated, the cached data in the handheld device becomes stale. Thus, a mechanism is required to predict when the cached copy will expire. This paper studies a weakly consistent data access mechanism that computes the time-to-live (TTL) interval to predict the expiration time. We propose an analytic model to investigate this TTL-based algorithm for frequently accessed data. The analytic model is validated against simulation experiments. Our study quantitatively indicates how the TTL-based algorithm reduces the wireless communication cost by increasing the probability of stale accesses. Depending on the requirements of the application, appropriate parameter values can be selected based on the guidelines provided","tok_text":"model frequent access wireless data with weak consist \n to reduc the respons time of wireless data access in a mobil network , cach are util in wireless handheld devic . if the origin data entri ha been updat , the cach data in the handheld devic becom stale . thu , a mechan is requir to predict when the cach copi will expir . thi paper studi a weakli consist data access mechan that comput the time-to-l ( ttl ) interv to predict the expir time . 
we propos an analyt model to investig thi ttl-base algorithm for frequent access data . the analyt model is valid against simul experi . our studi quantit indic how the ttl-base algorithm reduc the wireless commun cost by increas the probabl of stale access . depend on the requir of the applic , appropri paramet valu can be select base on the guidelin provid","ordered_present_kp":[41,85,111,127,144,184,463,572,648],"keyphrases":["weak consistency","wireless data access","mobile network","caches","wireless handheld devices","data entry","analytic model","simulation experiments","wireless communication cost","frequently accessed wireless data modeling","response time reduction","time-to-live interval","expiration time prediction","stale access probability"],"prmu":["P","P","P","P","P","P","P","P","P","R","M","R","R","R"]} {"id":"608","title":"How closely can a personal computer clock track the UTC timescale via the Internet?","abstract":"Nowadays many software packages allow you to keep the clock of your personal computer synchronized to time servers spread over the internet. We present how a didactic laboratory can evaluate, in a statistical sense, the minimum synch error of this process (the other extreme, the maximum, is guaranteed by the code itself). The measurement set-up utilizes the global positioning system satellite constellation in 'common view' between two similar timing stations: one acts as a time server for the other, so the final timing difference at the second station represents the total synch error through the internet. Data recorded over batches of 10000 samples show a typical RMS value of 35 ms. 
This measurement configuration allows students to obtain a much better understanding of the synch task and pushes them, at all times, to look for an experimental verification of data results, even when they come from the most sophisticated 'black boxes' now readily available off the shelf","tok_text":"how close can a person comput clock track the utc timescal via the internet ? \n nowaday mani softwar packag allow you to keep the clock of your person comput synchron to time server spread over the internet . we present how a didact laboratori can evalu , in a statist sens , the minimum synch error of thi process ( the other extrem , the maximum , is guarante by the code itself ) . the measur set-up util the global posit system satellit constel in ' common view ' between two similar time station : one act as a time server for the other , so the final time differ at the second station repres the total synch error through the internet . data record over batch of 10000 sampl show a typic rm valu of 35 ms . thi measur configur allow student to obtain a much better understand of the synch task and push them , at all time , to look for an experiment verif of data result , even when they come from the most sophist ' black box ' now readili avail off the shelf","ordered_present_kp":[16,46,67,93,170,226,261,412,551,288,923],"keyphrases":["personal computer clock","UTC timescale","internet","software packages","time servers","didactic laboratory","statistical sense","synch error","global positioning system satellite constellation","final timing difference","black boxes"],"prmu":["P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1218","title":"Knowledge acquisition for expert systems in accounting and financial problem domains","abstract":"Since the mid-1980s, expert systems have been developed for a variety of problems in accounting and finance. 
The most commonly cited problems in developing these systems are the unavailability of the experts and knowledge engineers and difficulties with the rule extraction process. Within the field of artificial intelligence, this has been called the 'knowledge acquisition' (KA) problem and has been identified as a major bottleneck in the expert system development process. Recent empirical research reveals that certain KA techniques are significantly more efficient than others in helping to extract certain types of knowledge within specific problem domains. This paper presents a mapping between these empirical studies and a generic taxonomy of expert system problem domains. To accomplish this, we first examine the range of problem domains and suggest a mapping of accounting and finance tasks to a generic problem domain taxonomy. We then identify and describe the most prominent KA techniques employed in developing expert systems in accounting and finance. After examining and summarizing the existing empirical KA work, we conclude by showing how the empirical KA research in the various problem domains can be used to provide guidance to developers of expert systems in the fields of accounting and finance","tok_text":"knowledg acquisit for expert system in account and financi problem domain \n sinc the mid-1980 , expert system have been develop for a varieti of problem in account and financ . the most commonli cite problem in develop these system are the unavail of the expert and knowledg engin and difficulti with the rule extract process . within the field of artifici intellig , thi ha been call the ' knowledg acquisit ' ( ka ) problem and ha been identifi as a major bottleneck in the expert system develop process . recent empir research reveal that certain ka techniqu are significantli more effici than other in help to extract certain type of knowledg within specif problem domain . 
thi paper present a map between these empir studi and a gener taxonomi of expert system problem domain . to accomplish thi , we first examin the rang of problem domain and suggest a map of account and financ task to a gener problem domain taxonomi . we then identifi and describ the most promin ka techniqu employ in develop expert system in account and financ . after examin and summar the exist empir ka work , we conclud by show how the empir ka research in the variou problem domain can be use to provid guidanc to develop of expert system in the field of account and financ","ordered_present_kp":[0,22,39,51,305,348,902],"keyphrases":["knowledge acquisition","expert systems","accounting","finance","rule extraction process","artificial intelligence","problem domain taxonomy"],"prmu":["P","P","P","P","P","P","P"]} {"id":"1119","title":"A component-based software configuration management model and its supporting system","abstract":"Software configuration management (SCM) is an important key technology in software development. Component-based software development (CBSD) is an emerging paradigm in software development. However, to apply CBSD effectively in real world practice, supporting SCM in CBSD needs to be further investigated. In this paper, the objects that need to be managed in CBSD is analyzed and a component-based SCM model is presented. In this model, components, as the integral logical constituents in a system, are managed as the basic configuration items in SCM, and the relationships between\/among components are defined and maintained. Based on this model, a configuration management system is implemented","tok_text":"a component-bas softwar configur manag model and it support system \n softwar configur manag ( scm ) is an import key technolog in softwar develop . component-bas softwar develop ( cbsd ) is an emerg paradigm in softwar develop . howev , to appli cbsd effect in real world practic , support scm in cbsd need to be further investig . 
in thi paper , the object that need to be manag in cbsd is analyz and a component-bas scm model is present . in thi model , compon , as the integr logic constitu in a system , are manag as the basic configur item in scm , and the relationship between \/ among compon are defin and maintain . base on thi model , a configur manag system is implement","ordered_present_kp":[2,130,472],"keyphrases":["component-based software configuration management model","software development","integral logical constituents","software reuse","version control"],"prmu":["P","P","P","M","U"]} {"id":"1004","title":"Games machines play","abstract":"Individual rationality, or doing what is best for oneself, is a standard model used to explain and predict human behavior, and von Neumann-Morgenstern game theory is the classical mathematical formalization of this theory in multiple-agent settings. Individual rationality, however, is an inadequate model for the synthesis of artificial social systems where cooperation is essential, since it does not permit the accommodation of group interests other than as aggregations of individual interests. Satisficing game theory is based upon a well-defined notion of being good enough, and does accommodate group as well as individual interests through the use of conditional preference relationships, whereby a decision maker is able to adjust its preferences as a function of the preferences, and not just the options, of others. This new theory is offered as an alternative paradigm to construct artificial societies that are capable of complex behavior that goes beyond exclusive self interest","tok_text":"game machin play \n individu ration , or do what is best for oneself , is a standard model use to explain and predict human behavior , and von neumann-morgenstern game theori is the classic mathemat formal of thi theori in multiple-ag set . 
individu ration , howev , is an inadequ model for the synthesi of artifici social system where cooper is essenti , sinc it doe not permit the accommod of group interest other than as aggreg of individu interest . satisf game theori is base upon a well-defin notion of be good enough , and doe accommod group as well as individu interest through the use of condit prefer relationship , wherebi a decis maker is abl to adjust it prefer as a function of the prefer , and not just the option , of other . thi new theori is offer as an altern paradigm to construct artifici societi that are capabl of complex behavior that goe beyond exclus self interest","ordered_present_kp":[19,117,162,222,306,335,596,800,876],"keyphrases":["individual rationality","human behavior","game theory","multiple-agent","artificial social systems","cooperation","conditional preference relationships","artificial societies","self interest","decision theory","group rationality"],"prmu":["P","P","P","P","P","P","P","P","P","R","R"]} {"id":"1041","title":"Fractional differentiation in passive vibration control","abstract":"From a single-degree-of-freedom model used to illustrate the concept of vibration isolation, a method to transform the design for a suspension into a design for a robust controller is presented. Fractional differentiation is used to model the viscoelastic behaviour of the suspension. The use of fractional differentiation not only permits optimisation of just four suspension parameters, showing the 'compactness' of the fractional derivative operator, but also leads to robustness of the suspension's performance to uncertainty of the sprung mass. As an example, an engine suspension is studied","tok_text":"fraction differenti in passiv vibrat control \n from a single-degree-of-freedom model use to illustr the concept of vibrat isol , a method to transform the design for a suspens into a design for a robust control is present . 
fraction differenti is use to model the viscoelast behaviour of the suspens . the use of fraction differenti not onli permit optimis of just four suspens paramet , show the ' compact ' of the fraction deriv oper , but also lead to robust of the suspens 's perform to uncertainti of the sprung mass . as an exampl , an engin suspens is studi","ordered_present_kp":[0,23,115,168,196,264,510,542],"keyphrases":["fractional differentiation","passive vibration control","vibration isolation","suspension","robust controller","viscoelastic behaviour","sprung mass","engine suspension"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"886","title":"A fractional-flow model of serial manufacturing systems with rework and its reachability and controllability properties","abstract":"A dynamic fractional-flow model of a serial manufacturing system incorporating rework is considered. Using some results on reachability and controllability of positive linear systems the ability of serial manufacturing systems with rework to \"move in space\", that is their reachability and controllability properties, are studied. These properties are important not only for optimising the performance of the manufacturing system, possibly off-line, but also to improve its functioning by using feedback control online","tok_text":"a fractional-flow model of serial manufactur system with rework and it reachabl and control properti \n a dynam fractional-flow model of a serial manufactur system incorpor rework is consid . use some result on reachabl and control of posit linear system the abil of serial manufactur system with rework to \" move in space \" , that is their reachabl and control properti , are studi . 
these properti are import not onli for optimis the perform of the manufactur system , possibl off-lin , but also to improv it function by use feedback control onlin","ordered_present_kp":[27,57,71,84,105,234,526],"keyphrases":["serial manufacturing systems","rework","reachability","controllability","dynamic fractional-flow model","positive linear systems","feedback control","performance optimisation"],"prmu":["P","P","P","P","P","P","P","R"]} {"id":"1428","title":"Syndicators turn to the enterprise","abstract":"Syndicators have started reshaping offerings, products, and services towards the marketplace that was looking for enterprise-wide content syndication technology and service. Syndication companies are turning themselves into infrastructure companies. Many syndication companies are now focusing their efforts on enterprise clients instead of the risky dot coms","tok_text":"syndic turn to the enterpris \n syndic have start reshap offer , product , and servic toward the marketplac that wa look for enterprise-wid content syndic technolog and servic . syndic compani are turn themselv into infrastructur compani . mani syndic compani are now focus their effort on enterpris client instead of the riski dot com","ordered_present_kp":[124,289,215],"keyphrases":["enterprise-wide content syndication technology","infrastructure companies","enterprise clients","business model","aggregator","business Web sites","customer base"],"prmu":["P","P","P","U","U","U","U"]} {"id":"1305","title":"Learning nonregular languages: a comparison of simple recurrent networks and LSTM","abstract":"Rodriguez (2001) examined the learning ability of simple recurrent nets (SRNs) (Elman, 1990) on simple context-sensitive and context-free languages. 
In response to Rodriguez's (2001) article, we compare the performance of simple recurrent nets and long short-term memory recurrent nets on context-free and context-sensitive languages","tok_text":"learn nonregular languag : a comparison of simpl recurr network and lstm \n rodriguez ( 2001 ) examin the learn abil of simpl recurr net ( srn ) ( elman , 1990 ) on simpl context-sensit and context-fre languag . in respons to rodriguez 's ( 2001 ) articl , we compar the perform of simpl recurr net and long short-term memori recurr net on context-fre and context-sensit languag","ordered_present_kp":[68,355,189,270,307],"keyphrases":["LSTM","context-free languages","performance","short-term memory recurrent nets","context-sensitive languages","nonregular language learning","recurrent neural networks"],"prmu":["P","P","P","P","P","R","M"]} {"id":"1340","title":"Orthogonal decompositions of complete digraphs","abstract":"A family G of isomorphic copies of a given digraph G is said to be an orthogonal decomposition of the complete digraph D\/sub n\/ by G, if every arc of D\/sub n\/ belongs to exactly one member of G and the union of any two different elements from G contains precisely one pair of reverse arcs. Given a digraph h, an h family mh is the vertex-disjoint union of m copies of h . In this paper, we consider orthogonal decompositions by h-families. Our objective is to prove the existence of such an orthogonal decomposition whenever certain necessary conditions hold and m is sufficiently large","tok_text":"orthogon decomposit of complet digraph \n a famili g of isomorph copi of a given digraph g is said to be an orthogon decomposit of the complet digraph d \/ sub n\/ by g , if everi arc of d \/ sub n\/ belong to exactli one member of g and the union of ani two differ element from g contain precis one pair of revers arc . given a digraph h , an h famili mh is the vertex-disjoint union of m copi of h . in thi paper , we consid orthogon decomposit by h-famili . 
our object is to prove the exist of such an orthogon decomposit whenev certain necessari condit hold and m is suffici larg","ordered_present_kp":[0,23,55,358,535],"keyphrases":["orthogonal decompositions","complete digraphs","isomorphic copies","vertex-disjoint union","necessary conditions"],"prmu":["P","P","P","P","P"]} {"id":"715","title":"The quadratic 0-1 knapsack problem with series-parallel support","abstract":"We consider various special cases of the quadratic 0-1 knapsack problem (QKP) for which the underlying graph structure is fairly simple. For the variant with edge series-parallel graphs, we give a dynamic programming algorithm with pseudo-polynomial time complexity, and a fully polynomial time approximation scheme. In strong contrast to this, the variant with vertex series-parallel graphs is shown to be strongly NP-complete","tok_text":"the quadrat 0 - 1 knapsack problem with series-parallel support \n we consid variou special case of the quadrat 0 - 1 knapsack problem ( qkp ) for which the underli graph structur is fairli simpl . for the variant with edg series-parallel graph , we give a dynam program algorithm with pseudo-polynomi time complex , and a fulli polynomi time approxim scheme . in strong contrast to thi , the variant with vertex series-parallel graph is shown to be strongli np-complet","ordered_present_kp":[4,40,156,256,285,322],"keyphrases":["quadratic 0-1 knapsack problem","series-parallel support","underlying graph structure","dynamic programming algorithm","pseudo-polynomial time complexity","fully polynomial time approximation scheme","NP-complete problem"],"prmu":["P","P","P","P","P","P","R"]} {"id":"750","title":"Automated cerebrum segmentation from three-dimensional sagittal brain MR images","abstract":"We present a fully automated cerebrum segmentation algorithm for full three-dimensional sagittal brain MR images. 
First, cerebrum segmentation from a midsagittal brain MR image is performed utilizing landmarks, anatomical information, and a connectivity-based threshold segmentation algorithm as previously reported. Recognizing that the cerebrum in laterally adjacent slices tends to have similar size and shape, we use the cerebrum segmentation result from the midsagittal brain MR image as a mask to guide cerebrum segmentation in adjacent lateral slices in an iterative fashion. This masking operation yields a masked image (preliminary cerebrum segmentation) for the next lateral slice, which may truncate brain region(s). Truncated regions are restored by first finding end points of their boundaries, by comparing the mask image and masked image boundaries, and then applying a connectivity-based algorithm. The resulting final extracted cerebrum image for this slice is then used as a mask for the next lateral slice. The algorithm yielded satisfactory fully automated cerebrum segmentations in three-dimensional sagittal brain MR images, and had performance superior to conventional edge detection algorithms for segmentation of cerebrum from 3D sagittal brain MR images","tok_text":"autom cerebrum segment from three-dimension sagitt brain mr imag \n we present a fulli autom cerebrum segment algorithm for full three-dimension sagitt brain mr imag . first , cerebrum segment from a midsagitt brain mr imag is perform util landmark , anatom inform , and a connectivity-bas threshold segment algorithm as previous report . recogn that the cerebrum in later adjac slice tend to have similar size and shape , we use the cerebrum segment result from the midsagitt brain mr imag as a mask to guid cerebrum segment in adjac later slice in an iter fashion . thi mask oper yield a mask imag ( preliminari cerebrum segment ) for the next later slice , which may truncat brain region( ) . 
truncat region are restor by first find end point of their boundari , by compar the mask imag and mask imag boundari , and then appli a connectivity-bas algorithm . the result final extract cerebrum imag for thi slice is then use as a mask for the next later slice . the algorithm yield satisfactori fulli autom cerebrum segment in three-dimension sagitt brain mr imag , and had perform superior to convent edg detect algorithm for segment of cerebrum from 3d sagitt brain mr imag","ordered_present_kp":[80,199,239,250,272,366,571,793,831],"keyphrases":["fully automated cerebrum segmentation algorithm","midsagittal brain MR image","landmarks","anatomical information","connectivity-based threshold segmentation algorithm","laterally adjacent slices","masking operation","masked image boundaries","connectivity-based algorithm","full 3D sagittal brain MR images","brain region truncation","boundary end points"],"prmu":["P","P","P","P","P","P","P","P","P","R","R","R"]} {"id":"846","title":"Female computer science doctorates: what does the survey of earned doctorates reveal?","abstract":"Based on the National Center for Education Statistics (2000), in the 1997-1998 academic year 26.7% of earned bachelors' degrees, 29.0% of earned masters' degrees and 16.3% of earned doctorates' degrees in computer science were awarded to women. As these percentages suggest, women are underrepresented at all academic levels in computer science (Camp, 1997). The most severe shortage occurs at the top level-the doctorate in computer science. We know very little about the women who persist to the top level of academic achievement in computer science. This paper examines a subset of data collected through the Survey of Earned Doctorates (SED). The specific focus of this paper is to identify trends that have emerged from the SED with respect to females completing doctorates in computer science between the academic years 1990-1991 and 1999-2000. 
Although computer science doctorates include doctorates in information science, prior research (Camp, 1997) suggests that the percentage of women completing doctorates in information science as compared to computer science is low. The specific research questions are: 1. How does the percentage of women who complete doctorates in computer science compare to those that complete doctorates in other fields? 2. How does the length of time in school and the sources of funding differ for females as compared to males who complete doctorates in computer science? 3. Where do women go after completing doctorates in computer science and what positions do they acquire? How do these experiences differ from their male peers?","tok_text":"femal comput scienc doctor : what doe the survey of earn doctor reveal ? \n base on the nation center for educ statist ( 2000 ) , in the 1997 - 1998 academ year 26.7 % of earn bachelor ' degre , 29.0 % of earn master ' degre and 16.3 % of earn doctor ' degre in comput scienc were award to women . as these percentag suggest , women are underrepres at all academ level in comput scienc ( camp , 1997 ) . the most sever shortag occur at the top level-th doctor in comput scienc . we know veri littl about the women who persist to the top level of academ achiev in comput scienc . thi paper examin a subset of data collect through the survey of earn doctor ( sed ) . the specif focu of thi paper is to identifi trend that have emerg from the sed with respect to femal complet doctor in comput scienc between the academ year 1990 - 1991 and 1999 - 2000 . although comput scienc doctor includ doctor in inform scienc , prior research ( camp , 1997 ) suggest that the percentag of women complet doctor in inform scienc as compar to comput scienc is low . the specif research question are : 1 . how doe the percentag of women who complet doctor in comput scienc compar to those that complet doctor in other field ? 2 . 
how doe the length of time in school and the sourc of fund differ for femal as compar to male who complet doctor in comput scienc ? 3 . where do women go after complet doctor in comput scienc and what posit do they acquir ? how do these experi differ from their male peer ?","ordered_present_kp":[0,42,898],"keyphrases":["female computer science doctorates","Survey of Earned Doctorates","information science"],"prmu":["P","P","P"]} {"id":"803","title":"The mutual effects of grid and wind turbine voltage stability control","abstract":"This note considers the results of wind turbine modelling and power system stability investigations. Voltage stability of the power grid with grid-connected wind turbines will be improved by using blade angle control for a temporary reduction of the wind turbine power during and shortly after a short circuit fault in the grid","tok_text":"the mutual effect of grid and wind turbin voltag stabil control \n thi note consid the result of wind turbin model and power system stabil investig . voltag stabil of the power grid with grid-connect wind turbin will be improv by use blade angl control for a temporari reduct of the wind turbin power dure and shortli after a short circuit fault in the grid","ordered_present_kp":[30,96,118,170,186,233,325],"keyphrases":["wind turbine voltage stability control","wind turbine modelling","power system stability","power grid","grid-connected wind turbines","blade angle control","short circuit fault","grid voltage stability control","wind turbine power reduction","offshore wind turbines"],"prmu":["P","P","P","P","P","P","P","R","R","M"]} {"id":"1415","title":"The disconnect continues [digital content providers]","abstract":"The relationships between the people who buy digital content and those who sell it are probably more acrimonious than ever before, says Dick Curtis, a director and lead analyst for the research firm Outsell Inc., where he covers econtent contract and negotiation strategies. 
Several buyers agree with his observation. They cite aggressive sales tactics, an unwillingness to deliver content in formats buyers need, a reluctance to provide licensing terms that take into account the structure of today's corporations, and inadequate service and support as a few of the factors underlying the acrimony. Still, many buyers remain optimistic that compromises can be reached on some of these issues. But first, they say, sellers must truly understand the econtent needs of today's enterprises","tok_text":"the disconnect continu [ digit content provid ] \n the relationship between the peopl who buy digit content and those who sell it are probabl more acrimoni than ever befor , say dick curti , a director and lead analyst for the research firm outsel inc. , where he cover econt contract and negoti strategi . sever buyer agre with hi observ . they cite aggress sale tactic , an unwilling to deliv content in format buyer need , a reluct to provid licens term that take into account the structur of today 's corpor , and inadequ servic and support as a few of the factor underli the acrimoni . still , mani buyer remain optimist that compromis can be reach on some of these issu . but first , they say , seller must truli understand the econt need of today 's enterpris","ordered_present_kp":[25,269,358],"keyphrases":["digital content","econtent contract","sales tactics","econtent negotiation","econtent buyers","news databases","Web site"],"prmu":["P","P","P","R","R","U","U"]} {"id":"1081","title":"Stability of W-methods with applications to operator splitting and to geometric theory","abstract":"We analyze the stability properties of W-methods applied to the parabolic initial value problem u' + Au = Bu. We work in an abstract Banach space setting, assuming that A is the generator of an analytic semigroup and that B is relatively bounded with respect to A. 
Since W-methods treat the term with A implicitly, whereas the term involving B is discretized in an explicit way, they can be regarded as splitting methods. As an application of our stability results, convergence for nonsmooth initial data is shown. Moreover, the layout of a geometric theory for discretizations of semilinear parabolic problems u' + Au = f (u) is presented","tok_text":"stabil of w-method with applic to oper split and to geometr theori \n we analyz the stabil properti of w-method appli to the parabol initi valu problem u ' + au = bu . we work in an abstract banach space set , assum that a is the gener of an analyt semigroup and that b is rel bound with respect to a. sinc w-method treat the term with a implicitli , wherea the term involv b is discret in an explicit way , they can be regard as split method . as an applic of our stabil result , converg for nonsmooth initi data is shown . moreov , the layout of a geometr theori for discret of semilinear parabol problem u ' + au = f ( u ) is present","ordered_present_kp":[34,52,124,181,241,492],"keyphrases":["operator splitting","geometric theory","parabolic initial value problem","abstract Banach space","analytic semigroup","nonsmooth initial data","W-methods stability","linearly implicit Runge-Kutta methods"],"prmu":["P","P","P","P","P","P","R","M"]} {"id":"1450","title":"Networking in the palm of your hand [PDA buyer's guide]","abstract":"As PDAs move beyond the personal space and into the enterprise, you need to get a firm grip on the options available for your users. What operating system do you choose? What features do you and your company need? How will these devices fit into the existing corporate infrastructure? What about developer support?","tok_text":"network in the palm of your hand [ pda buyer 's guid ] \n as pda move beyond the person space and into the enterpris , you need to get a firm grip on the option avail for your user . what oper system do you choos ? 
what featur do you and your compani need ? how will these devic fit into the exist corpor infrastructur ? what about develop support ?","ordered_present_kp":[35,187,297,331,39],"keyphrases":["PDAs","buyer's guide","operating system","corporate infrastructure","developer support"],"prmu":["P","P","P","P","P"]} {"id":"1338","title":"The chromatic spectrum of mixed hypergraphs","abstract":"A mixed hypergraph is a triple H = (X, C, D), where X is the vertex set, and each of C, D is a list of subsets of X. A strict k-coloring of H is a surjection c : X {1,..., k} such that each member of le has two vertices assigned a common value and each member of D has two vertices assigned distinct values. The feasible set of H is {k: H has a strict k-coloring}. Among other results, we prove that a finite set of positive integers is the feasible set of some mixed hypergraph if and only if it omits the number I or is an interval starting with 1. For the set {s, t} with 2