{"id":"1833","title":"British Standard 7666 as a framework for geocoding land and property information the UK","abstract":"The article examines the role of British Standard 7666 in the development of a national framework for geocoding land and property information in the United Kingdom. The author assesses how local authorities, and other agencies concerned with property and address datasets, are coping with the introduction of British Standard 7666, and examines the prospects and limitations of this development. British Standard 7666 has four parts, comprising specifications for street gazetteer; land and property gazetteer; addresses; and public rights of way. The organisation coordinating the introduction of British Standard 7666, Improvement and Development Agency (IDeA), is also overseeing the development and maintenance of a National Land and Property Gazetteer (NLPG) based on British Standard 7666. The introduction of the new addressing standard has mainly been prompted by Britain's effort to set up a national cadastral service to replace the obsolescent property registration system currently in place","tok_text":"british standard 7666 as a framework for geocod land and properti inform the uk \n the articl examin the role of british standard 7666 in the develop of a nation framework for geocod land and properti inform in the unit kingdom . the author assess how local author , and other agenc concern with properti and address dataset , are cope with the introduct of british standard 7666 , and examin the prospect and limit of thi develop . british standard 7666 ha four part , compris specif for street gazett ; land and properti gazett ; address ; and public right of way . the organis coordin the introduct of british standard 7666 , improv and develop agenc ( idea ) , is also overse the develop and mainten of a nation land and properti gazett ( nlpg ) base on british standard 7666 . the introduct of the new address standard ha mainli been prompt by britain 's effort to set up a nation cadastr servic to replac the obsolesc properti registr system current in place","ordered_present_kp":[0,41,57,77,154,214,251,308,488,513,308,545,628,655,708,742,806,878,923],"keyphrases":["British Standard 7666","geocoding","property information","UK","national framework","United Kingdom","local authorities","address datasets","addresses","street gazetteer","property gazetteer","public rights of way","Improvement and Development Agency","IDeA","National Land and Property Gazetteer","NLPG","addressing standard","national cadastral service","property registration system","land information","property datasets","land gazetteer","land information systems"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R","R"]} {"id":"1876","title":"The development of CASC [automated theorem proving]","abstract":"Researchers who make theoretical advances also need some way to demonstrate that an advance really does have general, overall positive consequences for system performance. For this it is necessary to evaluate the system on a set of problems that is sufficiently large and diverse to be somehow representative of the intended application area as a whole. It is only a small step from system evaluation to a communal system competition. The CADE ATP System Competition (CASC) has been run annually since 1996. Any competition is difficult to design and organize in the first instance, and to then run over the years. 
In order to obtain the full benefits of a competition, a thoroughly organized event, with an unambiguous and motivated design, is necessary. For some issues relevant to the CASC design, inevitable constraints have emerged. For other issues there have been several choices, and decisions have had to be made. This paper describes the evolution of CASC, paying particular attention to its design, design changes, and organization","tok_text":"the develop of casc [ autom theorem prove ] \n research who make theoret advanc also need some way to demonstr that an advanc realli doe have gener , overal posit consequ for system perform . for thi it is necessari to evalu the system on a set of problem that is suffici larg and divers to be somehow repres of the intend applic area as a whole . it is onli a small step from system evalu to a commun system competit . the cade atp system competit ( casc ) ha been run annual sinc 1996 . ani competit is difficult to design and organ in the first instanc , and to then run over the year . in order to obtain the full benefit of a competit , a thoroughli organ event , with an unambigu and motiv design , is necessari . for some issu relev to the casc design , inevit constraint have emerg . for other issu there have been sever choic , and decis have had to be made . thi paper describ the evolut of casc , pay particular attent to it design , design chang , and organ","ordered_present_kp":[15,174,376,423,22],"keyphrases":["CASC","automated theorem proving","system performance","system evaluation","CADE ATP System Competition","automated deduction","AI","artificial intelligence","classical first order logic"],"prmu":["P","P","P","P","P","M","U","U","M"]} {"id":"1605","title":"A GRASP heuristic for the mixed Chinese postman problem","abstract":"Arc routing problems (ARPs) consist of finding a traversal on a graph satisfying some conditions related to the links of the graph. In the Chinese postman problem (CPP) the aim is to find a minimum cost tour (closed walk) traversing all the links of the graph at least once. Both the Undirected CPP, where all the links are edges that can be traversed in both ways, and the Directed CPP, where all the links are arcs that must be traversed in a specified way, are known to be polynomially solvable. However, if we deal with a mixed graph (having edges and arcs), the problem turns out to be NP-hard. In this paper, we present a heuristic algorithm for this problem, the so-called Mixed CPP (MCPP), based on greedy randomized adaptive search procedure (GRASP) techniques. The algorithm has been tested and compared with other known and recent methods from the literature on a wide collection of randomly generated instances, with up to 200 nodes and 600 links, producing encouraging computational results. As far as we know, this is the best heuristic algorithm for the MCPP, with respect to solution quality, published up to now","tok_text":"a grasp heurist for the mix chines postman problem \n arc rout problem ( arp ) consist of find a travers on a graph satisfi some condit relat to the link of the graph . in the chines postman problem ( cpp ) the aim is to find a minimum cost tour ( close walk ) travers all the link of the graph at least onc . both the undirect cpp , where all the link are edg that can be travers in both way , and the direct cpp , where all the link are arc that must be travers in a specifi way , are known to be polynomi solvabl . howev , if we deal with a mix graph ( have edg and arc ) , the problem turn out to be np-hard . 
in thi paper , we present a heurist algorithm for thi problem , the so-cal mix cpp ( mcpp ) , base on greedi random adapt search procedur ( grasp ) techniqu . the algorithm ha been test and compar with other known and recent method from the literatur on a wide collect of randomli gener instanc , with up to 200 node and 600 link , produc encourag comput result . as far as we know , thi is the best heurist algorithm for the mcpp , with respect to solut qualiti , publish up to now","ordered_present_kp":[24,2,53,227,247,641,715],"keyphrases":["GRASP heuristic","mixed Chinese postman problem","arc routing problems","minimum cost tour","closed walk","heuristic algorithm","greedy randomized adaptive search procedure","graph traversal","NP-hard problem","optimization problems","metaheuristics"],"prmu":["P","P","P","P","P","P","P","R","R","M","U"]} {"id":"1640","title":"Integration is LIMS inspiration","abstract":"For software manufacturers, blessings come in the form of fast-moving application areas. In the case of LIMS, biotechnology is still in the driving seat, inspiring developers to maintain consistently rapid and creative levels of innovation. Current advancements are no exception. Integration and linking initiatives are still popular and much of the activity appears to be coming from a very productive minority","tok_text":"integr is lim inspir \n for softwar manufactur , bless come in the form of fast-mov applic area . in the case of lim , biotechnolog is still in the drive seat , inspir develop to maintain consist rapid and creativ level of innov . current advanc are no except . integr and link initi are still popular and much of the activ appear to be come from a veri product minor","ordered_present_kp":[27,10,118],"keyphrases":["LIMS","software manufacturers","biotechnology"],"prmu":["P","P","P"]} {"id":"151","title":"Extending CTL with actions and real time","abstract":"In this paper, we present the logic ATCTL, which is intended to be used for model checking models that have been specified in a lightweight version of the Unified Modelling Language (UML). Elsewhere, we have defined a formal semantics for LUML to describe the models. This paper's goal is to give a specification language for properties that fits LUML; LUML includes states, actions and real time. ATCTL extends CTL with concurrent actions and real time. It is based on earlier extensions of CTL by R. De Nicola and F. Vaandrager (ACTL) (1990) and R. Alur et aL (TCTL) (1993). This makes it easier to adapt existing model checkers to ATCTL. To show that we can check properties specified in ATCTL in models specified in LUML, we give a small example using the Kronos model checker","tok_text":"extend ctl with action and real time \n in thi paper , we present the logic atctl , which is intend to be use for model check model that have been specifi in a lightweight version of the unifi model languag ( uml ) . elsewher , we have defin a formal semant for luml to describ the model . thi paper 's goal is to give a specif languag for properti that fit luml ; luml includ state , action and real time . atctl extend ctl with concurr action and real time . it is base on earlier extens of ctl by r. de nicola and f. vaandrag ( actl ) ( 1990 ) and r. alur et al ( tctl ) ( 1993 ) . thi make it easier to adapt exist model checker to atctl . 
to show that we can check properti specifi in atctl in model specifi in luml , we give a small exampl use the krono model checker","ordered_present_kp":[16,69,113,186,243,320,753],"keyphrases":["actions","logic ATCTL","model checking models","Unified Modelling Language","formal semantics","specification language","Kronos model checker","real time logic","computation tree logic"],"prmu":["P","P","P","P","P","P","P","R","M"]} {"id":"1504","title":"Designing human-centered distributed information systems","abstract":"Many computer systems are designed according to engineering and technology principles and are typically difficult to learn and use. The fields of human-computer interaction, interface design, and human factors have made significant contributions to ease of use and are primarily concerned with the interfaces between systems and users, not with the structures that are often more fundamental for designing truly human-centered systems. The emerging paradigm of human-centered computing (HCC)-which has taken many forms-offers a new look at system design. HCC requires more than merely designing an artificial agent to supplement a human agent. The dynamic interactions in a distributed system composed of human and artificial agents-and the context in which the system is situated-are indispensable factors. While we have successfully applied our methodology in designing a prototype of a human-centered intelligent flight-surgeon console at NASA Johnson Space Center, this article presents a methodology for designing human-centered computing systems using electronic medical records (EMR) systems","tok_text":"design human-cent distribut inform system \n mani comput system are design accord to engin and technolog principl and are typic difficult to learn and use . the field of human-comput interact , interfac design , and human factor have made signific contribut to eas of use and are primarili concern with the interfac between system and user , not with the structur that are often more fundament for design truli human-cent system . the emerg paradigm of human-cent comput ( hcc)-which ha taken mani forms-off a new look at system design . hcc requir more than mere design an artifici agent to supplement a human agent . the dynam interact in a distribut system compos of human and artifici agents-and the context in which the system is situated-ar indispens factor . while we have success appli our methodolog in design a prototyp of a human-cent intellig flight-surgeon consol at nasa johnson space center , thi articl present a methodolog for design human-cent comput system use electron medic record ( emr ) system","ordered_present_kp":[573,604,169,193,215,950,879],"keyphrases":["human-computer interaction","interface design","human factors","artificial agents","human agents","NASA Johnson Space Center","human-centered computing systems","human-centered distributed information systems design","distributed cognition","multiple analysis levels","human-centered intelligent flight surgeon console","electronic medical records systems"],"prmu":["P","P","P","P","P","P","P","R","M","U","M","R"]} {"id":"1541","title":"The AT89C51\/52 flash memory programmers","abstract":"When faced with a plethora of applications to design, it's essential to have a versatile microcontroller in hand. The author describes the AT89C51\/52 microcontrollers. 
To get you started, he'll describe his inexpensive microcontroller programmer","tok_text":"the at89c51\/52 flash memori programm \n when face with a plethora of applic to design , it 's essenti to have a versatil microcontrol in hand . the author describ the at89c51\/52 microcontrol . to get you start , he 'll describ hi inexpens microcontrol programm","ordered_present_kp":[4,15,120,238],"keyphrases":["AT89C51\/52","flash memory programmers","microcontrollers","microcontroller programmer","device programmer"],"prmu":["P","P","P","P","M"]} {"id":"1680","title":"Minimizing weighted number of early and tardy jobs with a common due window involving location penalty","abstract":"Studies a single machine scheduling problem to minimize the weighted number of early and tardy jobs with a common due window. There are n non-preemptive and simultaneously available jobs. Each job will incur an early (tardy) penalty if it is early (tardy) with respect to the common due window under a given schedule. The window size is a given parameter but the window location is a decision variable. The objective of the problem is to find a schedule that minimizes the weighted number of early and tardy jobs and the location penalty. We show that the problem is NP-complete in the ordinary sense and develop a dynamic programming based pseudo-polynomial algorithm. We conduct computational experiments, the results of which show that the performance of the dynamic algorithm is very good in terms of memory requirement and CPU time. We also provide polynomial time algorithms for two special cases","tok_text":"minim weight number of earli and tardi job with a common due window involv locat penalti \n studi a singl machin schedul problem to minim the weight number of earli and tardi job with a common due window . there are n non-preempt and simultan avail job . each job will incur an earli ( tardi ) penalti if it is earli ( tardi ) with respect to the common due window under a given schedul . the window size is a given paramet but the window locat is a decis variabl . the object of the problem is to find a schedul that minim the weight number of earli and tardi job and the locat penalti . we show that the problem is np-complet in the ordinari sens and develop a dynam program base pseudo-polynomi algorithm . we conduct comput experi , the result of which show that the perform of the dynam algorithm is veri good in term of memori requir and cpu time . we also provid polynomi time algorithm for two special case","ordered_present_kp":[33,50,99,449,75,662,681],"keyphrases":["tardy jobs","common due window","location penalty","single machine scheduling problem","decision variable","dynamic programming","pseudo-polynomial algorithm","early jobs","NP-complete problem"],"prmu":["P","P","P","P","P","P","P","R","R"]} {"id":"1539","title":"Comments on some recent methods for the simultaneous determination of polynomial zeros","abstract":"In this note we give some comments on the recent results concerning a simultaneous method of the fourth-order for finding complex zeros in circular interval arithmetic. The main discussion is directed to a rediscovered iterative formula and its modification, presented recently in Sun and Kosmol, (2001). 
The presented comments include some critical parts of the papers Petkovic, Trickovic, Herceg, (1998) and Sun and Kosmol, (2001) which treat the same subject","tok_text":"comment on some recent method for the simultan determin of polynomi zero \n in thi note we give some comment on the recent result concern a simultan method of the fourth-ord for find complex zero in circular interv arithmet . the main discuss is direct to a rediscov iter formula and it modif , present recent in sun and kosmol , ( 2001 ) . the present comment includ some critic part of the paper petkov , trickov , herceg , ( 1998 ) and sun and kosmol , ( 2001 ) which treat the same subject","ordered_present_kp":[59,68,182,198,266],"keyphrases":["polynomial","zeros","complex zeros","circular interval arithmetic","iterative formula"],"prmu":["P","P","P","P","P"]} {"id":"1913","title":"A six-degree-of-freedom precision motion stage","abstract":"This article presents the design and performance evaluation of a six-degree-of-freedom piezoelectrically actuated fine motion stage that will be used for three dimensional error compensation of a long-range translation mechanism. Development of a single element, piezoelectric linear displacement actuator capable of translations of 1.67 mu m with 900 V potential across the electrodes and under a 27.4 N axial load and 0.5 mm lateral distortion is presented. Finite element methods have been developed and used to evaluate resonant frequencies of the stage platform and the complete assembly with and without a platform payload. In general, an error of approximately 10.0% between the finite element results and the experimentally measured values were observed. The complete fine motion stage provided approximately +or-0.93 mu m of translation and +or-38.0 mu rad of rotation in all three planes of motion using an excitation range of 1000 V. An impulse response indicating a fundamental mode resonance at 162 Hz was measured with a 0.650 kg payload rigidly mounted to the top of the stage","tok_text":"a six-degree-of-freedom precis motion stage \n thi articl present the design and perform evalu of a six-degree-of-freedom piezoelectr actuat fine motion stage that will be use for three dimension error compens of a long-rang translat mechan . develop of a singl element , piezoelectr linear displac actuat capabl of translat of 1.67 mu m with 900 v potenti across the electrod and under a 27.4 n axial load and 0.5 mm later distort is present . finit element method have been develop and use to evalu reson frequenc of the stage platform and the complet assembl with and without a platform payload . in gener , an error of approxim 10.0 % between the finit element result and the experiment measur valu were observ . the complet fine motion stage provid approxim + or-0.93 mu m of translat and + or-38.0 mu rad of rotat in all three plane of motion use an excit rang of 1000 v. 
an impuls respons indic a fundament mode reson at 162 hz wa measur with a 0.650 kg payload rigidli mount to the top of the stage","ordered_present_kp":[2,80,69,121,214,444,500,522,580,880,903,342,927],"keyphrases":["six-degree-of-freedom precision motion stage","design","performance evaluation","piezoelectrically actuated fine motion stage","long-range translation mechanism","900 V","finite element methods","resonant frequency","stage platform","platform payload","impulse response","fundamental mode resonance","162 Hz","3D error compensation","single element piezoelectric linear displacement actuator","1.67 micron","1000 V","0.93 to -0.93 micron","650.0 gm"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","M","R","M","R","M","U"]} {"id":"191","title":"Linear, parameter-varying control and its application to a turbofan engine","abstract":"This paper describes application of parameter-dependent control design methods to a turbofan engine. Parameter-dependent systems are linear systems, whose state-space descriptions are known functions of time-varying parameters. The time variation of each of the parameters is not known in advance, but is assumed to be measurable in real-time. Three linear, parameter-varying (LPV) approaches to control design are discussed. The first method is based on linear fractional transformations which relies on the small gain theorem for bounds on performance and robustness. The other methods make use of either a single (SQLF) or parameter-dependent (PDQLF) quadratic Lyapunov function to bound the achievable level of performance. The latter two techniques are used to synthesize controllers for a high-performance turbofan engine. A LPV model of the turbofan engine is constructed from Jacobian linearizations at fixed power codes for control design. The control problem is formulated as a model matching problem in the H\/sub infinity \/ and LPV framework. The objective is decoupled command response of the closed-loop system to pressure and rotor speed requests. The performance of linear, H\/sub infinity \/ point designs are compared with the SQLF and PDQLF controllers. Nonlinear simulations indicate that the controller synthesized using the SQLF approach is slightly more conservative than the PDQLF controller. Nonlinear simulations with the SQLF and PDQLF controllers show very robust designs that achieve all desired performance objectives","tok_text":"linear , parameter-vari control and it applic to a turbofan engin \n thi paper describ applic of parameter-depend control design method to a turbofan engin . parameter-depend system are linear system , whose state-spac descript are known function of time-vari paramet . the time variat of each of the paramet is not known in advanc , but is assum to be measur in real-tim . three linear , parameter-vari ( lpv ) approach to control design are discuss . the first method is base on linear fraction transform which reli on the small gain theorem for bound on perform and robust . the other method make use of either a singl ( sqlf ) or parameter-depend ( pdqlf ) quadrat lyapunov function to bound the achiev level of perform . the latter two techniqu are use to synthes control for a high-perform turbofan engin . a lpv model of the turbofan engin is construct from jacobian linear at fix power code for control design . the control problem is formul as a model match problem in the h \/ sub infin \/ and lpv framework . the object is decoupl command respons of the closed-loop system to pressur and rotor speed request . 
the perform of linear , h \/ sub infin \/ point design are compar with the sqlf and pdqlf control . nonlinear simul indic that the control synthes use the sqlf approach is slightli more conserv than the pdqlf control . nonlinear simul with the sqlf and pdqlf control show veri robust design that achiev all desir perform object","ordered_present_kp":[51,96,207,249,480,524,864,1031,1062,954,1216,1388],"keyphrases":["turbofan engine","parameter-dependent control design methods","state-space descriptions","time-varying parameters","linear fractional transformations","small gain theorem","Jacobian linearizations","model matching problem","decoupled command response","closed-loop system","nonlinear simulations","very robust designs","linear parameter-varying control","performance bounds","robustness bounds","single quadratic Lyapunov function","parameter-dependent quadratic Lyapunov function","H\/sub infinity \/ framework"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R","R","R","R","R","R"]} {"id":"1638","title":"The chemical brotherhood","abstract":"It has always been more difficult for chemistry to keep up in the Internet age but a new language could herald a new era for the discipline. The paper discusses CML, or chemical mark-up language. The eXtensible Mark-up Language provides a universal format for structured documents and data on the Web and so offers a way for scientists and others to carry a wide range of information types across the net in a transparent way. All that is needed is an XML browser","tok_text":"the chemic brotherhood \n it ha alway been more difficult for chemistri to keep up in the internet age but a new languag could herald a new era for the disciplin . the paper discuss cml , or chemic mark-up languag . the extens mark-up languag provid a univers format for structur document and data on the web and so offer a way for scientist and other to carri a wide rang of inform type across the net in a transpar way . all that is need is an xml browser","ordered_present_kp":[61,89,181,190,219,445],"keyphrases":["chemistry","Internet","CML","chemical mark-up language","eXtensible Mark-up Language","XML browser","structured document format","World Wide Web"],"prmu":["P","P","P","P","P","P","R","M"]} {"id":"1760","title":"Dihedral congruence primes and class fields of real quadratic fields","abstract":"We show that for a real quadratic field F the dihedral congruence primes with respect to F for cusp forms of weight k and quadratic nebentypus are essentially the primes dividing expressions of the form epsilon \/sub +\/\/sup k-1\/+or-1 where epsilon \/sub +\/ is a totally positive fundamental unit of F. This extends work of Hida. Our results allow us to identify a family of (ray) class fields of F which are generated by torsion points on modular abelian varieties","tok_text":"dihedr congruenc prime and class field of real quadrat field \n we show that for a real quadrat field f the dihedr congruenc prime with respect to f for cusp form of weight k and quadrat nebentypu are essenti the prime divid express of the form epsilon \/sub + \/\/sup k-1\/+or-1 where epsilon \/sub + \/ is a total posit fundament unit of f. thi extend work of hida . 
our result allow us to identifi a famili of ( ray ) class field of f which are gener by torsion point on modular abelian varieti","ordered_present_kp":[0,27,42,178,450,467],"keyphrases":["dihedral congruence primes","class fields","real quadratic fields","quadratic nebentypus","torsion points","modular abelian varieties","class field theory"],"prmu":["P","P","P","P","P","P","M"]} {"id":"1725","title":"Cutting the cord [wireless health care]","abstract":"More and more healthcare executives are electing to cut the cord to their existing computer systems by implementing mobile technology. The allure of information anywhere, anytime is intoxicating, demonstrated by the cell phones and personal digital assistants (PDAs) that adorn today's professionals. The utility and convenience of these devices is undeniable. But what is the best strategy for implementing a mobile solution within a healthcare enterprise, be it large or small-and under what circumstances? What types of healthcare workers benefit most from mobile technology? And how state-of-the-art is security for wireless applications and devices? These are the questions that healthcare executives are asking-and should be asking-as they evaluate mobile solutions","tok_text":"cut the cord [ wireless health care ] \n more and more healthcar execut are elect to cut the cord to their exist comput system by implement mobil technolog . the allur of inform anywher , anytim is intox , demonstr by the cell phone and person digit assist ( pda ) that adorn today 's profession . the util and conveni of these devic is undeni . but what is the best strategi for implement a mobil solut within a healthcar enterpris , be it larg or small-and under what circumst ? what type of healthcar worker benefit most from mobil technolog ? and how state-of-the-art is secur for wireless applic and devic ? these are the question that healthcar execut are asking-and should be asking-a they evalu mobil solut","ordered_present_kp":[54,574],"keyphrases":["healthcare","security","mobile computing","wireless computing"],"prmu":["P","P","R","R"]} {"id":"1461","title":"Adaptive multiresolution approach for solution of hyperbolic PDEs","abstract":"This paper establishes an innovative and efficient multiresolution adaptive approach combined with high-resolution methods, for the numerical solution of a single or a system of partial differential equations. The proposed methodology is unconditionally bounded (even for hyperbolic equations) and dynamically adapts the grid so that higher spatial resolution is automatically allocated to domain regions where strong gradients are observed, thus possessing the two desired properties of a numerical approach: stability and accuracy. Numerical results for five test problems are presented which clearly show the robustness and cost effectiveness of the proposed method","tok_text":"adapt multiresolut approach for solut of hyperbol pde \n thi paper establish an innov and effici multiresolut adapt approach combin with high-resolut method , for the numer solut of a singl or a system of partial differenti equat . the propos methodolog is uncondit bound ( even for hyperbol equat ) and dynam adapt the grid so that higher spatial resolut is automat alloc to domain region where strong gradient are observ , thu possess the two desir properti of a numer approach : stabil and accuraci . 
numer result for five test problem are present which clearli show the robust and cost effect of the propos method","ordered_present_kp":[96,136,166,339,395,481,492,573,584],"keyphrases":["multiresolution adaptive approach","high-resolution methods","numerical solution","spatial resolution","strong gradients","stability","accuracy","robustness","cost effectiveness","hyperbolic partial differential equations","dynamic grid adaptation","unconditionally bounded methodology"],"prmu":["P","P","P","P","P","P","P","P","P","R","R","R"]} {"id":"1659","title":"Mobile commerce: transforming the vision into reality","abstract":"This editorial preface investigates current developments in mobile commerce (M-commerce) and proposes an integrated architecture that supports business and consumer needs in an optimal way to successfully implement M-commerce business processes. The key line of thought is based on the heuristic observation that customers will not want to receive M-commerce offerings to their mobile telephones. As a result, a pull as opposed to a push approach becomes a necessary requirement to conduct M-commerce. In addition, M-commerce has to rely on local, regional, demographic and many other variables to be truly effective. Both observations necessitate an M-commerce architecture that allows the coherent integration of enterprise-level systems as well as the aggregation of product and service offerings from many different and partially competing parties into a collaborative M-commerce platform. The key software component within this integrated architecture is an event management engine to monitor, detect, store, process and measure information about outside events that are relevant to all participants in M-commerce","tok_text":"mobil commerc : transform the vision into realiti \n thi editori prefac investig current develop in mobil commerc ( m-commerc ) and propos an integr architectur that support busi and consum need in an optim way to success implement m-commerc busi process . the key line of thought is base on the heurist observ that custom will not want to receiv m-commerc offer to their mobil telephon . as a result , a pull as oppos to a push approach becom a necessari requir to conduct m-commerc . in addit , m-commerc ha to reli on local , region , demograph and mani other variabl to be truli effect . both observ necessit an m-commerc architectur that allow the coher integr of enterprise-level system as well as the aggreg of product and servic offer from mani differ and partial compet parti into a collabor m-commerc platform . the key softwar compon within thi integr architectur is an event manag engin to monitor , detect , store , process and measur inform about outsid event that are relev to all particip in m-commerc","ordered_present_kp":[115,0,141,182,371,880],"keyphrases":["mobile commerce","M-commerce","integrated architecture","consumer needs","mobile telephones","event management engine","business needs","pull approach","collaborative platform"],"prmu":["P","P","P","P","P","P","R","R","R"]} {"id":"148","title":"Axioms for branching time","abstract":"Logics of general branching time, or historical necessity, have long been studied but important axiomatization questions remain open. Here the difficulties of finding axioms for such logics are considered and ideas for solving some of the main open problems are presented. 
A new, more expressive logical account is also given to support Peirce's prohibition on truth values being attached to the contingent future","tok_text":"axiom for branch time \n logic of gener branch time , or histor necess , have long been studi but import axiomat question remain open . here the difficulti of find axiom for such logic are consid and idea for solv some of the main open problem are present . a new , more express logic account is also given to support peirc 's prohibit on truth valu be attach to the conting futur","ordered_present_kp":[0,10,338],"keyphrases":["axioms","branching time","truth values","temporal logic"],"prmu":["P","P","P","M"]} {"id":"1558","title":"Orthogonality of the Jacobi polynomials with negative integer parameters","abstract":"It is well known that the Jacobi polynomials P\/sub n\/\/sup ( alpha , beta )\/(x) are orthogonal with respect to a quasi-definite linear functional whenever alpha , beta , and alpha + beta + 1 are not negative integer numbers. Recently, Sobolev orthogonality for these polynomials has been obtained for alpha a negative integer and beta not a negative integer and also for the case alpha = beta negative integer numbers. In this paper, we give a Sobolev orthogonality for the Jacobi polynomials in the remainder cases","tok_text":"orthogon of the jacobi polynomi with neg integ paramet \n it is well known that the jacobi polynomi p \/ sub n\/\/sup ( alpha , beta ) \/(x ) are orthogon with respect to a quasi-definit linear function whenev alpha , beta , and alpha + beta + 1 are not neg integ number . recent , sobolev orthogon for these polynomi ha been obtain for alpha a neg integ and beta not a neg integ and also for the case alpha = beta neg integ number . in thi paper , we give a sobolev orthogon for the jacobi polynomi in the remaind case","ordered_present_kp":[0,168,277,16,37],"keyphrases":["orthogonality","Jacobi polynomials","negative integer parameters","quasi-definite linear functional","Sobolev orthogonality"],"prmu":["P","P","P","P","P"]} {"id":"1892","title":"Closed-loop persistent identification of linear systems with unmodeled dynamics and stochastic disturbances","abstract":"The essential issues of time complexity and probing signal selection are studied for persistent identification of linear time-invariant systems in a closed-loop setting. By establishing both upper and lower bounds on identification accuracy as functions of the length of observation, size of unmodeled dynamics, and stochastic disturbances, we demonstrate the inherent impact of unmodeled dynamics on identification accuracy, reduction of time complexity by stochastic averaging on disturbances, and probing capability of full rank periodic signals for closed-loop persistent identification. These findings indicate that the mixed formulation, in which deterministic uncertainty of system dynamics is blended with random disturbances, is beneficial to reduction of identification complexity","tok_text":"closed-loop persist identif of linear system with unmodel dynam and stochast disturb \n the essenti issu of time complex and probe signal select are studi for persist identif of linear time-invari system in a closed-loop set . 
by establish both upper and lower bound on identif accuraci as function of the length of observ , size of unmodel dynam , and stochast disturb , we demonstr the inher impact of unmodel dynam on identif accuraci , reduct of time complex by stochast averag on disturb , and probe capabl of full rank period signal for closed-loop persist identif . these find indic that the mix formul , in which determinist uncertainti of system dynam is blend with random disturb , is benefici to reduct of identif complex","ordered_present_kp":[0,50,177,254,269,514,68,107,124],"keyphrases":["closed-loop persistent identification","unmodeled dynamics","stochastic disturbances","time complexity","probing signal selection","linear time-invariant systems","lower bounds","identification accuracy","full rank periodic signals","upper bounds"],"prmu":["P","P","P","P","P","P","P","P","P","R"]} {"id":"1744","title":"Convergence of Toland's critical points for sequences of DC functions and application to the resolution of semilinear elliptic problems","abstract":"We prove that if a sequence (f\/sub n\/)\/sub n\/ of DC functions (difference of two convex functions) converges to a DC function f in some appropriate way and if u\/sub n\/ is a critical point of f\/sub n\/, in the sense described by Toland (1978, 1979), and is such that (u\/sub n\/)\/sub n\/ converges to u, then u is a critical point of f, still in Toland's sense. We also build a new algorithm which searches for this critical point u and then apply it in order to compute the solution of a semilinear elliptic equation","tok_text":"converg of toland 's critic point for sequenc of dc function and applic to the resolut of semilinear ellipt problem \n we prove that if a sequenc ( f \/ sub n\/)\/sub n\/ of dc function ( differ of two convex function ) converg to a dc function f in some appropri way and if u \/ sub n\/ is a critic point of f \/ sub n\/ , in the sens describ by toland ( 1978 , 1979 ) , and is such that ( u \/ sub n\/)\/sub n\/ converg to u , then u is a critic point of f , still in toland 's sens . we also build a new algorithm which search for thi critic point u and then appli it in order to comput the solut of a semilinear ellipt equat","ordered_present_kp":[90,592],"keyphrases":["semilinear elliptic problems","semilinear elliptic equation","critical point convergence","DC function sequences","convex function difference"],"prmu":["P","P","R","R","R"]} {"id":"1701","title":"Estimation of 3-D left ventricular deformation from medical images using biomechanical models","abstract":"The quantitative estimation of regional cardiac deformation from three-dimensional (3-D) image sequences has important clinical implications for the assessment of viability in the heart wall. We present here a generic methodology for estimating soft tissue deformation which integrates image-derived information with biomechanical models, and apply it to the problem of cardiac deformation estimation. The method is image modality independent. The images are segmented interactively and then initial correspondence is established using a shape-tracking approach. A dense motion field is then estimated using a transversely isotropic, linear-elastic model, which accounts for the muscle fiber directions in the left ventricle. The dense motion field is in turn used to calculate the deformation of the heart wall in terms of strain in cardiac specific directions. 
The strains obtained using this approach in open-chest dogs before and after coronary occlusion, exhibit a high correlation with strains produced in the same animals using implanted markers. Further, they show good agreement with previously published results in the literature. This proposed method provides quantitative regional 3-D estimates of heart deformation","tok_text":"estim of 3-d left ventricular deform from medic imag use biomechan model \n the quantit estim of region cardiac deform from three-dimension ( 3-d ) imag sequenc ha import clinic implic for the assess of viabil in the heart wall . we present here a gener methodolog for estim soft tissu deform which integr image-deriv inform with biomechan model , and appli it to the problem of cardiac deform estim . the method is imag modal independ . the imag are segment interact and then initi correspond is establish use a shape-track approach . a dens motion field is then estim use a transvers isotrop , linear-elast model , which account for the muscl fiber direct in the left ventricl . the dens motion field is in turn use to calcul the deform of the heart wall in term of strain in cardiac specif direct . the strain obtain use thi approach in open-chest dog befor and after coronari occlus , exhibit a high correl with strain produc in the same anim use implant marker . further , they show good agreement with previous publish result in the literatur . thi propos method provid quantit region 3-d estim of heart deform","ordered_present_kp":[57,96,79,777,839,638,247],"keyphrases":["biomechanical models","quantitative estimation","regional cardiac deformation","generic methodology","muscle fiber directions","cardiac specific directions","open-chest dogs","3-D left ventricular deformation estimation","medical diagnostic imaging","transversely isotropic linear-elastic model","interactively segmented images","3-D image sequences","nonrigid motion estimation","magnetic resonance imaging","left ventricular motion estimation"],"prmu":["P","P","P","P","P","P","P","R","M","R","R","R","M","M","R"]} {"id":"1779","title":"Maybe it's not too late to join the circus: books for midlife career management","abstract":"Midcareer librarians looking for career management help on the bookshelf face thousands of choices. This article reviews thirteen popular career self-help books. The reviewed books cover various aspects of career management and provide information on which might be best suited for particular goals, including career change, career tune-up, and personal and professional self-evaluation. The comments reflect issues of interest to midcareer professionals","tok_text":"mayb it 's not too late to join the circu : book for midlif career manag \n midcar librarian look for career manag help on the bookshelf face thousand of choic . thi articl review thirteen popular career self-help book . the review book cover variou aspect of career manag and provid inform on which might be best suit for particular goal , includ career chang , career tune-up , and person and profession self-evalu . 
the comment reflect issu of interest to midcar profession","ordered_present_kp":[53,82,196,347,394],"keyphrases":["midlife career management","librarians","career self-help books","career change","professional self-evaluation","personal self-evaluation","libraries"],"prmu":["P","P","P","P","P","R","U"]} {"id":"1817","title":"Nonlinear adaptive control via sliding-mode state and perturbation observer","abstract":"The paper presents a nonlinear adaptive controller (NAC) for single-input single-output feedback linearisable nonlinear systems. A sliding-mode state and perturbation observer is designed to estimate the system states and perturbation which includes the combined effect of system nonlinearities, uncertainties and external disturbances. The NAC design does not require the details of the nonlinear system model and full system states. It possesses an adaptation capability to deal with system parameter uncertainties, unmodelled system dynamics and external disturbances. The convergence of the observer and the stability analysis of the controller\/observer system are given. The proposed control scheme is applied for control of a synchronous generator, in comparison with a state-feedback linearising controller (FLC). Simulation study is carried out based on a single-generator infinite-bus power system to show the performance of the controller\/observer system","tok_text":"nonlinear adapt control via sliding-mod state and perturb observ \n the paper present a nonlinear adapt control ( nac ) for single-input single-output feedback linearis nonlinear system . a sliding-mod state and perturb observ is design to estim the system state and perturb which includ the combin effect of system nonlinear , uncertainti and extern disturb . the nac design doe not requir the detail of the nonlinear system model and full system state . it possess an adapt capabl to deal with system paramet uncertainti , unmodel system dynam and extern disturb . the converg of the observ and the stabil analysi of the control \/ observ system are given . the propos control scheme is appli for control of a synchron gener , in comparison with a state-feedback linearis control ( flc ) . simul studi is carri out base on a single-gener infinite-bu power system to show the perform of the control \/ observ system","ordered_present_kp":[0,50,113,502,524,343,570,748,782,825],"keyphrases":["nonlinear adaptive control","perturbation observer","NAC","external disturbances","parameter uncertainties","unmodelled system dynamics","convergence","state-feedback linearising controller","FLC","single-generator infinite-bus power system","sliding-mode state observer","SISO feedback linearisable nonlinear systems","synchronous generator control"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","M","R"]} {"id":"1485","title":"Telemedicine in the management of a cervical dislocation by a mobile neurosurgeon","abstract":"Neurosurgical teams, who are normally located in specialist centres, frequently use teleradiology to make a decision about the transfer of a patient to the nearest neurosurgical department. This decision depends on the type of pathology, the clinical status of the patient and the prognosis. If the transfer of the patient is not possible, for example because of an unstable clinical status, a mobile neurosurgical team may be used. We report a case which was dealt with in a remote French military airborne surgical unit, in the Republic of Chad. 
The unit, which provides health-care to the French military personnel stationed there, also provides free medical care for the local population. It conducts about 100 operations each month. The unit comprises two surgeons (an orthopaedic and a general surgeon), one anaesthetist, two anaesthetic nurses, one operating room nurse, two nurses, three paramedics and a secretary. The civilian patient presented with unstable cervical trauma. A mobile neurosurgeon operated on her, and used telemedicine before, during and after surgery","tok_text":"telemedicin in the manag of a cervic disloc by a mobil neurosurgeon \n neurosurg team , who are normal locat in specialist centr , frequent use teleradiolog to make a decis about the transfer of a patient to the nearest neurosurg depart . thi decis depend on the type of patholog , the clinic statu of the patient and the prognosi . if the transfer of the patient is not possibl , for exampl becaus of an unstabl clinic statu , a mobil neurosurg team may be use . we report a case which wa dealt with in a remot french militari airborn surgic unit , in the republ of chad . the unit , which provid health-car to the french militari personnel station there , also provid free medic care for the local popul . it conduct about 100 oper each month . the unit compris two surgeon ( an orthopaed and a gener surgeon ) , one anaesthetist , two anaesthet nurs , one oper room nurs , two nurs , three paramed and a secretari . the civilian patient present with unstabl cervic trauma . a mobil neurosurgeon oper on her , and use telemedicin befor , dure and after surgeri","ordered_present_kp":[49,143,0,505,556,615,922,952,1054],"keyphrases":["telemedicine","mobile neurosurgeon","teleradiology","remote French military airborne surgical unit","Republic of Chad","French military personnel","civilian patient","unstable cervical trauma","surgery","cervical dislocation management","health care"],"prmu":["P","P","P","P","P","P","P","P","P","R","M"]} {"id":"1852","title":"The design and performance evaluation of alternative XML storage strategies","abstract":"This paper studies five strategies for storing XML documents including one that leaves documents in the file system, three that use a relational database system, and one that uses an object manager. We implement and evaluate each approach using a number of XQuery queries. A number of interesting insights are gained from these experiments and a summary of the advantages and disadvantages of the approaches is presented","tok_text":"the design and perform evalu of altern xml storag strategi \n thi paper studi five strategi for store xml document includ one that leav document in the file system , three that use a relat databas system , and one that use an object manag . we implement and evalu each approach use a number of xqueri queri . a number of interest insight are gain from these experi and a summari of the advantag and disadvantag of the approach is present","ordered_present_kp":[151,182,225,15,293],"keyphrases":["performance evaluation","file system","relational database system","object manager","XQuery queries","XML document storage"],"prmu":["P","P","P","P","P","R"]} {"id":"1478","title":"The effects of work pace on within-participant and between-participant keying force, electromyography, and fatigue","abstract":"A laboratory study was conducted to determine the effects of work pace on typing force, electromyographic (EMG) activity, and subjective discomfort. 
We found that as participants typed faster, their typing force and finger flexor and extensor EMG activity increased linearly. There was also an increase in subjective discomfort, with a sharp threshold between participants' self-selected pace and their maximum typing speed. The results suggest that participants self-select a typing pace that maximizes typing speed and minimizes discomfort. The fastest typists did not produce significantly more finger flexor EMG activity but did produce proportionately less finger extensor EMG activity compared with the slower typists. We hypothesize that fast typists may use different muscle recruitment patterns that allow them to be more efficient than slower typists at striking the keys. In addition, faster typists do not experience more discomfort than slow typists. These findings show that the relative pace of typing is more important than actual typing speed with regard to discomfort and muscle activity. These results suggest that typists may benefit from skill training to increase maximum typing speed. Potential applications of this research includes skill training for typists","tok_text":"the effect of work pace on within-particip and between-particip key forc , electromyographi , and fatigu \n a laboratori studi wa conduct to determin the effect of work pace on type forc , electromyograph ( emg ) activ , and subject discomfort . we found that as particip type faster , their type forc and finger flexor and extensor emg activ increas linearli . there wa also an increas in subject discomfort , with a sharp threshold between particip ' self-select pace and their maximum type speed . the result suggest that particip self-select a type pace that maxim type speed and minim discomfort . the fastest typist did not produc significantli more finger flexor emg activ but did produc proportion less finger extensor emg activ compar with the slower typist . we hypothes that fast typist may use differ muscl recruit pattern that allow them to be more effici than slower typist at strike the key . in addit , faster typist do not experi more discomfort than slow typist . these find show that the rel pace of type is more import than actual type speed with regard to discomfort and muscl activ . these result suggest that typist may benefit from skill train to increas maximum type speed . potenti applic of thi research includ skill train for typist","ordered_present_kp":[332,224,305,487,232,614,812,64,1155],"keyphrases":["keying force","subjective discomfort","discomfort","finger flexor","EMG activity","typing speed","typists","muscle recruitment patterns","skill training","work pace effect"],"prmu":["P","P","P","P","P","P","P","P","P","R"]} {"id":"1784","title":"CyberEthics bibliography 2002: a select list of recent works","abstract":"Included in the 2002 annual bibliography update is a select list of recent books and conference proceedings that have been published since 2000. Also included is a select list of special issues of journals and periodicals that were recently published. For additional lists of recently published books and articles, see ibid. (June 2000, June 2001)","tok_text":"cybereth bibliographi 2002 : a select list of recent work \n includ in the 2002 annual bibliographi updat is a select list of recent book and confer proceed that have been publish sinc 2000 . also includ is a select list of special issu of journal and period that were recent publish . for addit list of recent publish book and articl , see ibid . 
( june 2000 , june 2001 )","ordered_present_kp":[0,74,125,141,223,239,251],"keyphrases":["CyberEthics bibliography","2002 annual bibliography","recent books","conference proceedings","special issues","journals","periodicals"],"prmu":["P","P","P","P","P","P","P"]} {"id":"175","title":"Diagnostic expert system using non-monotonic reasoning","abstract":"The objective of this work is to develop an expert system for cucumber disorder diagnosis using non-monotonic reasoning to handle the situation when the system cannot reach a conclusion. One reason for this situation is when the information is incomplete. Another reason is when the domain knowledge itself is incomplete. Another reason is when the information is inconsistent. This method maintains the truth of the system in case of changing a piece of information. The proposed method uses two types of non-monotonic reasoning namely: default reasoning and reasoning in the presence of inconsistent information to achieve its goal","tok_text":"diagnost expert system use non-monoton reason \n the object of thi work is to develop an expert system for cucumb disord diagnosi use non-monoton reason to handl the situat when the system can not reach a conclus . one reason for thi situat is when the inform is incomplet . anoth reason is when the domain knowledg itself is incomplet . anoth reason is when the inform is inconsist . thi method maintain the truth of the system in case of chang a piec of inform . the propos method use two type of non-monoton reason name : default reason and reason in the presenc of inconsist inform to achiev it goal","ordered_present_kp":[0,106,568,524],"keyphrases":["diagnostic expert system","cucumber disorder diagnosis","default reasoning","inconsistent information","nonmonotonic reasoning","incomplete information","truth maintenance","agriculture"],"prmu":["P","P","P","P","M","R","M","U"]} {"id":"1520","title":"Uniform hyperbolic polynomial B-spline curves","abstract":"This paper presents a new kind of uniform splines, called hyperbolic polynomial B-splines, generated over the space Omega =span{sinh t, cosh t, t\/sup k-3\/, t\/sup k-3\/, t\/sup k-4\/, ..., t 1} in which k is an arbitrary integer larger than or equal to 3. Hyperbolic polynomial B-splines share most of the properties of B-splines in polynomial space. We give subdivision formulae for this new kind of curve and then prove that they have variation diminishing properties and the control polygons of the subdivisions converge. Hyperbolic polynomial B-splines can handle freeform curves as well as remarkable curves such as the hyperbola and the catenary. The generation of tensor product surfaces using these new splines is straightforward. Examples of such tensor product surfaces: the saddle surface, the catenary cylinder, and a certain kind of ruled surface are given","tok_text":"uniform hyperbol polynomi b-spline curv \n thi paper present a new kind of uniform spline , call hyperbol polynomi b-spline , gener over the space omega = span{sinh t , cosh t , t \/ sup k-3\/ , t \/ sup k-3\/ , t \/ sup k-4\/ , ... , t 1 } in which k is an arbitrari integ larger than or equal to 3 . hyperbol polynomi b-spline share most of the properti of b-spline in polynomi space . we give subdivis formula for thi new kind of curv and then prove that they have variat diminish properti and the control polygon of the subdivis converg . hyperbol polynomi b-spline can handl freeform curv as well as remark curv such as the hyperbola and the catenari . 
the gener of tensor product surfac use these new spline is straightforward . exampl of such tensor product surfac : the saddl surfac , the catenari cylind , and a certain kind of rule surfac are given","ordered_present_kp":[0,251,389,494,389,573,622,640,771,790,830],"keyphrases":["uniform hyperbolic polynomial B-spline curves","arbitrary integer","subdivision formulae","subdivisions","control polygons","freeform curves","hyperbola","catenary","saddle surface","catenary cylinder","ruled surface","tensor product surface generation"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"1565","title":"On lag windows connected with Jacobi polynomials","abstract":"Lag windows whose corresponding spectral windows are Jacobi polynomials or sums of Jacobi polynomials are introduced. The bias and variance of their spectral density estimators are investigated and their window bandwidth and characteristic exponent are determined","tok_text":"on lag window connect with jacobi polynomi \n lag window whose correspond spectral window are jacobi polynomi or sum of jacobi polynomi are introduc . the bia and varianc of their spectral densiti estim are investig and their window bandwidth and characterist expon are determin","ordered_present_kp":[3,27,73,179,225,246],"keyphrases":["lag windows","Jacobi polynomials","spectral windows","spectral density estimators","window bandwidth","characteristic exponent"],"prmu":["P","P","P","P","P","P"]} {"id":"1699","title":"Time-domain reconstruction for thermoacoustic tomography in a spherical geometry","abstract":"Reconstruction-based microwave-induced thermoacoustic tomography in a spherical configuration is presented. Thermoacoustic waves from biological tissue samples excited by microwave pulses are measured by a wide-band unfocused ultrasonic transducer, which is set on a spherical surface enclosing the sample. Sufficient data are acquired from different directions to reconstruct the microwave absorption distribution. An exact reconstruction solution is derived and approximated to a modified backprojection algorithm. Experiments demonstrate that the reconstructed images agree well with the original samples. The spatial resolution of the system reaches 0.5 mm","tok_text":"time-domain reconstruct for thermoacoust tomographi in a spheric geometri \n reconstruction-bas microwave-induc thermoacoust tomographi in a spheric configur is present . thermoacoust wave from biolog tissu sampl excit by microwav puls are measur by a wide-band unfocus ultrason transduc , which is set on a spheric surfac enclos the sampl . suffici data are acquir from differ direct to reconstruct the microwav absorpt distribut . an exact reconstruct solut is deriv and approxim to a modifi backproject algorithm . experi demonstr that the reconstruct imag agre well with the origin sampl . 
the spatial resolut of the system reach 0.5 mm","ordered_present_kp":[28,0,486,435,193,251,542,57,633],"keyphrases":["time-domain reconstruction","thermoacoustic tomography","spherical geometry","biological tissue samples","wide-band unfocused ultrasonic transducer","exact reconstruction solution","modified backprojection algorithm","reconstructed images","0.5 mm","medical diagnostic imaging","spherical surface enclosing sample","system spatial resolution"],"prmu":["P","P","P","P","P","P","P","P","P","M","R","R"]} {"id":"1621","title":"Current-mode fully-programmable piece-wise-linear block for neuro-fuzzy applications","abstract":"A new method to implement an arbitrary piece-wise-linear characteristic in current mode is presented. Each of the breaking points and each slope is separately controllable. As an example a block that implements an N-shaped piece-wise-linearity has been designed. The N-shaped block operates in the subthreshold region and uses only ten transistors. These characteristics make it especially suitable for large arrays of neuro-fuzzy systems where the number of transistors and power consumption per cell is an important concern. A prototype of this block has been fabricated in a 0.35 mu m CMOS technology. The functionality and programmability of this circuit has been verified through experimental results","tok_text":"current-mod fully-programm piece-wise-linear block for neuro-fuzzi applic \n a new method to implement an arbitrari piece-wise-linear characterist in current mode is present . each of the break point and each slope is separ control . as an exampl a block that implement an n-shape piece-wise-linear ha been design . the n-shape block oper in the subthreshold region and use onli ten transistor . these characterist make it especi suitabl for larg array of neuro-fuzzi system where the number of transistor and power consumpt per cell is an import concern . a prototyp of thi block ha been fabric in a 0.35 mu m cmo technolog . the function and programm of thi circuit ha been verifi through experiment result","ordered_present_kp":[105,149,187,217,272,345,455,509,610],"keyphrases":["arbitrary piece-wise-linear characteristic","current mode","breaking points","separately controllable","N-shaped piece-wise-linearity","subthreshold region","neuro-fuzzy systems","power consumption","CMOS","VLSI","0.35 micron"],"prmu":["P","P","P","P","P","P","P","P","P","U","M"]} {"id":"1664","title":"Disappointment reigns [retail IT]","abstract":"CPFR remains at the forefront of CIOs' minds, but a number of barriers, such as secretive corporate cultures and spotty data integrity, stand between retail organizations and true supply-chain collaboration. CIOs remain vexed at these obstacles, as was evidenced at a roundtable discussion by retail and consumer-goods IT leaders at the Retail Systems 2002 conference, held in Chicago by the consultancy MoonWatch Media Inc., Newton Upper Falls, Mass. Other annoyances discussed by retail CIOs include poorly designed business processes and retail's poor image with the IT talent emerging from school into the job market","tok_text":"disappoint reign [ retail it ] \n cpfr remain at the forefront of cio ' mind , but a number of barrier , such as secret corpor cultur and spotti data integr , stand between retail organ and true supply-chain collabor . cio remain vex at these obstacl , as wa evidenc at a roundtabl discuss by retail and consumer-good it leader at the retail system 2002 confer , held in chicago by the consult moonwatch media inc. 
, newton upper fall , mass. other annoy discuss by retail cio includ poorli design busi process and retail 's poor imag with the it talent emerg from school into the job market","ordered_present_kp":[19,393,334,65],"keyphrases":["retail","CIOs","Retail Systems 2002 conference","MoonWatch Media","collaborative planning forecasting and replenishment"],"prmu":["P","P","P","P","M"]} {"id":"1598","title":"A decision support model for selecting product\/service benefit positionings","abstract":"The art (and science) of successful product\/service positioning generally hinges on the firm's ability to select a set of attractively priced consumer benefits that are: valued by the buyer, distinctive in one or more respects, believable, deliverable, and sustainable (under actual or potential competitive abilities to imitate, neutralize, or overcome) in the target markets that the firm selects. For many years, the ubiquitous quadrant chart has been used to provide a simple graph of product\/service benefits (usually called product\/service attributes) described in terms of consumers' perceptions of the importance of attributes (to brand\/supplier choice) and the performance of competing firms on these attributes. This paper describes a model that extends the quadrant chart concept to a decision support system that optimizes a firm's market share for a specified product\/service. In particular, we describe a decision support model that utilizes relatively simple marketing research data on consumers' judged benefit importances, and supplier performances on these benefits to develop message components for specified target buyers. A case study is used to illustrate the model. The study deals with developing advertising message components for a relatively new entrant in the US air shipping market. We also discuss, more briefly, management reactions to application of the model to date, and areas for further research and model extension","tok_text":"a decis support model for select product \/ servic benefit posit \n the art ( and scienc ) of success product \/ servic posit gener hing on the firm 's abil to select a set of attract price consum benefit that are : valu by the buyer , distinct in one or more respect , believ , deliver , and sustain ( under actual or potenti competit abil to imit , neutral , or overcom ) in the target market that the firm select . for mani year , the ubiquit quadrant chart ha been use to provid a simpl graph of product \/ servic benefit ( usual call product \/ servic attribut ) describ in term of consum ' percept of the import of attribut ( to brand \/ supplier choic ) and the perform of compet firm on these attribut . thi paper describ a model that extend the quadrant chart concept to a decis support system that optim a firm 's market share for a specifi product \/ servic . in particular , we describ a decis support model that util rel simpl market research data on consum ' judg benefit import , and supplier perform on these benefit to develop messag compon for specifi target buyer . a case studi is use to illustr the model . the studi deal with develop advertis messag compon for a rel new entrant in the us air ship market . 
we also discuss , more briefli , manag reaction to applic of the model to date , and area for further research and model extens","ordered_present_kp":[33,2,173,443,482,535,630,933,1037,1149,1201,1255,1149],"keyphrases":["decision support model","product\/service benefit positionings","attractively priced consumer benefits","quadrant chart","simple graph","product\/service attributes","brand\/supplier choice","marketing research data","message components","advertising message components","advertising","US air shipping market","management reactions","market share optimization","consumer judged benefit importances","greedy heuristic","optimal message design"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","U","M"]} {"id":"188","title":"Sampled-data implementation of a gain scheduled controller","abstract":"A continuous-time gain-scheduled controller must be transformed to a corresponding discrete-time controller for sampled-data implementation. We show that certain linearization properties of a continuous-time gain scheduled controller are inherited by its sampled-data implementation. We also show that a similar relationship exists for multi-rate gain scheduled controllers arising in flight control applications","tok_text":"sampled-data implement of a gain schedul control \n a continuous-tim gain-schedul control must be transform to a correspond discrete-tim control for sampled-data implement . we show that certain linear properti of a continuous-tim gain schedul control are inherit by it sampled-data implement . we also show that a similar relationship exist for multi-r gain schedul control aris in flight control applic","ordered_present_kp":[28,0,53,123,194,345,382],"keyphrases":["sampled-data implementation","gain scheduled controller","continuous-time gain-scheduled controller","discrete-time controller","linearization properties","multi-rate gain scheduled controllers","flight control applications"],"prmu":["P","P","P","P","P","P","P"]} {"id":"1786","title":"A humane tool for aiding computer science advisors, computer science students, and parents","abstract":"Over the past few years, the computer science department faculty at Baylor has observed that some students who perform adequately during the freshman and sophomore years have substantial difficulty during the junior and senior years of study. Baylor University is an institution committed to being caring of its students. The objective for this study grew out of these two realities. There are three objectives of this research. One objective is to identify students, no later than the sophomore year, who are less likely to succeed as computer science majors. A second objective is to accomplish this identification by using data from seniors majoring in computer science. A third objective is to begin to use this information at the end of their sophomore year when meeting with a computer science faculty advisor. A regression study is conducted on the data from all students classified as seniors, majoring in computer science in May 2001, showing grades in six freshman and sophomore courses, and showing grades for at least five junior or senior level computer science courses. 
These students and their course performance data constituted the study sample","tok_text":"a human tool for aid comput scienc advisor , comput scienc student , and parent \n over the past few year , the comput scienc depart faculti at baylor ha observ that some student who perform adequ dure the freshman and sophomor year have substanti difficulti dure the junior and senior year of studi . baylor univers is an institut commit to be care of it student . the object for thi studi grew out of these two realiti . there are three object of thi research . one object is to identifi student , no later than the sophomor year , who are less like to succeed as comput scienc major . a second object is to accomplish thi identif by use data from senior major in comput scienc . a third object is to begin to use thi inform at the end of their sophomor year when meet with a comput scienc faculti advisor . a regress studi is conduct on the data from all student classifi as senior , major in comput scienc in may 2001 , show grade in six freshman and sophomor cours , and show grade for at least five junior or senior level comput scienc cours . these student and their cours perform data constitut the studi sampl","ordered_present_kp":[2,21,45,73,301,218,565,811,1073],"keyphrases":["humane tool","computer science advisors","computer science students","parents","sophomore year","Baylor University","computer science majors","regression study","course performance data","student care"],"prmu":["P","P","P","P","P","P","P","P","P","R"]} {"id":"1815","title":"Control of integral processes with dead-time. 1. Disturbance observer-based 2 DOF control scheme","abstract":"A disturbance observer-based control scheme (a version of 2 DOF internal model control) which is very effective in controlling integral processes with dead time is presented. The controller can be designed to reject ramp disturbances as well as step disturbances and even arbitrary disturbances. When the plant model is available only two parameters are left to tune. One is the time constant of the set-point response and the other is the time constant of the disturbance response. The latter is tuned according to the compromise between disturbance response and robustness. This control scheme has a simple, clear, easy-to-design, easy-to-implement structure and good performance. It is compared to the best results (so far) using some simulation examples","tok_text":"control of integr process with dead-tim . 1 . disturb observer-bas 2 dof control scheme \n a disturb observer-bas control scheme ( a version of 2 dof intern model control ) which is veri effect in control integr process with dead time is present . the control can be design to reject ramp disturb as well as step disturb and even arbitrari disturb . when the plant model is avail onli two paramet are left to tune . one is the time constant of the set-point respons and the other is the time constant of the disturb respons . the latter is tune accord to the compromis between disturb respons and robust . thi control scheme ha a simpl , clear , easy-to-design , easy-to-impl structur and good perform . 
it is compar to the best result ( so far ) use some simul exampl","ordered_present_kp":[11,31,46,143,447,426,507,596],"keyphrases":["integral processes","dead-time","disturbance observer-based 2 DOF control scheme","2 DOF internal model control","time constant","set-point response","disturbance response","robustness","ramp disturbances rejection"],"prmu":["P","P","P","P","P","P","P","P","R"]} {"id":"1487","title":"Assessment of prehospital chest pain using telecardiology","abstract":"Two hundred general practitioners were equipped with a portable electrocardiograph which could transmit a 12-lead electrocardiogram (ECG) via a telephone line. A cardiologist was available 24 h a day for an interactive teleconsultation. In a 13 month period there were 5073 calls to the telecardiology service and 952 subjects with chest pain were identified. The telecardiology service allowed the general practitioners to manage 700 cases (74%) themselves; further diagnostic tests were requested for 162 patients (17%) and 83 patients (9%) were sent to the hospital emergency department. In the last group a cardiological diagnosis was confirmed in 60 patients and refuted in 23. Seven patients in whom the telecardiology service failed to detect a cardiac problem were hospitalized in the subsequent 48 h. The telecardiology service showed a sensitivity of 97.4%, a specificity of 89.5% and a diagnostic accuracy of 86.9% for chest pain. Telemedicine could be a useful tool in the diagnosis of chest pain in primary care","tok_text":"assess of prehospit chest pain use telecardiolog \n two hundr gener practition were equip with a portabl electrocardiograph which could transmit a 12-lead electrocardiogram ( ecg ) via a telephon line . a cardiologist wa avail 24 h a day for an interact teleconsult . in a 13 month period there were 5073 call to the telecardiolog servic and 952 subject with chest pain were identifi . the telecardiolog servic allow the gener practition to manag 700 case ( 74 % ) themselv ; further diagnost test were request for 162 patient ( 17 % ) and 83 patient ( 9 % ) were sent to the hospit emerg depart . in the last group a cardiolog diagnosi wa confirm in 60 patient and refut in 23 . seven patient in whom the telecardiolog servic fail to detect a cardiac problem were hospit in the subsequ 48 h. the telecardiolog servic show a sensit of 97.4 % , a specif of 89.5 % and a diagnost accuraci of 86.9 % for chest pain . telemedicin could be a use tool in the diagnosi of chest pain in primari care","ordered_present_kp":[35,61,96,186,244,518,483,575,824,845,868,978,272],"keyphrases":["telecardiology","general practitioners","portable electrocardiograph","telephone line","interactive teleconsultation","13 month","diagnostic tests","patients","hospital emergency department","sensitivity","specificity","diagnostic accuracy","primary care","prehospital chest pain assessment","electrocardiogram transmission"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","R","M"]} {"id":"1850","title":"The n-tier hub technology","abstract":"During 2001, the Enterprise Engineering Laboratory at George Mason University was contracted by the Boeing Company to develop an eHub capability for aerospace suppliers in Taiwan. In a laboratory environment, the core technology was designed, developed, and tested, and now a large first-tier aerospace supplier in Taiwan is commercializing the technology. 
The project objective was to provide layered network and application services for transporting XML-based business transaction flows across multi-tier, heterogeneous data processing environments. This paper documents the business scenario, the eHub application, and the network transport mechanisms that were used to build the n-tier hub. In contrast to most eHubs, this solution takes the point of view of suppliers, pushing data in accordance with supplier requirements; hence, enhancing the probability of supplier adoption. The unique contribution of this project is the development of an eHub that meets the needs of small and medium enterprises (SMEs) and first-tier suppliers","tok_text":"the n-tier hub technolog \n dure 2001 , the enterpris engin laboratori at georg mason univers wa contract by the boe compani to develop an ehub capabl for aerospac supplier in taiwan . in a laboratori environ , the core technolog wa design , develop , and test , and now a larg first-tier aerospac supplier in taiwan is commerci the technolog . the project object wa to provid layer network and applic servic for transport xml-base busi transact flow across multi-ti , heterogen data process environ . thi paper document the busi scenario , the ehub applic , and the network transport mechan that were use to build the n-tier hub . in contrast to most ehub , thi solut take the point of view of supplier , push data in accord with supplier requir ; henc , enhanc the probabl of supplier adopt . the uniqu contribut of thi project is the develop of an ehub that meet the need of small and medium enterpris ( sme ) and first-tier supplier","ordered_present_kp":[4,154,112,175,422,524,566,777,877,916],"keyphrases":["n-tier hub technology","Boeing Company","aerospace suppliers","Taiwan","XML-based business transaction flows","business scenario","network transport mechanisms","supplier adoption","small and medium enterprises","first-tier suppliers","multi-tier heterogeneous data processing environments"],"prmu":["P","P","P","P","P","P","P","P","P","P","R"]} {"id":"1908","title":"Explicit solutions for transcendental equations","abstract":"A simple method to formulate an explicit expression for the roots of any analytic transcendental function is presented. The method is based on Cauchy's integral theorem and uses only basic concepts of complex integration. A convenient method for numerically evaluating the exact expression is presented. The application of both the formulation and evaluation of the exact expression is illustrated for several classical root finding problems","tok_text":"explicit solut for transcendent equat \n a simpl method to formul an explicit express for the root of ani analyt transcendent function is present . the method is base on cauchi 's integr theorem and use onli basic concept of complex integr . a conveni method for numer evalu the exact express is present . the applic of both the formul and evalu of the exact express is illustr for sever classic root find problem","ordered_present_kp":[19,224,395],"keyphrases":["transcendental equations","complex integration","root finding","analytic functions","Cauchy integral theorem","singularity","polynomial","Fourier transform"],"prmu":["P","P","P","R","R","U","U","U"]} {"id":"1623","title":"Transmission of real-time video over IP differentiated services","abstract":"Multimedia applications require high bandwidth and guaranteed quality of service (QoS). 
The current Internet, which provides 'best effort' services, cannot meet the stringent QoS requirements for delivering MPEG videos. It is proposed that MPEG frames are transported through various service models of DiffServ. Performance analysis and simulation results show that the proposed approach can not only guarantee QoS but can also achieve high bandwidth utilisation","tok_text":"transmiss of real-tim video over ip differenti servic \n multimedia applic requir high bandwidth and guarante qualiti of servic ( qo ) . the current internet , which provid ' best effort ' servic , can not meet the stringent qo requir for deliv mpeg video . it is propos that mpeg frame are transport through variou servic model of diffserv . perform analysi and simul result show that the propos approach can not onli guarante qo but can also achiev high bandwidth utilis","ordered_present_kp":[33,56,109,148,244,331,450],"keyphrases":["IP differentiated services","multimedia applications","quality of service","Internet","MPEG video","DiffServ","high bandwidth utilisation","real-time video transmission","QoS guarantees"],"prmu":["P","P","P","P","P","P","P","R","R"]} {"id":"1666","title":"Airline base schedule optimisation by flight network annealing","abstract":"A system for rigorous airline base schedule optimisation is described. The architecture of the system reflects the underlying problem structure. The architecture is hierarchical consisting of a master problem for logical aircraft schedule optimisation and a sub-problem for schedule evaluation. The sub-problem is made up of a number of component sub-problems including connection generation, passenger choice modelling, passenger traffic allocation by simulation and revenue and cost determination. Schedule optimisation is carried out by means of simulated annealing of flight networks. The operators for the simulated annealing process are feasibility preserving and form a complete set of operators","tok_text":"airlin base schedul optimis by flight network anneal \n a system for rigor airlin base schedul optimis is describ . the architectur of the system reflect the underli problem structur . the architectur is hierarch consist of a master problem for logic aircraft schedul optimis and a sub-problem for schedul evalu . the sub-problem is made up of a number of compon sub-problem includ connect gener , passeng choic model , passeng traffic alloc by simul and revenu and cost determin . schedul optimis is carri out by mean of simul anneal of flight network . the oper for the simul anneal process are feasibl preserv and form a complet set of oper","ordered_present_kp":[0,31,225,244,297,381,397,419,465,521,558],"keyphrases":["airline base schedule optimisation","flight network annealing","master problem","logical aircraft schedule optimisation","schedule evaluation","connection generation","passenger choice modelling","passenger traffic allocation","cost determination","simulated annealing","operators","system architecture","hierarchical architecture","time complexity"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R","R","U"]} {"id":"177","title":"Turning telecommunications call details to churn prediction: a data mining approach","abstract":"As deregulation, new technologies, and new competitors open up the mobile telecommunications industry, churn prediction and management has become of great concern to mobile service providers. 
A mobile service provider wishing to retain its subscribers needs to be able to predict which of them may be at-risk of changing services and will make those subscribers the focus of customer retention efforts. In response to the limitations of existing churn-prediction systems and the unavailability of customer demographics in the mobile telecommunications provider investigated, we propose, design, and experimentally evaluate a churn-prediction technique that predicts churning from subscriber contractual information and call pattern changes extracted from call details. This proposed technique is capable of identifying potential churners at the contract level for a specific prediction time-period. In addition, the proposed technique incorporates the multi-classifier class-combiner approach to address the challenge of a highly skewed class distribution between churners and non-churners. The empirical evaluation results suggest that the proposed call-behavior-based churn-prediction technique exhibits satisfactory predictive effectiveness when more recent call details are employed for the churn prediction model construction. Furthermore, the proposed technique is able to demonstrate satisfactory or reasonable predictive power within the one-month interval between model construction and churn prediction. Using a previous demographics-based churn-prediction system as a reference, the lift factors attained by our proposed technique appear largely satisfactory","tok_text":"turn telecommun call detail to churn predict : a data mine approach \n as deregul , new technolog , and new competitor open up the mobil telecommun industri , churn predict and manag ha becom of great concern to mobil servic provid . a mobil servic provid wish to retain it subscrib need to be abl to predict which of them may be at-risk of chang servic and will make those subscrib the focu of custom retent effort . in respons to the limit of exist churn-predict system and the unavail of custom demograph in the mobil telecommun provid investig , we propos , design , and experiment evalu a churn-predict techniqu that predict churn from subscrib contractu inform and call pattern chang extract from call detail . thi propos techniqu is capabl of identifi potenti churner at the contract level for a specif predict time-period . in addit , the propos techniqu incorpor the multi-classifi class-combin approach to address the challeng of a highli skew class distribut between churner and non-churn . the empir evalu result suggest that the propos call-behavior-bas churn-predict techniqu exhibit satisfactori predict effect when more recent call detail are employ for the churn predict model construct . furthermor , the propos techniqu is abl to demonstr satisfactori or reason predict power within the one-month interv between model construct and churn predict . 
use a previou demographics-bas churn-predict system as a refer , the lift factor attain by our propos techniqu appear larg satisfactori","ordered_present_kp":[5,130,211,73,394,490,640,670,875,948,1435],"keyphrases":["telecommunications call details","deregulation","mobile telecommunications industry","mobile service providers","customer retention efforts","customer demographics","subscriber contractual information","call pattern changes","multi-classifier class-combiner approach","skewed class distribution","lift factors","decision tree induction"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","U"]} {"id":"1522","title":"Waltzing through Port 80 [Web security]","abstract":"Web services follow the trusting model of the Internet, but allow ever more powerful payloads to travel between businesses and consumers. Before you leap online, the author advises to scan the security concerns and the available fixes. He looks at how we define and store Web services and incorporate them into business processes","tok_text":"waltz through port 80 [ web secur ] \n web servic follow the trust model of the internet , but allow ever more power payload to travel between busi and consum . befor you leap onlin , the author advis to scan the secur concern and the avail fix . he look at how we defin and store web servic and incorpor them into busi process","ordered_present_kp":[38,79,60,314],"keyphrases":["Web services","trust","Internet","business processes","data security"],"prmu":["P","P","P","P","M"]} {"id":"1567","title":"Asymptotic expansions for the zeros of certain special functions","abstract":"We derive asymptotic expansions for the zeros of the cosine-integral Ci(x) and the Struve function H\/sub 0\/(x), and extend the available formulae for the zeros of Kelvin functions. Numerical evidence is provided to illustrate the accuracy of the expansions","tok_text":"asymptot expans for the zero of certain special function \n we deriv asymptot expans for the zero of the cosine-integr ci(x ) and the struve function h \/ sub 0\/(x ) , and extend the avail formula for the zero of kelvin function . numer evid is provid to illustr the accuraci of the expans","ordered_present_kp":[0,24,104,133,211,265],"keyphrases":["asymptotic expansions","zeros","cosine-integral","Struve function","Kelvin functions","accuracy"],"prmu":["P","P","P","P","P","P"]} {"id":"18","title":"Differential and integral calculus on discrete time series data","abstract":"It has been found that discontinuity plays a crucial role in natural evolutions (Lin 1998). In this presentation, we will generalize the idea of integration and differentiation, we developed in calculus, to the study of time series in the hope that the problem of outliers and discontinuities can be resolved more successfully than simply deleting the outliers and avoiding discontinuities from the overall data analysis. In general, appearances of outliers tend to mean existence of discontinuities, explosive growth or decline in the evolution. At the same time, our approach can be employed to partially overcome the problem of not having enough data values in any available time series. At the end, we will look at some real-life problems of prediction in order to see the power of this new approach","tok_text":"differenti and integr calculu on discret time seri data \n it ha been found that discontinu play a crucial role in natur evolut ( lin 1998 ) . 
in thi present , we will gener the idea of integr and differenti , we develop in calculu , to the studi of time seri in the hope that the problem of outlier and discontinu can be resolv more success than simpli delet the outlier and avoid discontinu from the overal data analysi . in gener , appear of outlier tend to mean exist of discontinu , explos growth or declin in the evolut . at the same time , our approach can be employ to partial overcom the problem of not have enough data valu in ani avail time seri . at the end , we will look at some real-lif problem of predict in order to see the power of thi new approach","ordered_present_kp":[114,15,0,41,291,712],"keyphrases":["differentiation","integration","time series","natural evolutions","outliers","prediction"],"prmu":["P","P","P","P","P","P"]} {"id":"1828","title":"Exploiting structure in quantified formulas","abstract":"We study the computational problem \"find the value of the quantified formula obtained by quantifying the variables in a sum of terms.\" The \"sum\" can be based on any commutative monoid, the \"quantifiers\" need only satisfy two simple conditions, and the variables can have any finite domain. This problem is a generalization of the problem \"given a sum-of-products of terms, find the value of the sum\" studied by R.E. Stearns and H.B. Hunt III (1996). A data structure called a \"structure tree\" is defined which displays information about \"subproblems\" that can be solved independently during the process of evaluating the formula. Some formulas have \"good\" structure trees which enable certain generic algorithms to evaluate the formulas in significantly less time than by brute force evaluation. By \"generic algorithm,\" we mean an algorithm constructed from uninterpreted function symbols, quantifier symbols, and monoid operations. The algebraic nature of the model facilitates a formal treatment of \"local reductions\" based on the \"local replacement\" of terms. Such local reductions \"preserve formula structure\" in the sense that structure trees with nice properties transform into structure trees with similar properties. These local reductions can also be used to transform hierarchical specified problems with useful structure into hierarchically specified problems having similar structure","tok_text":"exploit structur in quantifi formula \n we studi the comput problem \" find the valu of the quantifi formula obtain by quantifi the variabl in a sum of term . \" the \" sum \" can be base on ani commut monoid , the \" quantifi \" need onli satisfi two simpl condit , and the variabl can have ani finit domain . thi problem is a gener of the problem \" given a sum-of-product of term , find the valu of the sum \" studi by r.e. stearn and h.b. hunt iii ( 1996 ) . a data structur call a \" structur tree \" is defin which display inform about \" subproblem \" that can be solv independ dure the process of evalu the formula . some formula have \" good \" structur tree which enabl certain gener algorithm to evalu the formula in significantli less time than by brute forc evalu . by \" gener algorithm , \" we mean an algorithm construct from uninterpret function symbol , quantifi symbol , and monoid oper . the algebra natur of the model facilit a formal treatment of \" local reduct \" base on the \" local replac \" of term . such local reduct \" preserv formula structur \" in the sens that structur tree with nice properti transform into structur tree with similar properti . 
these local reduct can also be use to transform hierarch specifi problem with use structur into hierarch specifi problem have similar structur","ordered_present_kp":[20,190,456,479,673,837,855,877,1206],"keyphrases":["quantified formulas","commutative monoid","data structure","structure tree","generic algorithms","function symbols","quantifier symbols","monoid operations","hierarchically specified problems","structure exploitation","satisfiability problems","constraint satisfaction problems","dynamic programming","computational complexity"],"prmu":["P","P","P","P","P","P","P","P","P","R","R","M","U","M"]} {"id":"1746","title":"The exact solution of coupled thermoelectroelastic behavior of piezoelectric laminates","abstract":"Exact solutions for static analysis of thermoelectroelastic laminated plates are presented. In this analysis, a new concise procedure for the analytical solution of composite laminated plates with piezoelectric layers is developed. A simple eigenvalue formula in real number form is directly developed from the basic coupled piezoelectric differential equations and the difficulty of treating imaginary eigenvalues is avoided. The solution is defined in the trigonometric series and can be applied to thin and thick plates. Numerical studies are conducted on a five-layer piezoelectric plate and the complexity of stresses and deformations under combined loading is illustrated. The results could be used as a benchmark for assessing any numerical solution by approximate approaches such as the finite element method while also providing useful physical insight into the behavior of piezoelectric plates in a thermal environment","tok_text":"the exact solut of coupl thermoelectroelast behavior of piezoelectr lamin \n exact solut for static analysi of thermoelectroelast lamin plate are present . in thi analysi , a new concis procedur for the analyt solut of composit lamin plate with piezoelectr layer is develop . a simpl eigenvalu formula in real number form is directli develop from the basic coupl piezoelectr differenti equat and the difficulti of treat imaginari eigenvalu is avoid . the solut is defin in the trigonometr seri and can be appli to thin and thick plate . numer studi are conduct on a five-lay piezoelectr plate and the complex of stress and deform under combin load is illustr . the result could be use as a benchmark for assess ani numer solut by approxim approach such as the finit element method while also provid use physic insight into the behavior of piezoelectr plate in a thermal environ","ordered_present_kp":[4,19,56,110,202,218,244,283,304,356,476,522,565,536,611,622,635,759,861],"keyphrases":["exact solution","coupled thermoelectroelastic behavior","piezoelectric laminates","thermoelectroelastic laminated plates","analytical solution","composite laminated plates","piezoelectric layers","eigenvalue formula","real number form","coupled piezoelectric differential equations","trigonometric series","thick plates","numerical study","five-layer piezoelectric plate","stresses","deformations","combined loading","finite element method","thermal environment","thin plates"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"1703","title":"Statistical analysis of nonlinearly reconstructed near-infrared tomographic images. II. Experimental interpretation","abstract":"For pt. I see ibid., vol. 21, no. 7, p. 755-63 (2002). 
Image error analysis of a diffuse near-infrared tomography (NIR) system has been carried out on simulated data using a statistical approach described in pt. I of this paper (Pogue et al., 2002). The methodology is used here with experimental data acquired on phantoms with a prototype imaging system intended for characterizing breast tissue. Results show that imaging performance is not limited by random measurement error, but rather by calibration issues. The image error over the entire field of view is generally not minimized when an accurate homogeneous estimate of the phantom properties is available; however, local image error over a target region of interest (ROI) is reduced. The image reconstruction process which includes a Levenberg-Marquardt style regularization provides good minimization of the objective function, yet its reduction is not always correlated with an overall image error decrease. Minimization of the bias in an ROI which contains localized changes in the optical properties can be achieved through five to nine iterations of the algorithm. Precalibration of the algorithm through statistical evaluation of phantom studies may provide a better measure of the image accuracy than that implied by minimization of the standard objective function","tok_text":"statist analysi of nonlinearli reconstruct near-infrar tomograph imag . ii . experiment interpret \n for pt . i see ibid . , vol . 21 , no . 7 , p. 755 - 63 ( 2002 ) . imag error analysi of a diffus near-infrar tomographi ( nir ) system ha been carri out on simul data use a statist approach describ in pt . i of thi paper ( pogu et al . , 2002 ) . the methodolog is use here with experiment data acquir on phantom with a prototyp imag system intend for character breast tissu . result show that imag perform is not limit by random measur error , but rather by calibr issu . the imag error over the entir field of view is gener not minim when an accur homogen estim of the phantom properti is avail ; howev , local imag error over a target region of interest ( roi ) is reduc . the imag reconstruct process which includ a levenberg-marquardt style regular provid good minim of the object function , yet it reduct is not alway correl with an overal imag error decreas . minim of the bia in an roi which contain local chang in the optic properti can be achiev through five to nine iter of the algorithm . precalibr of the algorithm through statist evalu of phantom studi may provid a better measur of the imag accuraci than that impli by minim of the standard object function","ordered_present_kp":[19,167,524,732,645,672,821],"keyphrases":["nonlinearly reconstructed near-infrared tomographic images","image error","random measurement error","accurate homogeneous estimate","phantom properties","target region of interest","Levenberg-Marquardt style regularization","medical diagnostic imaging","algorithm precalibration","hemoglobin","bias minimization","algorithm iterations","objective function minimization"],"prmu":["P","P","P","P","P","P","P","M","R","U","R","R","R"]} {"id":"1890","title":"Robustness of trajectories with finite time extent","abstract":"The problem of estimating perturbation bounds of finite trajectories is considered. The trajectory is assumed to be generated by a linear system with uncertainty characterized in terms of integral quadratic constraints. It is shown that such perturbation bounds can be obtained as the solution to a nonconvex quadratic optimization problem, which can be addressed using Lagrange relaxation. 
The result can be used in robustness analysis of hybrid systems and switched dynamical systems","tok_text":"robust of trajectori with finit time extent \n the problem of estim perturb bound of finit trajectori is consid . the trajectori is assum to be gener by a linear system with uncertainti character in term of integr quadrat constraint . it is shown that such perturb bound can be obtain as the solut to a nonconvex quadrat optim problem , which can be address use lagrang relax . the result can be use in robust analysi of hybrid system and switch dynam system","ordered_present_kp":[26,67,154,173,206,302,361,402,420,438],"keyphrases":["finite time extent","perturbation bounds","linear system","uncertainty","integral quadratic constraints","nonconvex quadratic optimization problem","Lagrange relaxation","robustness analysis","hybrid systems","switched dynamical systems","trajectories robustness"],"prmu":["P","P","P","P","P","P","P","P","P","P","R"]} {"id":"1911","title":"Pulmonary perfusion patterns and pulmonary arterial pressure","abstract":"Uses artificial intelligence methods to determine whether quantitative parameters describing the perfusion image can be synthesized to make a reasonable estimate of the pulmonary arterial (PA) pressure measured at angiography. Radionuclide perfusion images were obtained in 120 patients with normal chest radiographs who also underwent angiographic PA pressure measurement within 3 days of the radionuclide study. An artificial neural network (ANN) was constructed from several image parameters describing statistical and boundary characteristics of the perfusion images. With use of a leave-one-out cross-validation technique, this method was used to predict the PA systolic pressure in cases on which the ANN had not been trained. A Pearson correlation coefficient was determined between the predicted and measured PA systolic pressures. ANN predictions correlated with measured pulmonary systolic pressures (r=0.846, P<.001). The accuracy of the predictions was not influenced by the presence of pulmonary embolism. None of the 51 patients with predicted PA pressures of less than 29 mm Hg had pulmonary hypertension at angiography. All 13 patients with predicted PA pressures greater than 48 mm Hg had pulmonary hypertension at angiography. Meaningful information regarding PA pressure can be derived from noninvasive radionuclide perfusion scanning. The use of image analysis in concert with artificial intelligence methods helps to reveal physiologic information not readily apparent at visual image inspection","tok_text":"pulmonari perfus pattern and pulmonari arteri pressur \n use artifici intellig method to determin whether quantit paramet describ the perfus imag can be synthes to make a reason estim of the pulmonari arteri ( pa ) pressur measur at angiographi . radionuclid perfus imag were obtain in 120 patient with normal chest radiograph who also underw angiograph pa pressur measur within 3 day of the radionuclid studi . an artifici neural network ( ann ) wa construct from sever imag paramet describ statist and boundari characterist of the perfus imag . with use of a leave-one-out cross-valid techniqu , thi method wa use to predict the pa systol pressur in case on which the ann had not been train . a pearson correl coeffici wa determin between the predict and measur pa systol pressur . ann predict correl with measur pulmonari systol pressur ( r=0.846 , p<.001 ) . the accuraci of the predict wa not influenc by the presenc of pulmonari embol . 
none of the 51 patient with predict pa pressur of less than 29 mm hg had pulmonari hypertens at angiographi . all 13 patient with predict pa pressur greater than 48 mm hg had pulmonari hypertens at angiographi . meaning inform regard pa pressur can be deriv from noninvas radionuclid perfus scan . the use of imag analysi in concert with artifici intellig method help to reveal physiolog inform not readili appar at visual imag inspect","ordered_present_kp":[0,866,924,1015,1205,1251,1320,1358,470,503,560,696,60,105,133,232,246,289,302],"keyphrases":["pulmonary perfusion patterns","artificial intelligence methods","quantitative parameters","perfusion image","angiography","radionuclide perfusion images","patients","normal chest radiographs","image parameters","boundary characteristics","leave-one-out cross-validation technique","Pearson correlation coefficient","accuracy","pulmonary embolism","pulmonary hypertension","noninvasive radionuclide perfusion scanning","image analysis","physiologic information","visual image inspection","angiographic pulmonary arterial pressure measurement","artificial neural network predictions","statistical characteristics","pulmonary arterial systolic pressure","29 Pa","48 Pa"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R","R","R","R"]} {"id":"1583","title":"Cutting through the confusion [workflow & content management]","abstract":"Information management vendors are rushing to re-position themselves and put a portal spin on their products, says ITNET's Graham Urquhart. The result is confusion, with a range of different definitions and claims clouding the true picture","tok_text":"cut through the confus [ workflow & content manag ] \n inform manag vendor are rush to re-posit themselv and put a portal spin on their product , say itnet 's graham urquhart . the result is confus , with a rang of differ definit and claim cloud the true pictur","ordered_present_kp":[149,114,25],"keyphrases":["workflow","portals","ITNET","collaboratively"],"prmu":["P","P","P","U"]} {"id":"1682","title":"Data mining efforts increase business productivity and efficiency","abstract":"The use and acquisition of information is a key part of the way any business makes money. Data mining technologies provide greater insight into how this information can be better used and more effectively acquired. Steven Kudyba, an expert in the field of data mining technologies, shares his expertise in an interview","tok_text":"data mine effort increas busi product and effici \n the use and acquisit of inform is a key part of the way ani busi make money . data mine technolog provid greater insight into how thi inform can be better use and more effect acquir . steven kudyba , an expert in the field of data mine technolog , share hi expertis in an interview","ordered_present_kp":[0,30,42],"keyphrases":["data mining","productivity","efficiency"],"prmu":["P","P","P"]} {"id":"1463","title":"Computational complexity of probabilistic disambiguation","abstract":"Recent models of natural language processing employ statistical reasoning for dealing with the ambiguity of formal grammars. In this approach, statistics, concerning the various linguistic phenomena of interest, are gathered from actual linguistic data and used to estimate the probabilities of the various entities that are generated by a given grammar, e.g., derivations, parse-trees and sentences. 
The extension of grammars with probabilities makes it possible to state ambiguity resolution as a constrained optimization formula, which aims at maximizing the probability of some entity that the grammar generates given the input (e.g., maximum probability parse-tree given some input sentence). The implementation of these optimization formulae in efficient algorithms, however, does not always proceed smoothly. In this paper, we address the computational complexity of ambiguity resolution under various kinds of probabilistic models. We provide proofs that some, frequently occurring problems of ambiguity resolution are NP-complete. These problems are encountered in various applications, e.g., language understanding for textand speech-based applications. Assuming the common model of computation, this result implies that, for many existing probabilistic models it is not possible to devise tractable algorithms for solving these optimization problems","tok_text":"comput complex of probabilist disambigu \n recent model of natur languag process employ statist reason for deal with the ambigu of formal grammar . in thi approach , statist , concern the variou linguist phenomena of interest , are gather from actual linguist data and use to estim the probabl of the variou entiti that are gener by a given grammar , e.g. , deriv , parse-tre and sentenc . the extens of grammar with probabl make it possibl to state ambigu resolut as a constrain optim formula , which aim at maxim the probabl of some entiti that the grammar gener given the input ( e.g. , maximum probabl parse-tre given some input sentenc ) . the implement of these optim formula in effici algorithm , howev , doe not alway proceed smoothli . in thi paper , we address the comput complex of ambigu resolut under variou kind of probabilist model . we provid proof that some , frequent occur problem of ambigu resolut are np-complet . these problem are encount in variou applic , e.g. , languag understand for textand speech-bas applic . assum the common model of comput , thi result impli that , for mani exist probabilist model it is not possibl to devis tractabl algorithm for solv these optim problem","ordered_present_kp":[58,87,130,87,0,18,443,469,828,986],"keyphrases":["computational complexity","probabilistic disambiguation","natural language processing","statistical reasoning","statistics","formal grammars","state ambiguity resolution","constrained optimization formula","probabilistic models","language understanding","NP-completeness results","parsing problems","speech processing"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","M","M"]} {"id":"1762","title":"Laguerre pseudospectral method for nonlinear partial differential equations","abstract":"The Laguerre Gauss-Radau interpolation is investigated. Some approximation results are obtained. As an example, the Laguerre pseudospectral scheme is constructed for the BBM equation. The stability and the convergence of proposed scheme are proved. The numerical results show the high accuracy of this approach","tok_text":"laguerr pseudospectr method for nonlinear partial differenti equat \n the laguerr gauss-radau interpol is investig . some approxim result are obtain . as an exampl , the laguerr pseudospectr scheme is construct for the bbm equat . the stabil and the converg of propos scheme are prove . 
the numer result show the high accuraci of thi approach","ordered_present_kp":[0,32,73,121,218,234,290],"keyphrases":["Laguerre pseudospectral method","nonlinear partial differential equations","Laguerre Gauss-Radau interpolation","approximation results","BBM equation","stability","numerical results","nonlinear differential equations"],"prmu":["P","P","P","P","P","P","P","R"]} {"id":"1727","title":"Linguistic knowledge and new technologies","abstract":"Modern language studies are characterized by a variety of forms, ways, and methods of their development. In this connection, it is necessary to specify the problem of the development of their internal differentiation and classification, which lead to the formation of specific areas knowledge. An example of such an area is speechology-a field of science belonging to fundamental, theoretical, and applied linguistics","tok_text":"linguist knowledg and new technolog \n modern languag studi are character by a varieti of form , way , and method of their develop . in thi connect , it is necessari to specifi the problem of the develop of their intern differenti and classif , which lead to the format of specif area knowledg . an exampl of such an area is speechology-a field of scienc belong to fundament , theoret , and appli linguist","ordered_present_kp":[38,212,390,0],"keyphrases":["linguistic knowledge","modern language studies","internal differentiation","applied linguistics","internal classification","speechology","theoretical linguistics","fundamental linguistics"],"prmu":["P","P","P","P","R","U","R","R"]} {"id":"1849","title":"An active functionality service for e-business applications","abstract":"Service based architectures are a powerful approach to meet the fast evolution of business rules and the corresponding software. An active functionality service that detects events and involves the appropriate business rules is a critical component of such a service-based middleware architecture. In this paper we present an active functionality service that is capable of detecting events in heterogeneous environments, it uses an integral ontology-based approach for the semantic interpretation of heterogeneous events and data, and provides notifications through a publish\/subscribe notification mechanism. The power of this approach is illustrated with the help of an auction application and through the personalization of car and driver portals in Internet-enabled vehicles","tok_text":"an activ function servic for e-busi applic \n servic base architectur are a power approach to meet the fast evolut of busi rule and the correspond softwar . an activ function servic that detect event and involv the appropri busi rule is a critic compon of such a service-bas middlewar architectur . in thi paper we present an activ function servic that is capabl of detect event in heterogen environ , it use an integr ontology-bas approach for the semant interpret of heterogen event and data , and provid notif through a publish \/ subscrib notif mechan . 
the power of thi approach is illustr with the help of an auction applic and through the person of car and driver portal in internet-en vehicl","ordered_present_kp":[3,29,117,146,262,381,448,522,613,679],"keyphrases":["active functionality service","e-business applications","business rules","software","service-based middleware architecture","heterogeneous environments","semantic interpretation","publish\/subscribe notification mechanism","auction application","Internet-enabled vehicles","event detection","ontology based approach","personalized car portals","personalized driver portals"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","M","R","R"]} {"id":"1831","title":"Fast broadcasting and gossiping in radio networks","abstract":"We establish an O(n log\/sup 2\/ n) upper bound on the time for deterministic distributed broadcasting in multi-hop radio networks with unknown topology. This nearly matches the known lower bound of Omega (n log n). The fastest previously known algorithm for this problem works in time O(n\/sup 3\/2\/). Using our broadcasting algorithm, we develop an O(n\/sup 3\/2\/ log\/sup 2\/ n) algorithm for gossiping in the same network model","tok_text":"fast broadcast and gossip in radio network \n we establish an o(n log \/ sup 2\/ n ) upper bound on the time for determinist distribut broadcast in multi-hop radio network with unknown topolog . thi nearli match the known lower bound of omega ( n log n ) . the fastest previous known algorithm for thi problem work in time o(n \/ sup 3\/2\/ ) . use our broadcast algorithm , we develop an o(n \/ sup 3\/2\/ log \/ sup 2\/ n ) algorithm for gossip in the same network model","ordered_present_kp":[0,82,110,19,29],"keyphrases":["fast broadcasting","gossiping","radio networks","upper bound","deterministic distributed broadcasting"],"prmu":["P","P","P","P","P"]} {"id":"1874","title":"E - a brainiac theorem prover","abstract":"We describe the superposition-based theorem prover E. E is a sound and complete prover for clausal first order logic with equality. Important properties of the prover include strong redundancy elimination criteria, the DISCOUNT loop proof procedure, a very flexible interface for specifying search control heuristics, and an efficient inference engine. We also discuss the strengths and weaknesses of the system","tok_text":"e - a brainiac theorem prover \n we describ the superposition-bas theorem prover e. e is a sound and complet prover for clausal first order logic with equal . import properti of the prover includ strong redund elimin criteria , the discount loop proof procedur , a veri flexibl interfac for specifi search control heurist , and an effici infer engin . 
we also discuss the strength and weak of the system","ordered_present_kp":[6,47,100,90,119,150,195,231,240,298,337],"keyphrases":["brainiac theorem prover","superposition-based theorem prover","soundness","completeness","clausal first order logic","equality","strong redundancy elimination criteria","DISCOUNT","loop proof procedure","search control heuristics","inference engine","CASC","E automatic theorem prover","rewriting","CADE ATP System Competitions"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","U","M","U","M"]} {"id":"1889","title":"Sliding mode dynamics in continuous feedback control for distributed discrete-event scheduling","abstract":"A continuous feedback control approach for real-time scheduling of discrete events is presented motivated by the need for control theoretic techniques to analyze and design such systems in distributed manufacturing applications. These continuous feedback control systems exhibit highly nonlinear and discontinuous dynamics. Specifically, when the production demand in the manufacturing system exceeds the available resource capacity then the control system \"chatters\" and exhibits sliding modes. This sliding mode behavior is advantageously used in the scheduling application by allowing the system to visit different schedules within an infinitesimal region near the sliding surface. In the paper, an analytical model is developed to characterize the sliding mode dynamics. This model is then used to design controllers in the sliding mode domain to improve the effectiveness of the control system to \"search\" for schedules with good performance. Computational results indicate that the continuous feedback control approach can provide near-optimal schedules and that it is computationally efficient compared to existing scheduling techniques","tok_text":"slide mode dynam in continu feedback control for distribut discrete-ev schedul \n a continu feedback control approach for real-tim schedul of discret event is present motiv by the need for control theoret techniqu to analyz and design such system in distribut manufactur applic . these continu feedback control system exhibit highli nonlinear and discontinu dynam . specif , when the product demand in the manufactur system exce the avail resourc capac then the control system \" chatter \" and exhibit slide mode . thi slide mode behavior is advantag use in the schedul applic by allow the system to visit differ schedul within an infinitesim region near the slide surfac . in the paper , an analyt model is develop to character the slide mode dynam . thi model is then use to design control in the slide mode domain to improv the effect of the control system to \" search \" for schedul with good perform . comput result indic that the continu feedback control approach can provid near-optim schedul and that it is comput effici compar to exist schedul techniqu","ordered_present_kp":[0,20,49,121,188,249,383,438],"keyphrases":["sliding mode dynamics","continuous feedback control","distributed discrete-event scheduling","real-time scheduling","control theoretic techniques","distributed manufacturing applications","production demand","resource capacity","highly nonlinear discontinuous dynamics"],"prmu":["P","P","P","P","P","P","P","P","R"]} {"id":"153","title":"On the relationship between omega -automata and temporal logic normal forms","abstract":"We consider the relationship between omega -automata and a specific logical formulation based on a normal form for temporal logic formulae. 
While this normal form was developed for use with execution and clausal resolution in temporal logics, we show how it can represent, syntactically, omega -automata in a high-level way. Technical proofs of the correctness of this representation are given","tok_text":"on the relationship between omega -automata and tempor logic normal form \n we consid the relationship between omega -automata and a specif logic formul base on a normal form for tempor logic formula . while thi normal form wa develop for use with execut and clausal resolut in tempor logic , we show how it can repres , syntact , omega -automata in a high-level way . technic proof of the correct of thi represent are given","ordered_present_kp":[28,48,139,258],"keyphrases":["omega -automata","temporal logic normal forms","logical formulation","clausal resolution","program correctness"],"prmu":["P","P","P","P","M"]} {"id":"1506","title":"Intelligent control of life support for space missions","abstract":"Future manned space operations will include a greater use of automation than we currently see. For example, semiautonomous robots and software agents will perform difficult tasks while operating unattended most of the time. As these automated agents become more prevalent, human contact with them will occur more often and become more routine, so designing these automated agents according to the principles of human-centered computing is important. We describe two cases of semiautonomous control software developed and fielded in test environments at the NASA Johnson Space Center. This software operated continuously at the JSC and interacted closely with humans for months at a time","tok_text":"intellig control of life support for space mission \n futur man space oper will includ a greater use of autom than we current see . for exampl , semiautonom robot and softwar agent will perform difficult task while oper unattend most of the time . as these autom agent becom more preval , human contact with them will occur more often and becom more routin , so design these autom agent accord to the principl of human-cent comput is import . we describ two case of semiautonom control softwar develop and field in test environ at the nasa johnson space center . thi softwar oper continu at the jsc and interact close with human for month at a time","ordered_present_kp":[20,166,144,37,0,59,103,256,465,534],"keyphrases":["intelligent control","life support","space missions","manned space operations","automation","semiautonomous robots","software agents","automated agents","semiautonomous control software","NASA Johnson Space Center","crew air regeneration","crew water recovery","human intervention"],"prmu":["P","P","P","P","P","P","P","P","P","P","U","U","M"]} {"id":"1543","title":"RISCy business. Part 1: RISC projects by Cornell students","abstract":"The author looks at several projects that Cornell University students entered in the Atmel Design 2001 contest. Those covered include a vertical plotter; BiLines, an electronic game; a wireless Internet pager; Cooking Coach; Barbie's zip drive; and a model train controller","tok_text":"risci busi . part 1 : risc project by cornel student \n the author look at sever project that cornel univers student enter in the atmel design 2001 contest . 
those cover includ a vertic plotter ; bilin , an electron game ; a wireless internet pager ; cook coach ; barbi 's zip drive ; and a model train control","ordered_present_kp":[22,38,178,195,206,224,250,263,290],"keyphrases":["RISC projects","Cornell students","vertical plotter","BiLines","electronic game","wireless Internet pager","Cooking Coach","Barbie's zip drive","model train controller","Atmel's Design Logic 2001 contest"],"prmu":["P","P","P","P","P","P","P","P","P","M"]} {"id":"1607","title":"A solvable queueing network model for railway networks and its validation and applications for the Netherlands","abstract":"The performance of new railway networks cannot be measured or simulated, as no detailed train schedules are available. Railway infrastructure and capacities are to be determined long before the actual traffic is known. This paper therefore proposes a solvable queueing network model to compute performance measures of interest without requiring train schedules (timetables). Closed form expressions for mean delays are obtained. New network designs, traffic scenarios, and capacity expansions can so be evaluated. A comparison with real delay data for the Netherlands supports the practical value of the model. A special Dutch cargo-line application is included","tok_text":"a solvabl queue network model for railway network and it valid and applic for the netherland \n the perform of new railway network can not be measur or simul , as no detail train schedul are avail . railway infrastructur and capac are to be determin long befor the actual traffic is known . thi paper therefor propos a solvabl queue network model to comput perform measur of interest without requir train schedul ( timet ) . close form express for mean delay are obtain . new network design , traffic scenario , and capac expans can so be evalu . a comparison with real delay data for the netherland support the practic valu of the model . a special dutch cargo-lin applic is includ","ordered_present_kp":[34,2,82,198,356,424,447,475,492,515,649],"keyphrases":["solvable queueing network model","railway networks","Netherlands","railway infrastructure","performance measures","closed form expressions","mean delays","network designs","traffic scenarios","capacity expansions","Dutch cargo-line application","railway capacities"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"1642","title":"Development and validation of user-adaptive navigation and information retrieval tools for an intranet portal organizational memory information system","abstract":"Based on previous research and properties of organizational memory, a conceptual model for navigation and retrieval functions in an intranet portal organizational memory information system was proposed, and two human-centred features (memory structure map and history-based tool) were developed to support user's navigation and retrieval in a well-known organizational memory. To test two hypotheses concerning the validity of the conceptual model and two human-centred features, an experiment was conducted with 30 subjects. Testing of the two hypotheses indicated the following: (1) the memory structure map's users showed 29% better performance in navigation, and (2) the history-based tool's users outperformed by 34% in identifying information. 
The results of the study suggest that a conceptual model and two human-centred features could be used in a user-adaptive interface design to improve user's performance in an intranet portal organizational memory information system","tok_text":"develop and valid of user-adapt navig and inform retriev tool for an intranet portal organiz memori inform system \n base on previou research and properti of organiz memori , a conceptu model for navig and retriev function in an intranet portal organiz memori inform system wa propos , and two human-centr featur ( memori structur map and history-bas tool ) were develop to support user 's navig and retriev in a well-known organiz memori . to test two hypothes concern the valid of the conceptu model and two human-centr featur , an experi wa conduct with 30 subject . test of the two hypothes indic the follow : ( 1 ) the memori structur map 's user show 29 % better perform in navig , and ( 2 ) the history-bas tool 's user outperform by 34 % in identifi inform . the result of the studi suggest that a conceptu model and two human-centr featur could be use in a user-adapt interfac design to improv user 's perform in an intranet portal organiz memori inform system","ordered_present_kp":[21,42,69,85,176,314,338,533,865],"keyphrases":["user-adaptive navigation","information retrieval tools","intranet portal","organizational memory information system","conceptual model","memory structure map","history-based tool","experiment","user-adaptive interface design","human factors","user performance"],"prmu":["P","P","P","P","P","P","P","P","P","U","R"]} {"id":"1766","title":"A note on vector cascade algorithm","abstract":"The focus of this paper is on the relationship between accuracy of multivariate refinable vector and vector cascade algorithm. We show that, if the vector cascade algorithm (1.5) with isotropic dilation converges to a vector-valued function with regularity, then the initial function must satisfy the Strang-Fix conditions","tok_text":"a note on vector cascad algorithm \n the focu of thi paper is on the relationship between accuraci of multivari refin vector and vector cascad algorithm . we show that , if the vector cascad algorithm ( 1.5 ) with isotrop dilat converg to a vector-valu function with regular , then the initi function must satisfi the strang-fix condit","ordered_present_kp":[10,101,213,240,317],"keyphrases":["vector cascade algorithm","multivariate refinable vector","isotropic dilation","vector-valued function","Strang-fix conditions","matrix algebra"],"prmu":["P","P","P","P","P","U"]} {"id":"1723","title":"Positive productivity, better billing [health care]","abstract":"Workflow software provides the right communication solution for hospital specialists, and delivers an unexpected financial boost too","tok_text":"posit product , better bill [ health care ] \n workflow softwar provid the right commun solut for hospit specialist , and deliv an unexpect financi boost too","ordered_present_kp":[30,46],"keyphrases":["health care","workflow software","San Francisco General Hospital","ProVation MD"],"prmu":["P","P","M","U"]} {"id":"1808","title":"Nonlinearities in NARX polynomial models: representation and estimation","abstract":"It is shown how nonlinearities are mapped in NARX polynomial models. General expressions are derived for the gain and eigenvalue functions in terms of the regressors and coefficients of NARX models. Such relationships are useful in grey-box identification problems. 
The results are illustrated using simulated and real data","tok_text":"nonlinear in narx polynomi model : represent and estim \n it is shown how nonlinear are map in narx polynomi model . gener express are deriv for the gain and eigenvalu function in term of the regressor and coeffici of narx model . such relationship are use in grey-box identif problem . the result are illustr use simul and real data","ordered_present_kp":[157,191,259],"keyphrases":["eigenvalue functions","regressors","grey-box identification problems","NARX polynomial model nonlinearities","nonlinearity representation","nonlinearity estimation","gain functions","nonlinear autoregressive exogenous-input polynomial model"],"prmu":["P","P","P","R","R","R","R","M"]} {"id":"1467","title":"Utilizing Web-based case studies for cutting-edge information services issues","abstract":"This article reports on a pilot study conducted by the Academic Libraries of the 21st Century project team to determine whether the benefits of the case study method as a training framework for change initiatives could successfully transfer from the traditional face-to-face format to a virtual format. Methods of developing the training framework, as well as the benefits, challenges, and recommendations for future strategies gained from participant feedback are outlined. The results of a survey administered to chat session registrants are presented in three sections: (1) evaluation of the training framework; (2) evaluation of participants' experiences in the virtual environment; and (3) a comparison of participants' preference of format. The overall participant feedback regarding the utilization of the case study method in a virtual environment for professional development and collaborative problem solving is very positive","tok_text":"util web-bas case studi for cutting-edg inform servic issu \n thi articl report on a pilot studi conduct by the academ librari of the 21st centuri project team to determin whether the benefit of the case studi method as a train framework for chang initi could success transfer from the tradit face-to-fac format to a virtual format . method of develop the train framework , as well as the benefit , challeng , and recommend for futur strategi gain from particip feedback are outlin . the result of a survey administ to chat session registr are present in three section : ( 1 ) evalu of the train framework ; ( 2 ) evalu of particip ' experi in the virtual environ ; and ( 3 ) a comparison of particip ' prefer of format . the overal particip feedback regard the util of the case studi method in a virtual environ for profession develop and collabor problem solv is veri posit","ordered_present_kp":[5,28,111,221,241,499,647,816,839],"keyphrases":["Web-based case studies","cutting-edge information services","academic libraries","training","change initiatives","survey","virtual environment","professional development","collaborative problem solving","Internet"],"prmu":["P","P","P","P","P","P","P","P","P","U"]} {"id":"1686","title":"Internet infrastructure and the emerging information society: an appraisal of the Internet backbone industry","abstract":"This paper examines the real constraints to the expansion of all encumbering and all pervasive information technology in our contemporary society. Perhaps the U.S. Internet infrastructure is the most appropriate to examine since it is U.S. technology that has led the world into the Internet age. In this context, this paper reviews the state of the U.S. 
Internet backbone that will lead us into information society of the future by facilitating massive data transmission","tok_text":"internet infrastructur and the emerg inform societi : an apprais of the internet backbon industri \n thi paper examin the real constraint to the expans of all encumb and all pervas inform technolog in our contemporari societi . perhap the u.s. internet infrastructur is the most appropri to examin sinc it is u.s. technolog that ha led the world into the internet age . in thi context , thi paper review the state of the u.s. internet backbon that will lead us into inform societi of the futur by facilit massiv data transmiss","ordered_present_kp":[0],"keyphrases":["Internet infrastructure","Internet service providers","users","backbone companies","local telephone companies"],"prmu":["P","M","U","M","U"]} {"id":"1915","title":"Multichannel scaler for general statistical analysis of dynamic light scattering","abstract":"A four channel scaler for counting applications has been designed and built using a standard high transfer rate parallel computer interface bus parallel data card. The counter section is based on standard complex programmable logic device integrated circuits. With a 200 MHz Pentium based host PC a sustained counting and data transfer with channel widths as short as 200 ns for a single channel is realized. The use of the multichannel scaler is demonstrated in dynamic light scattering experiments. The recorded traces are analyzed with wavelet and other statistical techniques to obtain transient changes in the properties of the scattered light","tok_text":"multichannel scaler for gener statist analysi of dynam light scatter \n a four channel scaler for count applic ha been design and built use a standard high transfer rate parallel comput interfac bu parallel data card . the counter section is base on standard complex programm logic devic integr circuit . with a 200 mhz pentium base host pc a sustain count and data transfer with channel width as short as 200 ns for a singl channel is realiz . the use of the multichannel scaler is demonstr in dynam light scatter experi . the record trace are analyz with wavelet and other statist techniqu to obtain transient chang in the properti of the scatter light","ordered_present_kp":[0,24,49,73,141,185,258,319,311,405],"keyphrases":["multichannel scaler","general statistical analysis","dynamic light scattering","four channel scaler","standard high transfer rate parallel computer interface","interface bus parallel data card","complex programmable logic device","200 MHz","Pentium based host PC","200 ns","correlation spectroscopy","optical spectroscopic techniques","photon signal statistical properties","standard CPLD ICs","windowed Fourier transform"],"prmu":["P","P","P","P","P","P","P","P","P","P","U","M","M","M","U"]} {"id":"1603","title":"Exploiting structure in adaptive dynamic programming algorithms for a stochastic batch service problem","abstract":"The purpose of this paper is to illustrate the importance of using structural results in dynamic programming algorithms. We consider the problem of approximating optimal strategies for the batch service of customers at a service station. Customers stochastically arrive at the station and wait to be served, incurring a waiting cost and a service cost. Service of customers is performed in groups of a fixed service capacity. We investigate the structure of cost functions and establish some theoretical results including monotonicity of the value functions. 
Then, we use our adaptive dynamic programming monotone algorithm that uses structure to preserve monotonicity of the estimates at each iterations to approximate the value functions. Since the problem with homogeneous customers can be solved optimally, we have a means of comparison to evaluate our heuristic. Finally, we compare our algorithm to classical forward dynamic programming methods","tok_text":"exploit structur in adapt dynam program algorithm for a stochast batch servic problem \n the purpos of thi paper is to illustr the import of use structur result in dynam program algorithm . we consid the problem of approxim optim strategi for the batch servic of custom at a servic station . custom stochast arriv at the station and wait to be serv , incur a wait cost and a servic cost . servic of custom is perform in group of a fix servic capac . we investig the structur of cost function and establish some theoret result includ monoton of the valu function . then , we use our adapt dynam program monoton algorithm that use structur to preserv monoton of the estim at each iter to approxim the valu function . sinc the problem with homogen custom can be solv optim , we have a mean of comparison to evalu our heurist . final , we compar our algorithm to classic forward dynam program method","ordered_present_kp":[56,20,144,274,358,374,430],"keyphrases":["adaptive dynamic programming algorithms","stochastic batch service problem","structural results","service station","waiting cost","service cost","fixed service capacity","optimal strategy approximation","cost function structure","value function monotonicity","inventory theory"],"prmu":["P","P","P","P","P","P","P","R","R","R","U"]} {"id":"1646","title":"The limits of shape constancy: point-to-point mapping of perspective projections of flat figures","abstract":"The present experiments investigate point-to-point mapping of perspective transformations of 2D outline figures under diverse viewing conditions: binocular free viewing, monocular perspective with 2D cues masked by an optic tunnel, and stereoptic viewing through an optic tunnel. The first experiment involved upright figures, and served to determine baseline point-to-point mapping accuracy, which was found to be very good. Three shapes were used: square, circle and irregularly round. The main experiment, with slanted figures, involved only two shapes-square and irregularly shaped-showed at several slant degrees. Despite the accumulated evidence for shape constancy when the outline of perspective projections is considered, metric perception of the inner structure of such projections was quite limited. Systematic distortions were found, especially with more extreme slants, and attributed to the joint effect of several factors: anchors, 3D information, and slant underestimation. Contradictory flatness cues did not detract from performance, while stereoptic information improved it","tok_text":"the limit of shape constanc : point-to-point map of perspect project of flat figur \n the present experi investig point-to-point map of perspect transform of 2d outlin figur under divers view condit : binocular free view , monocular perspect with 2d cue mask by an optic tunnel , and stereopt view through an optic tunnel . the first experi involv upright figur , and serv to determin baselin point-to-point map accuraci , which wa found to be veri good . three shape were use : squar , circl and irregularli round . 
the main experi , with slant figur , involv onli two shapes-squar and irregularli shaped-show at sever slant degre . despit the accumul evid for shape constanc when the outlin of perspect project is consid , metric percept of the inner structur of such project wa quit limit . systemat distort were found , especi with more extrem slant , and attribut to the joint effect of sever factor : anchor , 3d inform , and slant underestim . contradictori flat cue did not detract from perform , while stereopt inform improv it","ordered_present_kp":[13,30,97,157,179,200,222,246,264,283,906,915,931],"keyphrases":["shape constancy","point-to-point mapping","experiments","2D outline figures","diverse viewing conditions","binocular free viewing","monocular perspective","2D cues","optic tunnel","stereoptic viewing","anchors","3D information","slant underestimation","flat figure perspective projections","3D shape perception","human factors","3D information displays"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","M","M"]} {"id":"1928","title":"Solution of a Euclidean combinatorial optimization problem by the dynamic-programming method","abstract":"A class of Euclidean combinatorial optimization problems is selected that can be solved by the dynamic programming method. The problem of allocation of servicing enterprises is solved as an example","tok_text":"solut of a euclidean combinatori optim problem by the dynamic-program method \n a class of euclidean combinatori optim problem is select that can be solv by the dynam program method . the problem of alloc of servic enterpris is solv as an exampl","ordered_present_kp":[11,160],"keyphrases":["Euclidean combinatorial optimization problem","dynamic programming method"],"prmu":["P","P"]} {"id":"157","title":"Automatic extraction of eye and mouth fields from a face image using eigenfeatures and ensemble networks","abstract":"This paper presents a novel algorithm for the extraction of the eye and mouth (facial features) fields from 2D gray level images. Eigenfeatures are derived from the eigenvalues and eigenvectors of the binary edge data set constructed from eye and mouth fields. Such eigenfeatures are ideal features for finely locating fields efficiently. The eigenfeatures are extracted from a set of the positive and negative training samples for facial features and are used to train a multilayer perceptron (MLP) whose output indicates the degree to which a particular image window contains the eyes or the mouth within itself. An ensemble network consisting of a multitude of independent MLPs was used to enhance the generalization performance of a single MLP. It was experimentally verified that the proposed algorithm is robust against facial size and even slight variations of the pose","tok_text":"automat extract of eye and mouth field from a face imag use eigenfeatur and ensembl network \n thi paper present a novel algorithm for the extract of the eye and mouth ( facial featur ) field from 2d gray level imag . eigenfeatur are deriv from the eigenvalu and eigenvector of the binari edg data set construct from eye and mouth field . such eigenfeatur are ideal featur for fine locat field effici . the eigenfeatur are extract from a set of the posit and neg train sampl for facial featur and are use to train a multilay perceptron ( mlp ) whose output indic the degre to which a particular imag window contain the eye or the mouth within itself . 
an ensembl network consist of a multitud of independ mlp wa use to enhanc the gener perform of a singl mlp . it wa experiment verifi that the propos algorithm is robust against facial size and even slight variat of the pose","ordered_present_kp":[196,248,262,281,462,515,729,60],"keyphrases":["eigenfeatures","2D gray level images","eigenvalues","eigenvectors","binary edge data set","training samples","multilayer perceptron","generalization","eye field extraction","mouth field extraction","face feature extraction","experiment","ensemble neural networks"],"prmu":["P","P","P","P","P","P","P","P","R","R","R","U","M"]} {"id":"1502","title":"Mining open answers in questionnaire data","abstract":"Surveys are important tools for marketing and for managing customer relationships; the answers to open-ended questions, in particular, often contain valuable information and provide an important basis for business decisions. The summaries that human analysts make of these open answers, however, tend to rely too much on intuition and so aren't satisfactorily reliable. Moreover, because the Web makes it so easy to take surveys and solicit comments, companies are finding themselves inundated with data from questionnaires and other sources. Handling it all manually would be not only cumbersome but also costly. Thus, devising a computer system that can automatically mine useful information from open answers has become an important issue. We have developed a survey analysis system that works on these principles. The system mines open answers through two statistical learning techniques: rule learning (which we call rule analysis) and correspondence analysis","tok_text":"mine open answer in questionnair data \n survey are import tool for market and for manag custom relationship ; the answer to open-end question , in particular , often contain valuabl inform and provid an import basi for busi decis . the summari that human analyst make of these open answer , howev , tend to reli too much on intuit and so are n't satisfactorili reliabl . moreov , becaus the web make it so easi to take survey and solicit comment , compani are find themselv inund with data from questionnair and other sourc . handl it all manual would be not onli cumbersom but also costli . thu , devis a comput system that can automat mine use inform from open answer ha becom an import issu . we have develop a survey analysi system that work on these principl . the system mine open answer through two statist learn techniqu : rule learn ( which we call rule analysi ) and correspond analysi","ordered_present_kp":[714,20,806,858,877],"keyphrases":["questionnaire data","survey analysis","statistical learning techniques","rule analysis","correspondence analysis","natural language response analysis","text mining system","open answer mining"],"prmu":["P","P","P","P","P","M","M","R"]} {"id":"1547","title":"New projection-type methods for monotone LCP with finite termination","abstract":"In this paper we establish two new projection-type methods for the solution of the monotone linear complementarity problem (LCP). The methods are a combination of the extragradient method and the Newton method, in which the active set strategy is used and only one linear system of equations with lower dimension is solved at each iteration. It is shown that under the assumption of monotonicity, these two methods are globally and linearly convergent. Furthermore, under a nondegeneracy condition they have a finite termination property. 
Finally, the methods are extended to solving the monotone affine variational inequality problem","tok_text":"new projection-typ method for monoton lcp with finit termin \n in thi paper we establish two new projection-typ method for the solut of the monoton linear complementar problem ( lcp ) . the method are a combin of the extragradi method and the newton method , in which the activ set strategi is use and onli one linear system of equat with lower dimens is solv at each iter . it is shown that under the assumpt of monoton , these two method are global and linearli converg . furthermor , under a nondegeneraci condit they have a finit termin properti . final , the method are extend to solv the monoton affin variat inequ problem","ordered_present_kp":[4,30,47,139,216,242,271,310,367,30,463,494,593],"keyphrases":["projection-type methods","monotone LCP","monotonicity","finite termination","monotone linear complementarity problem","extragradient method","Newton method","active set strategy","linear system of equations","iteration","convergence","nondegeneracy condition","monotone affine variational inequality problem","matrix","vector"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","U","U"]} {"id":"1835","title":"Establishing an urban digital cadastre: analytical reconstruction of parcel boundaries","abstract":"A new method for generating a spatially accurate, legally supportive and operationally efficient cadastral database of the urban cadastral reality is described. The definition and compilation of an accurate cadastral database (achieving a standard deviation smaller than 0.1 m) is based on an analytical reconstruction of cadastral boundaries rather than on the conventional field reconstruction process. The new method is based on GPS control points and traverse networks for providing the framework; the old field books for defining the links between the various original ground features; and a geometrical and cadastral adjustment process as the conceptual basis. A pilot project that was carried out in order to examine and evaluate the new method is described","tok_text":"establish an urban digit cadastr : analyt reconstruct of parcel boundari \n a new method for gener a spatial accur , legal support and oper effici cadastr databas of the urban cadastr realiti is describ . the definit and compil of an accur cadastr databas ( achiev a standard deviat smaller than 0.1 m ) is base on an analyt reconstruct of cadastr boundari rather than on the convent field reconstruct process . the new method is base on gp control point and travers network for provid the framework ; the old field book for defin the link between the variou origin ground featur ; and a geometr and cadastr adjust process as the conceptu basi . 
a pilot project that wa carri out in order to examin and evalu the new method is describ","ordered_present_kp":[13,35,57,169,266,383,437,458,505,565,599],"keyphrases":["urban digital cadastre","analytical reconstruction","parcel boundaries","urban cadastral reality","standard deviation","field reconstruction process","GPS control points","traverse networks","old field books","ground features","cadastral adjustment process","spatially accurate cadastral database","land information systems","LIS","geographic information systems"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R","U","U","U"]} {"id":"1870","title":"Robust control of nonlinear systems with parametric uncertainty","abstract":"Probabilistic robustness analysis and synthesis for nonlinear systems with uncertain parameters are presented. Monte Carlo simulation is used to estimate the likelihood of system instability and violation of performance requirements subject to variations of the probabilistic system parameters. Stochastic robust control synthesis searches the controller design parameter space to minimize a cost that is a function of the probabilities that design criteria will not be satisfied. The robust control design approach is illustrated by a simple nonlinear example. A modified feedback linearization control is chosen as controller structure, and the design parameters are searched by a genetic algorithm to achieve the tradeoff between stability and performance robustness","tok_text":"robust control of nonlinear system with parametr uncertainti \n probabilist robust analysi and synthesi for nonlinear system with uncertain paramet are present . mont carlo simul is use to estim the likelihood of system instabl and violat of perform requir subject to variat of the probabilist system paramet . stochast robust control synthesi search the control design paramet space to minim a cost that is a function of the probabl that design criteria will not be satisfi . the robust control design approach is illustr by a simpl nonlinear exampl . a modifi feedback linear control is chosen as control structur , and the design paramet are search by a genet algorithm to achiev the tradeoff between stabil and perform robust","ordered_present_kp":[0,18,40,63,129,161,212,554,656],"keyphrases":["robust control","nonlinear systems","parametric uncertainty","probabilistic robustness analysis","uncertain parameters","Monte Carlo simulation","system instability","modified feedback linearization control","genetic algorithm","probabilistic robustness synthesis","performance requirements violation","stochastic control synthesis","input-to-state stability"],"prmu":["P","P","P","P","P","P","P","P","P","R","R","R","M"]} {"id":"173","title":"Stock market trading rule discovery using technical charting heuristics","abstract":"In this case study in knowledge engineering and data mining, we implement a recognizer for two variations of the 'bull flag' technical charting heuristic and use this recognizer to discover trading rules on the NYSE Composite Index. Out-of-sample results indicate that these rules are effective","tok_text":"stock market trade rule discoveri use technic chart heurist \n in thi case studi in knowledg engin and data mine , we implement a recogn for two variat of the ' bull flag ' technic chart heurist and use thi recogn to discov trade rule on the nyse composit index . 
out-of-sampl result indic that these rule are effect","ordered_present_kp":[0,19,38,69,83,102,241,263],"keyphrases":["stock market trading","rule discovery","technical charting heuristics","case study","knowledge engineering","data mining","NYSE Composite Index","out-of-sample results","financial expert system"],"prmu":["P","P","P","P","P","P","P","P","U"]} {"id":"1526","title":"GK-DEVS: Geometric and kinematic DEVS formalism for simulation modeling of 3-dimensional multi-component systems","abstract":"A combined discrete\/continuous simulation methodology based on the DEVS (discrete event system specification) formalism is presented in this paper that satisfies the simulation requirements of 3-dimensional and dynamic systems with multi-components. We propose a geometric and kinematic DEVS (GK-DEVS) formalism that is able to describe the geometric and kinematic structure of a system and its continuous state dynamics as well as the interaction among the multi-components. To establish one model having dynamic behavior and a particular hierarchical structure, the atomic and the coupled model of the conventional DEVS are merged into one model in the proposed formalism. For simulation of the continuous motion of 3-D components, the sequential state set is partitioned into the discrete and the continuous state set and the rate of change function over the continuous state set is employed. Although modified from the conventional DEVS formalism, the GK-DEVS formalism preserves a hierarchical, modular modeling fashion and a coupling scheme. Furthermore, for the GK-DEVS model simulation, we propose an abstract simulation algorithm, called a GK-Simulator, in which data and control are separated and events are scheduled not globally but hierarchically so that an object-oriented principle is satisfied. The proposed GK-DEVS formalism and the GK-Simulator algorithm have been applied to the simulation of a flexible manufacturing system consisting of a 2-axis lathe, a 3-axis milling machine, and a vehicle-mounted robot","tok_text":"gk-dev : geometr and kinemat dev formal for simul model of 3-dimension multi-compon system \n a combin discret \/ continu simul methodolog base on the dev ( discret event system specif ) formal is present in thi paper that satisfi the simul requir of 3-dimension and dynam system with multi-compon . we propos a geometr and kinemat dev ( gk-dev ) formal that is abl to describ the geometr and kinemat structur of a system and it continu state dynam as well as the interact among the multi-compon . to establish one model have dynam behavior and a particular hierarch structur , the atom and the coupl model of the convent dev are merg into one model in the propos formal . for simul of the continu motion of 3-d compon , the sequenti state set is partit into the discret and the continu state set and the rate of chang function over the continu state set is employ . although modifi from the convent dev formal , the gk-dev formal preserv a hierarch , modular model fashion and a coupl scheme . furthermor , for the gk-dev model simul , we propos an abstract simul algorithm , call a gk-simul , in which data and control are separ and event are schedul not global but hierarch so that an object-ori principl is satisfi . 
the propos gk-dev formal and the gk-simul algorithm have been appli to the simul of a flexibl manufactur system consist of a 2-axi lath , a 3-axi mill machin , and a vehicle-mount robot","ordered_present_kp":[0,21,44,95,233,427,524,688,723,1048,1082,1186,1305,1344,1359,1385],"keyphrases":["GK-DEVS","kinematic DEVS","simulation modeling","combined discrete\/continuous simulation methodology","simulation requirements","continuous state dynamics","dynamic behavior","continuous motion","sequential state set","abstract simulation algorithm","GK-Simulator","object-oriented principle","flexible manufacturing system","2-axis lathe","3-axis milling machine","vehicle-mounted robot","geometric DEVS","3 dimensional multi-component systems"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","M"]} {"id":"1563","title":"A distance between elliptical distributions based in an embedding into the Siegel group","abstract":"This paper describes two different embeddings of the manifolds corresponding to many elliptical probability distributions with the informative geometry into the manifold of positive-definite matrices with the Siegel metric, generalizing a result published previously elsewhere. These new general embeddings are applicable to a wide class of elliptical probability distributions, in which the normal, t-Student and Cauchy are specific examples. A lower bound for the Rao distance is obtained, which is itself a distance, and, through these embeddings, a number of statistical tests of hypothesis are derived","tok_text":"a distanc between ellipt distribut base in an embed into the siegel group \n thi paper describ two differ embed of the manifold correspond to mani ellipt probabl distribut with the inform geometri into the manifold of positive-definit matric with the siegel metric , gener a result publish previous elsewher . these new gener embed are applic to a wide class of ellipt probabl distribut , in which the normal , t-student and cauchi are specif exampl . a lower bound for the rao distanc is obtain , which is itself a distanc , and , through these embed , a number of statist test of hypothesi are deriv","ordered_present_kp":[18,61,180,217,146,453],"keyphrases":["elliptical distributions","Siegel group","elliptical probability distributions","informative geometry","positive-definite matrices","lower bound","manifolds embeddings"],"prmu":["P","P","P","P","P","P","R"]} {"id":"1627","title":"Blind identification of non-stationary MA systems","abstract":"A new adaptive algorithm for blind identification of time-varying MA channels is derived. This algorithm proposes the use of a novel system of equations derived by combining the third- and fourth-order statistics of the output signals of MA models. This overdetermined system of equations has the important property that it can be solved adaptively because of their symmetries via an overdetermined recursive instrumental variable-type algorithm. This algorithm shows good behaviour in arbitrary noisy environments and good performance in tracking time-varying systems","tok_text":"blind identif of non-stationari ma system \n a new adapt algorithm for blind identif of time-vari ma channel is deriv . thi algorithm propos the use of a novel system of equat deriv by combin the third- and fourth-ord statist of the output signal of ma model . thi overdetermin system of equat ha the import properti that it can be solv adapt becaus of their symmetri via an overdetermin recurs instrument variable-typ algorithm . 
thi algorithm show good behaviour in arbitrari noisi environ and good perform in track time-vari system","ordered_present_kp":[0,50,206,249,467,511],"keyphrases":["blind identification","adaptive algorithm","fourth-order statistics","MA models","arbitrary noisy environments","tracking","time-varying channels","nonstationary systems","third-order statistics","overdetermined recursive algorithm","recursive instrumental variable algorithm","iterative algorithms","additive Gaussian noise","higher-order statistics"],"prmu":["P","P","P","P","P","P","R","M","M","R","M","M","U","M"]} {"id":"1811","title":"Adaptive tracking controller design for robotic systems using Gaussian wavelet networks","abstract":"An adaptive tracking control design for robotic systems using Gaussian wavelet networks is proposed. A Gaussian wavelet network with accurate approximation capability is employed to approximate the unknown dynamics of robotic systems by using an adaptive learning algorithm that can learn the parameters of the dilation and translation of Gaussian wavelet functions. Depending on the finite number of wavelet basis functions which result in inevitable approximation errors, a robust control law is provided to guarantee the stability of the closed-loop robotic system that can be proved by Lyapunov theory. Finally, the effectiveness of the Gaussian wavelet network-based control approach is illustrated through comparative simulations on a six-link robot manipulator","tok_text":"adapt track control design for robot system use gaussian wavelet network \n an adapt track control design for robot system use gaussian wavelet network is propos . a gaussian wavelet network with accur approxim capabl is employ to approxim the unknown dynam of robot system by use an adapt learn algorithm that can learn the paramet of the dilat and translat of gaussian wavelet function . depend on the finit number of wavelet basi function which result in inevit approxim error , a robust control law is provid to guarante the stabil of the closed-loop robot system that can be prove by lyapunov theori . final , the effect of the gaussian wavelet network-bas control approach is illustr through compar simul on a six-link robot manipul","ordered_present_kp":[0,31,48,195,243,283,464,483,588,715],"keyphrases":["adaptive tracking controller design","robotic systems","Gaussian wavelet networks","accurate approximation capability","unknown dynamics","adaptive learning algorithm","approximation errors","robust control law","Lyapunov theory","six-link robot manipulator","closed-loop system"],"prmu":["P","P","P","P","P","P","P","P","P","P","R"]} {"id":"1483","title":"Hypothesis-based concept assignment in software maintenance","abstract":"Software maintenance accounts for a significant proportion of the lifetime cost of a software system. Software comprehension is required in many parts of the maintenance process and is one of the most expensive activities. Many tools have been developed to help the maintainer reduce the time and cost of this task, but of the numerous tools and methods available one group has received relatively little attention: those using plausible reasoning to address the concept assignment problem. We present a concept assignment method for COBOL II: hypothesis-based concept assignment (HB-CA). An implementation of a prototype tool is described, and results from a comprehensive evaluation using commercial COBOL II sources are summarised. 
In particular, we identify areas of a standard maintenance process where such methods would be appropriate, and discuss the potential cost savings that may result","tok_text":"hypothesis-bas concept assign in softwar mainten \n softwar mainten account for a signific proport of the lifetim cost of a softwar system . softwar comprehens is requir in mani part of the mainten process and is one of the most expens activ . mani tool have been develop to help the maintain reduc the time and cost of thi task , but of the numer tool and method avail one group ha receiv rel littl attent : those use plausibl reason to address the concept assign problem . we present a concept assign method for cobol ii : hypothesis-bas concept assign ( hb-ca ) . an implement of a prototyp tool is describ , and result from a comprehens evalu use commerci cobol ii sourc are summaris . in particular , we identifi area of a standard mainten process where such method would be appropri , and discuss the potenti cost save that may result","ordered_present_kp":[0,33,105,513],"keyphrases":["hypothesis-based concept assignment","software maintenance","lifetime cost","COBOL II","scalability"],"prmu":["P","P","P","P","U"]} {"id":"1854","title":"Software Technology: looking for quality accountants","abstract":"Software Technology wants to turn 23 years of reselling experience in the legal business into an asset in the accounting market","tok_text":"softwar technolog : look for qualiti account \n softwar technolog want to turn 23 year of resel experi in the legal busi into an asset in the account market","ordered_present_kp":[0,89,141],"keyphrases":["Software Technology","reselling","accounting market"],"prmu":["P","P","P"]} {"id":"1782","title":"Exploring the sabbatical or other leave as a means of energizing a career","abstract":"This article challenges librarians to create leaves that will not only inspire professional growth but also renewal. It presents a framework for developing a successful leave, incorporating useful advice from librarians at Concordia University (Montreal). As food for thought, the article offers examples of specific options meant to encourage professionals to explore their own creative ideas. Finally, a central theme of this article is that a midlife leave provides one with the perfect opportunity to take stock of oneself in order to define future career directions. Midlife is a time when rebel forces, feisty protestors from within, often insist on being heard. It is a time, in other words, when professionals often long to break loose from the stress \"to do far more, in less time\" (Barner, 1994). Escaping from current job constraints into a world of creative endeavor, when well-executed, is a superb means of invigorating a career stuck in gear and discovering a fresh perspective from which to view one's profession. To ignite renewal, midcareer is the perfect time to grant one's imagination free reign","tok_text":"explor the sabbat or other leav as a mean of energ a career \n thi articl challeng librarian to creat leav that will not onli inspir profession growth but also renew . it present a framework for develop a success leav , incorpor use advic from librarian at concordia univers ( montreal ) . as food for thought , the articl offer exampl of specif option meant to encourag profession to explor their own creativ idea . final , a central theme of thi articl is that a midlif leav provid one with the perfect opportun to take stock of oneself in order to defin futur career direct . 
midlif is a time when rebel forc , feisti protestor from within , often insist on be heard . it is a time , in other word , when profession often long to break loos from the stress \" to do far more , in less time \" ( barner , 1994 ) . escap from current job constraint into a world of creativ endeavor , when well-execut , is a superb mean of invigor a career stuck in gear and discov a fresh perspect from which to view one 's profess . to ignit renew , midcar is the perfect time to grant one 's imagin free reign","ordered_present_kp":[53,82,132,464],"keyphrases":["career","librarians","professional growth","midlife leave","sabbatical leave","library staff"],"prmu":["P","P","P","P","R","U"]} {"id":"1894","title":"Switching controller design via convex polyhedral Lyapunov functions","abstract":"We propose a systematic switching control design method for a class of nonlinear discrete time hybrid systems. The novelty of the adopted approach is in the fact that unlike conventional control the control burden is shifted to a logical level thus creating the need for the development of new analysis\/design methods","tok_text":"switch control design via convex polyhedr lyapunov function \n we propos a systemat switch control design method for a class of nonlinear discret time hybrid system . the novelti of the adopt approach is in the fact that unlik convent control the control burden is shift to a logic level thu creat the need for the develop of new analysi \/ design method","ordered_present_kp":[0,26,127],"keyphrases":["switching controller design","convex polyhedral Lyapunov functions","nonlinear discrete time hybrid systems","systematic design method"],"prmu":["P","P","P","R"]} {"id":"1869","title":"Stability and L\/sub 2\/ gain properties of LPV systems","abstract":"Stability and L\/sub 2\/ gain properties of linear parameter-varying systems are obtained under assumed bounds on either the maximum or average value of the parameter rate","tok_text":"stabil and l \/ sub 2\/ gain properti of lpv system \n stabil and l \/ sub 2\/ gain properti of linear parameter-vari system are obtain under assum bound on either the maximum or averag valu of the paramet rate","ordered_present_kp":[0,11,91,193],"keyphrases":["stability","L\/sub 2\/ gain properties","linear parameter-varying systems","parameter rate","Gromwall-Bellman inequality","gain scheduled control"],"prmu":["P","P","P","P","U","M"]} {"id":"1742","title":"A sufficient condition for optimality in nondifferentiable invex programming","abstract":"A sufficient optimality condition is established for a nonlinear programming problem without differentiability assumption on the data wherein Clarke's (1975) generalized gradient is used to define invexity","tok_text":"a suffici condit for optim in nondifferenti invex program \n a suffici optim condit is establish for a nonlinear program problem without differenti assumpt on the data wherein clark 's ( 1975 ) gener gradient is use to defin invex","ordered_present_kp":[30,62,102,193,44],"keyphrases":["nondifferentiable invex programming","invexity","sufficient optimality condition","nonlinear programming problem","generalized gradient","locally Lipschitz function","semiconvex function"],"prmu":["P","P","P","P","P","U","U"]} {"id":"1707","title":"Tactical airborne reconnaissance goes dual-band and beyond","abstract":"Multispectral imaging technologies are satisfying the need for a \"persistent\" look at the battlefield. 
We highlight the need to persistently monitor a battlefield to determine exactly who and what is there. For example, infrared imaging can be used to expose the fuel status of an aircraft on the runway. A daytime, visible-spectrum image of the same aircraft would offer information about external details, such as the plane's markings and paint scheme. A dual-band camera enables precision image registration by fusion and frequently yields more information than is possible by evaluating the images separately","tok_text":"tactic airborn reconnaiss goe dual-band and beyond \n multispectr imag technolog are satisfi the need for a \" persist \" look at the battlefield . we highlight the need to persist monitor a battlefield to determin exactli who and what is there . for exampl , infrar imag can be use to expos the fuel statu of an aircraft on the runway . a daytim , visible-spectrum imag of the same aircraft would offer inform about extern detail , such as the plane 's mark and paint scheme . a dual-band camera enabl precis imag registr by fusion and frequent yield more inform than is possibl by evalu the imag separ","ordered_present_kp":[0,53,131,257,293,310,477,500],"keyphrases":["tactical airborne reconnaissance","multispectral imaging technologies","battlefield","infrared imaging","fuel status","aircraft","dual-band camera","precision image registration","daytime visible-spectrum image","sensor fusion"],"prmu":["P","P","P","P","P","P","P","P","R","M"]} {"id":"1740","title":"Verification of ideological classifications-a statistical approach","abstract":"The paper presents a statistical method of verifying ideological classifications of votes. Parliamentary votes, preclassified by an expert (on a chosen subset), are verified at an assumed significance level by seeking the most likely match with the actual vote results. Classifications that do not meet the requirements defined are rejected. The results obtained can be applied in the ideological dimensioning algorithms, enabling ideological identification of dimensions obtained","tok_text":"verif of ideolog classifications-a statist approach \n the paper present a statist method of verifi ideolog classif of vote . parliamentari vote , preclassifi by an expert ( on a chosen subset ) , are verifi at an assum signific level by seek the most like match with the actual vote result . classif that do not meet the requir defin are reject . the result obtain can be appli in the ideolog dimens algorithm , enabl ideolog identif of dimens obtain","ordered_present_kp":[9,35,125,219,385],"keyphrases":["ideological classifications","statistical approach","parliamentary votes","significance level","ideological dimensioning algorithms","ideological space","bootstrap"],"prmu":["P","P","P","P","P","M","U"]} {"id":"1705","title":"The use of visual search for knowledge gathering in image decision support","abstract":"This paper presents a new method of knowledge gathering for decision support in image understanding based on information extracted from the dynamics of saccadic eye movements. The framework involves the construction of a generic image feature extraction library, from which the feature extractors that are most relevant to the visual assessment by domain experts are determined automatically through factor analysis. The dynamics of the visual search are analyzed by using the Markov model for providing training information to novices on how and where to look for image features. 
The validity of the framework has been evaluated in a clinical scenario whereby the pulmonary vascular distribution on Computed Tomography images was assessed by experienced radiologists as a potential indicator of heart failure. The performance of the system has been demonstrated by training four novices to follow the visual assessment behavior of two experienced observers. In all cases, the accuracy of the students improved from near random decision making (33%) to accuracies ranging from 50% to 68%","tok_text":"the use of visual search for knowledg gather in imag decis support \n thi paper present a new method of knowledg gather for decis support in imag understand base on inform extract from the dynam of saccad eye movement . the framework involv the construct of a gener imag featur extract librari , from which the featur extractor that are most relev to the visual assess by domain expert are determin automat through factor analysi . the dynam of the visual search are analyz by use the markov model for provid train inform to novic on how and where to look for imag featur . the valid of the framework ha been evalu in a clinic scenario wherebi the pulmonari vascular distribut on comput tomographi imag wa assess by experienc radiologist as a potenti indic of heart failur . the perform of the system ha been demonstr by train four novic to follow the visual assess behavior of two experienc observ . in all case , the accuraci of the student improv from near random decis make ( 33 % ) to accuraci rang from 50 % to 68 %","ordered_present_kp":[647,715,851,881,484,508,265,371,954],"keyphrases":["image features","domain experts","Markov model","training information","pulmonary vascular distribution","experienced radiologists","visual assessment behavior","experienced observers","near random decision making","heart failure indicator","student accuracy","saccadic eye movements dynamics","medical diagnostic imaging"],"prmu":["P","P","P","P","P","P","P","P","P","R","R","R","M"]} {"id":"1896","title":"The dynamics of a railway freight wagon wheelset with dry friction damping","abstract":"We investigate the dynamics of a simple model of a wheelset that supports one end of a railway freight wagon by springs with linear characteristics and dry friction dampers. The wagon runs on an ideal, straight and level track with constant speed. The lateral dynamics in dependence on the speed is examined. We have included stick-slip and hysteresis in our model of the dry friction and assume that Coulomb's law holds during the slip phase. It is found that the action of dry friction completely changes the bifurcation diagram, and that the longitudinal component of the dry friction damping forces destabilizes the wagon","tok_text":"the dynam of a railway freight wagon wheelset with dri friction damp \n we investig the dynam of a simpl model of a wheelset that support one end of a railway freight wagon by spring with linear characterist and dri friction damper . the wagon run on an ideal , straight and level track with constant speed . the later dynam in depend on the speed is examin . we have includ stick-slip and hysteresi in our model of the dri friction and assum that coulomb 's law hold dure the slip phase . 
it is found that the action of dri friction complet chang the bifurc diagram , and that the longitudin compon of the dri friction damp forc destabil the wagon","ordered_present_kp":[4,15,51,187,312,374,389,551,581],"keyphrases":["dynamics","railway freight wagon wheelset","dry friction damping","linear characteristics","lateral dynamics","stick-slip","hysteresis","bifurcation diagram","longitudinal component","Coulomb law"],"prmu":["P","P","P","P","P","P","P","P","P","R"]} {"id":"1519","title":"Structural invariance of spatial Pythagorean hodographs","abstract":"The structural invariance of the four-polynomial characterization for three-dimensional Pythagorean hodographs introduced by Dietz et al. (1993), under arbitrary spatial rotations, is demonstrated. The proof relies on a factored-quaternion representation for Pythagorean hodographs in three-dimensional Euclidean space-a particular instance of the \"PH representation map\" proposed by Choi et al. (2002)-and the unit quaternion description of spatial rotations. This approach furnishes a remarkably simple derivation for the polynomials u(t), upsilon (t), p(t), q(t) that specify the canonical form of a rotated Pythagorean hodograph, in terms of the original polynomials u(t), upsilon (t), p(t), q(t) and the angle theta and axis n of the spatial rotation. The preservation of the canonical form of PH space curves under arbitrary spatial rotations is essential to their incorporation into computer-aided design and manufacturing applications, such as the contour machining of free-form surfaces using a ball-end mill and realtime PH curve CNC interpolators","tok_text":"structur invari of spatial pythagorean hodograph \n the structur invari of the four-polynomi character for three-dimension pythagorean hodograph introduc by dietz et al . ( 1993 ) , under arbitrari spatial rotat , is demonstr . the proof reli on a factored-quaternion represent for pythagorean hodograph in three-dimension euclidean space-a particular instanc of the \" ph represent map \" propos by choi et al . ( 2002)-and the unit quaternion descript of spatial rotat . thi approach furnish a remark simpl deriv for the polynomi u(t ) , upsilon ( t ) , p(t ) , q(t ) that specifi the canon form of a rotat pythagorean hodograph , in term of the origin polynomi u(t ) , upsilon ( t ) , p(t ) , q(t ) and the angl theta and axi n of the spatial rotat . 
the preserv of the canon form of ph space curv under arbitrari spatial rotat is essenti to their incorpor into computer-aid design and manufactur applic , such as the contour machin of free-form surfac use a ball-end mill and realtim ph curv cnc interpol","ordered_present_kp":[0,78,19,187,368,426,197,918,936,959],"keyphrases":["structural invariance","spatial Pythagorean hodographs","four-polynomial characterization","arbitrary spatial rotations","spatial rotations","PH representation map","unit quaternion description","contour machining","free-form surfaces","ball-end mill","3D Pythagorean hodographs","factored quaternion representation","3D Euclidean space","CAD\/CAM","real-time PH curve CNC interpolators"],"prmu":["P","P","P","P","P","P","P","P","P","P","M","M","M","U","M"]} {"id":"1618","title":"Optimal learning for patterns classification in RBF networks","abstract":"The proposed modifying of the structure of the radial basis function (RBF) network by introducing the weight matrix to the input layer (in contrast to the direct connection of the input to the hidden layer of a conventional RBF) so that the training space in the RBF network is adaptively separated by the resultant decision boundaries and class regions is reported. The training of this weight matrix is carried out as for a single-layer perceptron together with the clustering process. In this way the network is capable of dealing with complicated problems, which have a high degree of interference in the training data, and achieves a higher classification rate over the current classifiers using RBF","tok_text":"optim learn for pattern classif in rbf network \n the propos modifi of the structur of the radial basi function ( rbf ) network by introduc the weight matrix to the input layer ( in contrast to the direct connect of the input to the hidden layer of a convent rbf ) so that the train space in the rbf network is adapt separ by the result decis boundari and class region is report . the train of thi weight matrix is carri out as for a single-lay perceptron togeth with the cluster process . in thi way the network is capabl of deal with complic problem , which have a high degre of interfer in the train data , and achiev a higher classif rate over the current classifi use rbf","ordered_present_kp":[16,0,35,164,276,336,355,433,471],"keyphrases":["optimal learning","pattern classification","RBF networks","input layer","training space","decision boundaries","class regions","single-layer perceptron","clustering process","radial basis function network","weight matrix training","classification rate improvement"],"prmu":["P","P","P","P","P","P","P","P","P","R","R","M"]} {"id":"1625","title":"Use of fuzzy weighted autocorrelation function for pitch extraction from noisy speech","abstract":"An investigation is presented into the feasibility of incorporating a fuzzy weighting scheme into the calculation of an autocorrelation function for pitch extraction. Simulation results reveal that the proposed method provides better robustness against background noise than the conventional approaches for extracting pitch period in a noisy environment","tok_text":"use of fuzzi weight autocorrel function for pitch extract from noisi speech \n an investig is present into the feasibl of incorpor a fuzzi weight scheme into the calcul of an autocorrel function for pitch extract . 
simul result reveal that the propos method provid better robust against background nois than the convent approach for extract pitch period in a noisi environ","ordered_present_kp":[44,63,132,20,214,286],"keyphrases":["autocorrelation function","pitch extraction","noisy speech","fuzzy weighting scheme","simulation results","background noise","speech analysis-synthesis system","average magnitude difference function","cepstrum method"],"prmu":["P","P","P","P","P","P","M","M","M"]} {"id":"1660","title":"A regularized conjugate gradient method for symmetric positive definite system of linear equations","abstract":"A class of regularized conjugate gradient methods is presented for solving the large sparse system of linear equations of which the coefficient matrix is an ill-conditioned symmetric positive definite matrix. The convergence properties of these methods are discussed in depth, and the best possible choices of the parameters involved in the new methods are investigated in detail. Numerical computations show that the new methods are more efficient and robust than both classical relaxation methods and classical conjugate direction methods","tok_text":"a regular conjug gradient method for symmetr posit definit system of linear equat \n a class of regular conjug gradient method is present for solv the larg spars system of linear equat of which the coeffici matrix is an ill-condit symmetr posit definit matrix . the converg properti of these method are discuss in depth , and the best possibl choic of the paramet involv in the new method are investig in detail . numer comput show that the new method are more effici and robust than both classic relax method and classic conjug direct method","ordered_present_kp":[2,37,69,150,197,265,488,513],"keyphrases":["regularized conjugate gradient method","symmetric positive definite system","linear equations","large sparse system","coefficient matrix","convergence properties","classical relaxation methods","classical conjugate direction methods","ill-conditioned linear system"],"prmu":["P","P","P","P","P","P","P","P","R"]} {"id":"171","title":"Education, training and development policies and practices in medium-sized companies in the UK: do they really influence firm performance?","abstract":"This paper sets out to examine the relationship between training and firm performance in middle-sized UK companies. It recognises that there is evidence that \"high performance work practices\" appear to be associated with better performance in large US companies, but argues that this relationship is less likely to be present in middle-sized companies. The paper's key contribution is to justify the wider concept of education, training and development (ETD) as applicable to such companies. It then finds that clusters of some ETD variables do appear to be associated with better middle-sized company performance","tok_text":"educ , train and develop polici and practic in medium-s compani in the uk : do they realli influenc firm perform ? \n thi paper set out to examin the relationship between train and firm perform in middle-s uk compani . it recognis that there is evid that \" high perform work practic \" appear to be associ with better perform in larg us compani , but argu that thi relationship is less like to be present in middle-s compani . the paper 's key contribut is to justifi the wider concept of educ , train and develop ( etd ) as applic to such compani . 
it then find that cluster of some etd variabl do appear to be associ with better middle-s compani perform","ordered_present_kp":[7,100,0,17,256],"keyphrases":["education","training","development policies","firm performance","high performance work practices","medium-sized UK companies","ETD variable clusters","human resources"],"prmu":["P","P","P","P","P","R","R","U"]} {"id":"1524","title":"Organizational design, information transfer, and the acquisition of rent-producing resources","abstract":"Within the resource-based view of the firm, a dynamic story has emerged in which the knowledge accumulated over the history of a firm and embedded in organizational routines and structures influences the firm's ability to recognize the value of new resources and capabilities. This paper explores the possibility of firms to select organizational designs that increase the likelihood that they will recognize and value rent-producing resources and capabilities. A computational model is developed to study the tension between an organization's desire to explore its environment for new capabilities and the organization's need to exploit existing capabilities. Support is provided for the proposition that integration, both externally and internally, is an important source of dynamic capability. The model provides greater insight into the tradeoffs between these two forms of integration and suggests when one form may be preferred over another. In particular, evidence is provided that in uncertain environments, the ability to explore possible alternatives is critical while in more certain environments, the ability to transfer information internally is paramount","tok_text":"organiz design , inform transfer , and the acquisit of rent-produc resourc \n within the resource-bas view of the firm , a dynam stori ha emerg in which the knowledg accumul over the histori of a firm and embed in organiz routin and structur influenc the firm 's abil to recogn the valu of new resourc and capabl . thi paper explor the possibl of firm to select organiz design that increas the likelihood that they will recogn and valu rent-produc resourc and capabl . a comput model is develop to studi the tension between an organ 's desir to explor it environ for new capabl and the organ 's need to exploit exist capabl . support is provid for the proposit that integr , both extern and intern , is an import sourc of dynam capabl . the model provid greater insight into the tradeoff between these two form of integr and suggest when one form may be prefer over anoth . in particular , evid is provid that in uncertain environ , the abil to explor possibl altern is critic while in more certain environ , the abil to transfer inform intern is paramount","ordered_present_kp":[0,17,55,470,912,914],"keyphrases":["organizational design","information transfer","rent-producing resources","computational model","uncertain environments","certain environments","probability","social networks","business strategy","investments"],"prmu":["P","P","P","P","P","P","U","U","U","U"]} {"id":"1561","title":"Self-validating integration and approximation of piecewise analytic functions","abstract":"Let an analytic or a piecewise analytic function on a compact interval be given. We present algorithms that produce enclosures for the integral or the function itself. Under certain conditions on the representation of the function, this is done with the minimal order of numbers of operations. 
The integration algorithm is implemented and numerical comparisons to non-validating integration software are presented","tok_text":"self-valid integr and approxim of piecewis analyt function \n let an analyt or a piecewis analyt function on a compact interv be given . we present algorithm that produc enclosur for the integr or the function itself . under certain condit on the represent of the function , thi is done with the minim order of number of oper . the integr algorithm is implement and numer comparison to non-valid integr softwar are present","ordered_present_kp":[0,110,169,295,331,34],"keyphrases":["self-validating integration","piecewise analytic functions","compact interval","enclosures","minimal order","integration algorithm","self-validating approximation","complex interval arithmetic"],"prmu":["P","P","P","P","P","P","R","M"]} {"id":"1780","title":"Migrating to public librarianship: depart on time to ensure a smooth flight","abstract":"Career change can be a difficult, time-consuming, and anxiety-laden process for anyone contemplating this important decision. The challenges faced by librarians considering the move from academic to public librarianship can be equally and significantly demanding. To most outsiders, at least on the surface, it may appear to be a quick and easy transition to make, but some professional librarians recognize the distinct differences between these areas of librarianship. Although the ubiquitous nature of technology has brought the various work responsibilities of academic and public librarians closer together during the last decade, there remain key differences in job-related duties and the work environments. These dissimilarities pose meaningful hurdles to leap for academic librarians wishing to migrate to the public sector. The paper considers the variations between academic and public librarianship","tok_text":"migrat to public librarianship : depart on time to ensur a smooth flight \n career chang can be a difficult , time-consum , and anxiety-laden process for anyon contempl thi import decis . the challeng face by librarian consid the move from academ to public librarianship can be equal and significantli demand . to most outsid , at least on the surfac , it may appear to be a quick and easi transit to make , but some profession librarian recogn the distinct differ between these area of librarianship . although the ubiquit natur of technolog ha brought the variou work respons of academ and public librarian closer togeth dure the last decad , there remain key differ in job-rel duti and the work environ . these dissimilar pose meaning hurdl to leap for academ librarian wish to migrat to the public sector . the paper consid the variat between academ and public librarianship","ordered_present_kp":[10,75,416,564,671,692],"keyphrases":["public librarianship","career change","professional librarians","work responsibilities","job-related duties","work environments","academic library","public library","library technology"],"prmu":["P","P","P","P","P","P","M","M","M"]} {"id":"1738","title":"Nurture the geek in you [accounting on the Internet]","abstract":"When chartered accountants focus on IT, it's not simply because we think technology is neat. We keep on top of tech trends and issues because it helps us do our jobs well. 
We need to know how to best manage and implement the wealth of technology systems within out client base or employer, as well as to determine on an ongoing basis how evolving technologies might affect business strategies, threats and opportunities. One way to stay current with technology is by monitoring the online drumbeat. Imagine the Internet as an endless conversation of millions of chattering voices, each focusing on a multitude of topics and issues. It's not surprising that a great deal of the information relates to technology itself, and if you learn how to tune in to the drumbeat, you can keep yourself informed","tok_text":"nurtur the geek in you [ account on the internet ] \n when charter account focu on it , it 's not simpli becaus we think technolog is neat . we keep on top of tech trend and issu becaus it help us do our job well . we need to know how to best manag and implement the wealth of technolog system within out client base or employ , as well as to determin on an ongo basi how evolv technolog might affect busi strategi , threat and opportun . one way to stay current with technolog is by monitor the onlin drumbeat . imagin the internet as an endless convers of million of chatter voic , each focus on a multitud of topic and issu . it 's not surpris that a great deal of the inform relat to technolog itself , and if you learn how to tune in to the drumbeat , you can keep yourself inform","ordered_present_kp":[58,40],"keyphrases":["Internet","chartered accountants","information technology","Slashdot","Techdirt","The Register","Dan Gillmor's Wournal","Daypop Top 40","RISKS","SecurityFocus","TechWeb"],"prmu":["P","P","R","U","U","M","M","M","U","U","U"]} {"id":"1813","title":"LMI approach to digital redesign of linear time-invariant systems","abstract":"A simple design methodology for the digital redesign of static state feedback controllers by using linear matrix inequalities is presented. The proposed method provides close matching of the states between the original continuous-time system and those of the digitally redesigned system with a guaranteed stability. Specifically, the digital redesign problem is reformulated as linear matrix inequalities (LMIs) and solved by a numerical optimisation technique. The main feature of the proposed method is that the closed-loop stability of the digitally redesigned system is explicitly guaranteed within the design procedure using the LMI-based approach. A numerical example of the position control of a simple crane system is presented","tok_text":"lmi approach to digit redesign of linear time-invari system \n a simpl design methodolog for the digit redesign of static state feedback control by use linear matrix inequ is present . the propos method provid close match of the state between the origin continuous-tim system and those of the digit redesign system with a guarante stabil . specif , the digit redesign problem is reformul as linear matrix inequ ( lmi ) and solv by a numer optimis techniqu . the main featur of the propos method is that the closed-loop stabil of the digit redesign system is explicitli guarante within the design procedur use the lmi-bas approach . 
a numer exampl of the posit control of a simpl crane system is present","ordered_present_kp":[0,16,34,70,151,253,321,432,506,653,678],"keyphrases":["LMI approach","digital redesign","linear time-invariant systems","design methodology","linear matrix inequalities","continuous-time system","guaranteed stability","numerical optimisation technique","closed-loop stability","position control","crane system"],"prmu":["P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1481","title":"Impact of aviation highway-in-the-sky displays on pilot situation awareness","abstract":"Thirty-six pilots (31 men, 5 women) were tested in a flight simulator on their ability to intercept a pathway depicted on a highway-in-the-sky (HITS) display. While intercepting and flying the pathway, pilots were required to watch for traffic outside the cockpit. Additionally, pilots were tested on their awareness of speed, altitude, and heading during the flight. Results indicated that the presence of a flight guidance cue significantly improved flight path awareness while intercepting the pathway, but significant practice effects suggest that a guidance cue might be unnecessary if pilots are given proper training. The amount of time spent looking outside the cockpit while using the HITS display was significantly less than when using conventional aircraft instruments. Additionally, awareness of flight information present on the HITS display was poor. Actual or potential applications of this research include guidance for the development of perspective flight display standards and as a basis for flight training requirements","tok_text":"impact of aviat highway-in-the-ski display on pilot situat awar \n thirty-six pilot ( 31 men , 5 women ) were test in a flight simul on their abil to intercept a pathway depict on a highway-in-the-ski ( hit ) display . while intercept and fli the pathway , pilot were requir to watch for traffic outsid the cockpit . addit , pilot were test on their awar of speed , altitud , and head dure the flight . result indic that the presenc of a flight guidanc cue significantli improv flight path awar while intercept the pathway , but signific practic effect suggest that a guidanc cue might be unnecessari if pilot are given proper train . the amount of time spent look outsid the cockpit while use the hit display wa significantli less than when use convent aircraft instrument . addit , awar of flight inform present on the hit display wa poor . actual or potenti applic of thi research includ guidanc for the develop of perspect flight display standard and as a basi for flight train requir","ordered_present_kp":[119,46,16,306,437,52,477],"keyphrases":["highway-in-the-sky display","pilots","situation awareness","flight simulator","cockpit","flight guidance","flight path awareness","human factors","aircraft display"],"prmu":["P","P","P","P","P","P","P","U","R"]} {"id":"1856","title":"Tax forms: CD or not CD?","abstract":"The move from CD to the Web looks unstoppable. Besides counting how many thousands of electronic tax forms they offer, vendors are rapidly moving those documents to the Web","tok_text":"tax form : cd or not cd ? \n the move from cd to the web look unstopp . 
besid count how mani thousand of electron tax form they offer , vendor are rapidli move those document to the web","ordered_present_kp":[104,52],"keyphrases":["Web","electronic tax forms","ATX Forms Zillion Forms","CCH Perform Plus H","Kleinrock Forms Library Plus","Nelco LaserLibrarian II","RIA eForm","STF Services Superform","Universal Tax Systems Forms Complete"],"prmu":["P","P","M","U","M","U","U","U","M"]} {"id":"155","title":"Fuzzy non-homogeneous Markov systems","abstract":"In this paper the theory of fuzzy logic and fuzzy reasoning is combined with the theory of Markov systems and the concept of a fuzzy non-homogeneous Markov system is introduced for the first time. This is an effort to deal with the uncertainty introduced in the estimation of the transition probabilities and the input probabilities in Markov systems. The asymptotic behaviour of the fuzzy Markov system and its asymptotic variability is considered and given in closed analytic form. Moreover, the asymptotically attainable structures of the system are estimated also in a closed analytic form under some realistic assumptions. The importance of this result lies in the fact that in most cases the traditional methods for estimating the probabilities can not be used due to lack of data and measurement errors. The introduction of fuzzy logic into Markov systems represents a powerful tool for taking advantage of the symbolic knowledge that the experts of the systems possess","tok_text":"fuzzi non-homogen markov system \n in thi paper the theori of fuzzi logic and fuzzi reason is combin with the theori of markov system and the concept of a fuzzi non-homogen markov system is introduc for the first time . thi is an effort to deal with the uncertainti introduc in the estim of the transit probabl and the input probabl in markov system . the asymptot behaviour of the fuzzi markov system and it asymptot variabl is consid and given in close analyt form . moreov , the asymptot attain structur of the system are estim also in a close analyt form under some realist assumpt . the import of thi result lie in the fact that in most case the tradit method for estim the probabl can not be use due to lack of data and measur error . the introduct of fuzzi logic into markov system repres a power tool for take advantag of the symbol knowledg that the expert of the system possess","ordered_present_kp":[61,77,253,294,318,408,725,833],"keyphrases":["fuzzy logic","fuzzy reasoning","uncertainty","transition probabilities","input probabilities","asymptotic variability","measurement errors","symbolic knowledge","fuzzy nonhomogeneous Markov systems","probability theory"],"prmu":["P","P","P","P","P","P","P","P","M","R"]} {"id":"1500","title":"DAML+OIL: an ontology language for the Semantic Web","abstract":"By all measures, the Web is enormous and growing at a staggering rate, which has made it increasingly difficult-and important-for both people and programs to have quick and accurate access to Web information and services. The Semantic Web offers a solution, capturing and exploiting the meaning of terms to transform the Web from a platform that focuses on presenting information, to a platform that focuses on understanding and reasoning with information. To support Semantic Web development, the US Defense Advanced Research Projects Agency launched the DARPA Agent Markup Language (DAML) initiative to fund research in languages, tools, infrastructure, and applications that make Web content more accessible and understandable. 
Although the US government funds DAML, several organizations-including US and European businesses and universities, and international consortia such as the World Wide Web Consortium-have contributed to work on issues related to DAML's development and deployment. We focus on DAML's current markup language, DAML+OIL, which is a proposed starting point for the W3C's Semantic Web Activity's Ontology Web Language (OWL). We introduce DAML+OIL syntax and usage through a set of examples, drawn from a wine knowledge base used to teach novices how to build ontologies","tok_text":"daml+oil : an ontolog languag for the semant web \n by all measur , the web is enorm and grow at a stagger rate , which ha made it increasingli difficult-and important-for both peopl and program to have quick and accur access to web inform and servic . the semant web offer a solut , captur and exploit the mean of term to transform the web from a platform that focus on present inform , to a platform that focus on understand and reason with inform . to support semant web develop , the us defens advanc research project agenc launch the darpa agent markup languag ( daml ) initi to fund research in languag , tool , infrastructur , and applic that make web content more access and understand . although the us govern fund daml , sever organizations-includ us and european busi and univers , and intern consortia such as the world wide web consortium-hav contribut to work on issu relat to daml 's develop and deploy . we focu on daml 's current markup languag , daml+oil , which is a propos start point for the w3c 's semant web activ 's ontolog web languag ( owl ) . we introduc daml+oil syntax and usag through a set of exampl , drawn from a wine knowledg base use to teach novic how to build ontolog","ordered_present_kp":[38,538,0,1039,1090,1145],"keyphrases":["DAML+OIL","Semantic Web","DARPA Agent Markup Language","Ontology Web Language","syntax","wine knowledge base"],"prmu":["P","P","P","P","P","P"]} {"id":"1545","title":"Pontryagin maximum principle of optimal control governed by fluid dynamic systems with two point boundary state constraint","abstract":"We study the optimal control problem subject to the semilinear equation with a state constraint. We prove certain theorems and give examples of state constraints so that the maximum principle holds. The main difficulty of the problem is to make the sensitivity analysis of the state with respect to the control caused by the unboundedness and nonlinearity of an operator","tok_text":"pontryagin maximum principl of optim control govern by fluid dynam system with two point boundari state constraint \n we studi the optim control problem subject to the semilinear equat with a state constraint . we prove certain theorem and give exampl of state constraint so that the maximum principl hold . the main difficulti of the problem is to make the sensit analysi of the state with respect to the control caus by the unbounded and nonlinear of an oper","ordered_present_kp":[0,31,55,167,98],"keyphrases":["Pontryagin maximum principle","optimal control","fluid dynamics","state constraints","semilinear equation"],"prmu":["P","P","P","P","P"]} {"id":"1601","title":"Solving the multiple competitive facilities location problem","abstract":"In this paper we propose five heuristic procedures for the solution of the multiple competitive facilities location problem. A franchise of several facilities is to be located in a trade area where competing facilities already exist. 
The objective is to maximize the market share captured by the franchise as a whole. We perform extensive computational tests and conclude that a two-step heuristic procedure combining simulated annealing and an ascent algorithm provides the best solutions","tok_text":"solv the multipl competit facil locat problem \n in thi paper we propos five heurist procedur for the solut of the multipl competit facil locat problem . a franchis of sever facil is to be locat in a trade area where compet facil alreadi exist . the object is to maxim the market share captur by the franchis as a whole . we perform extens comput test and conclud that a two-step heurist procedur combin simul anneal and an ascent algorithm provid the best solut","ordered_present_kp":[9,76,339,370,403,423],"keyphrases":["multiple competitive facilities location problem","heuristic procedures","computational tests","two-step heuristic procedure","simulated annealing","ascent algorithm","facilities franchise","market share maximization"],"prmu":["P","P","P","P","P","P","R","R"]} {"id":"1644","title":"An experimental evaluation of comprehensibility aspects of knowledge structures derived through induction techniques: a case study of industrial fault diagnosis","abstract":"Machine induction has been extensively used in order to develop knowledge bases for decision support systems and predictive systems. The extent to which developers and domain experts can comprehend these knowledge structures and gain useful insights into the basis of decision making has become a challenging research issue. This article examines the knowledge structures generated by the C4.5 induction technique in a fault diagnostic task and proposes to use a model of human learning in order to guide the process of making comprehensive the results of machine induction. The model of learning is used to generate hierarchical representations of diagnostic knowledge by adjusting the level of abstraction and varying the goal structures between 'shallow' and 'deep' ones. Comprehensibility is assessed in a global way in an experimental comparison where subjects are required to acquire the knowledge structures and transfer to new tasks. This method of addressing the issue of comprehensibility appears promising especially for machine induction techniques that are rather inflexible with regard to the number and sorts of interventions allowed to system developers","tok_text":"an experiment evalu of comprehens aspect of knowledg structur deriv through induct techniqu : a case studi of industri fault diagnosi \n machin induct ha been extens use in order to develop knowledg base for decis support system and predict system . the extent to which develop and domain expert can comprehend these knowledg structur and gain use insight into the basi of decis make ha becom a challeng research issu . thi articl examin the knowledg structur gener by the c4.5 induct techniqu in a fault diagnost task and propos to use a model of human learn in order to guid the process of make comprehens the result of machin induct . the model of learn is use to gener hierarch represent of diagnost knowledg by adjust the level of abstract and vari the goal structur between ' shallow ' and ' deep ' one . comprehens is assess in a global way in an experiment comparison where subject are requir to acquir the knowledg structur and transfer to new task . 
thi method of address the issu of comprehens appear promis especi for machin induct techniqu that are rather inflex with regard to the number and sort of intervent allow to system develop","ordered_present_kp":[3,76,96,110,189,207,232,472],"keyphrases":["experimental evaluation","induction techniques","case study","industrial fault diagnosis","knowledge bases","decision support systems","predictive systems","C4.5 induction technique","knowledge structure comprehensibility aspects","industrial plants","human learning model","diagnostic knowledge representations"],"prmu":["P","P","P","P","P","P","P","P","R","M","R","R"]} {"id":"1837","title":"A review of methodologies used in research on cadastral development","abstract":"World-wide, much attention has been given to cadastral development. As a consequence of experiences made during recent decades, several authors have stated the need for research in the domain of cadastre and proposed methodologies to be used. The paper contributes to the acceptance of research methodologies needed for cadastral development, and thereby enhances theory in the cadastral domain. The paper reviews nine publications on cadastre and identifies the methodologies used. The review focuses on the institutional, social, political and economic aspects of cadastral development, rather than on the technical aspects. The main conclusion is that the methodologies used are largely those of the social sciences. That agrees with the notion that cadastre relates as much to people and institutions, as it relates to land, and that cadastral systems are shaped by social, political and economic conditions, as well as technology. Since the geodetic survey profession has been the keeper of the cadastre, geodetic surveyors will have to deal ever more with social science matters, a fact that universities will have to consider","tok_text":"a review of methodolog use in research on cadastr develop \n world-wid , much attent ha been given to cadastr develop . as a consequ of experi made dure recent decad , sever author have state the need for research in the domain of cadastr and propos methodolog to be use . the paper contribut to the accept of research methodolog need for cadastr develop , and therebi enhanc theori in the cadastr domain . the paper review nine public on cadastr and identifi the methodolog use . the review focus on the institut , social , polit and econom aspect of cadastr develop , rather than on the technic aspect . the main conclus is that the methodolog use are larg those of the social scienc . that agre with the notion that cadastr relat as much to peopl and institut , as it relat to land , and that cadastr system are shape by social , polit and econom condit , as well as technolog . sinc the geodet survey profess ha been the keeper of the cadastr , geodet surveyor will have to deal ever more with social scienc matter , a fact that univers will have to consid","ordered_present_kp":[42,309,534,671,842,890,948],"keyphrases":["cadastre","research methodologies","economic aspects","social sciences","economic conditions","geodetic survey profession","geodetic surveyors","cadastral development methodologies","political aspects","land registration","case study"],"prmu":["P","P","P","P","P","P","P","R","R","M","U"]} {"id":"1872","title":"TPTP, CASC and the development of a semantically guided theorem prover","abstract":"The first-order theorem prover SCOTT has been through a series of versions over some ten years. 
The successive provers, while retaining the same underlying technology, have used radically different algorithms and shown wide differences of behaviour. The development process has depended heavily on experiments with problems from the TPTP library and has been sharpened by participation in CASC each year since 1997. We outline some of the difficulties inherent in designing and refining a theorem prover as complex as SCOTT, and explain our experimental methodology. While SCOTT is not one of the systems which have been highly optimised for CASC, it does help to illustrate the influence of both CASC and the TPTP library on contemporary theorem proving research","tok_text":"tptp , casc and the develop of a semant guid theorem prover \n the first-ord theorem prover scott ha been through a seri of version over some ten year . the success prover , while retain the same underli technolog , have use radic differ algorithm and shown wide differ of behaviour . the develop process ha depend heavili on experi with problem from the tptp librari and ha been sharpen by particip in casc each year sinc 1997 . we outlin some of the difficulti inher in design and refin a theorem prover as complex as scott , and explain our experiment methodolog . while scott is not one of the system which have been highli optimis for casc , it doe help to illustr the influenc of both casc and the tptp librari on contemporari theorem prove research","ordered_present_kp":[354,7,33,66,91,543],"keyphrases":["CASC","semantically guided theorem prover","first-order theorem prover","SCOTT","TPTP library","experimental methodology","Semantically Constrained Otter","proof searches"],"prmu":["P","P","P","P","P","P","M","U"]} {"id":"1759","title":"On the p-adic Birch, Swinnerton-Dyer Conjecture for non-semistable reduction","abstract":"In this paper, we examine the Iwasawa theory of elliptic curves E with additive reduction at an odd prime p. By extending Perrin-Riou's theory to certain nonsemistable representations, we are able to convert Kato's zeta-elements into p-adic L-functions. This allows us to deduce the cotorsion of the Selmer group over the cyclotomic Z\/sub p\/-extension of Q, and thus prove an inequality in the p-adic Birch and Swinnerton-Dyer conjecture at primes p whose square divides the conductor of E","tok_text":"on the p-adic birch , swinnerton-dy conjectur for non-semist reduct \n in thi paper , we examin the iwasawa theori of ellipt curv e with addit reduct at an odd prime p. by extend perrin-ri 's theori to certain nonsemist represent , we are abl to convert kato 's zeta-el into p-adic l-function . 
thi allow us to deduc the cotors of the selmer group over the cyclotom z \/ sub p\/-extens of q , and thu prove an inequ in the p-adic birch and swinnerton-dy conjectur at prime p whose squar divid the conductor of e","ordered_present_kp":[7,22,117,136,178,274,320,334,356],"keyphrases":["p-adic Birch","Swinnerton-Dyer conjecture","elliptic curves","additive reduction","Perrin-Riou's theory","p-adic L-functions","cotorsion","Selmer group","cyclotomic Z\/sub p\/-extension","nonsemistable reduction","lwasawa theory"],"prmu":["P","P","P","P","P","P","P","P","P","R","M"]} {"id":"1799","title":"Steady-state mean-square error analysis of the cross-correlation and constant modulus algorithm in a MIMO convolutive system","abstract":"The cross-correlation and constant modulus algorithm (CC-CMA) has been proven to be an effective approach in the problem of joint blind equalisation and source separation in a multi-input and multi-output system. In the paper, the steady-state mean-square error performance of CC-CMA in a noise-free environment is studied, and a new expression is derived based on the energy preservation approach of Mai and Sayed (2000). Simulation studies are undertaken to support the analysis","tok_text":"steady-st mean-squar error analysi of the cross-correl and constant modulu algorithm in a mimo convolut system \n the cross-correl and constant modulu algorithm ( cc-cma ) ha been proven to be an effect approach in the problem of joint blind equalis and sourc separ in a multi-input and multi-output system . in the paper , the steady-st mean-squar error perform of cc-cma in a noise-fre environ is studi , and a new express is deriv base on the energi preserv approach of mai and say ( 2000 ) . simul studi are undertaken to support the analysi","ordered_present_kp":[90,0,42,59,229,253,377,445,162],"keyphrases":["Steady-state mean-square error analysis","cross-correlation","constant modulus algorithm","MIMO convolutive system","CC-CMA","joint blind equalisation","source separation","noise-free environment","energy preservation approach","multi-input multi-output system"],"prmu":["P","P","P","P","P","P","P","P","P","R"]} {"id":"1465","title":"P systems with symport\/antiport rules: the traces of objects","abstract":"We continue the study of those P systems where the computation is performed by the communication of objects, that is, systems with symport and antiport rules. Instead of the (number of) objects collected in a specified membrane, as the result of a computation we consider the itineraries of a certain object through membranes, during a halting computation, written as a coding of the string of labels of the visited membranes. The family of languages generated in this way is investigated with respect to its place in the Chomsky hierarchy. When the (symport and antiport) rules are applied in a conditional manner, promoted or inhibited by certain objects which should be present in the membrane where a rule is applied, then a characterization of recursively enumerable languages is obtained; the power of systems with the rules applied freely is only partially described","tok_text":"p system with symport \/ antiport rule : the trace of object \n we continu the studi of those p system where the comput is perform by the commun of object , that is , system with symport and antiport rule . 
instead of the ( number of ) object collect in a specifi membran , as the result of a comput we consid the itinerari of a certain object through membran , dure a halt comput , written as a code of the string of label of the visit membran . the famili of languag gener in thi way is investig with respect to it place in the chomski hierarchi . when the ( symport and antiport ) rule are appli in a condit manner , promot or inhibit by certain object which should be present in the membran where a rule is appli , then a character of recurs enumer languag is obtain ; the power of system with the rule appli freeli is onli partial describ","ordered_present_kp":[0,24,312,367,459,528,737],"keyphrases":["P systems","antiport rules","itineraries","halting computation","languages","Chomsky hierarchy","recursively enumerable languages","object communication","object traces","symport rules","label string coding"],"prmu":["P","P","P","P","P","P","P","R","R","R","R"]} {"id":"1498","title":"John McCarthy: father of AI","abstract":"If John McCarthy, the father of AI, were to coin a new phrase for \"artificial intelligence\" today, he would probably use \"computational intelligence.\" McCarthy is not just the father of AI, he is also the inventor of the Lisp (list processing) language. The author considers McCarthy's conception of Lisp and discusses McCarthy's recent research that involves elaboration tolerance, creativity by machines, free will of machines, and some improved ways of doing situation calculus","tok_text":"john mccarthi : father of ai \n if john mccarthi , the father of ai , were to coin a new phrase for \" artifici intellig \" today , he would probabl use \" comput intellig . \" mccarthi is not just the father of ai , he is also the inventor of the lisp ( list process ) languag . the author consid mccarthi 's concept of lisp and discuss mccarthi 's recent research that involv elabor toler , creativ by machin , free will of machin , and some improv way of do situat calculu","ordered_present_kp":[0,16,101,152,243,250,373,388,408,456],"keyphrases":["John McCarthy","father of AI","artificial intelligence","computational intelligence","Lisp","list processing","elaboration tolerance","creativity","free will","situation calculus"],"prmu":["P","P","P","P","P","P","P","P","P","P"]} {"id":"1764","title":"Two-scale curved element method for elliptic problems with small periodic coefficients","abstract":"This paper is concerned with the second order elliptic problems with small periodic coefficients on a bounded domain with a curved boundary. A two-scale curved element method which couples linear elements and isoparametric elements is proposed. The error estimate is obtained over the given smooth domain. Furthermore an additive Schwarz method is provided for the isoparametric element method","tok_text":"two-scal curv element method for ellipt problem with small period coeffici \n thi paper is concern with the second order ellipt problem with small period coeffici on a bound domain with a curv boundari . a two-scal curv element method which coupl linear element and isoparametr element is propos . the error estim is obtain over the given smooth domain . 
furthermor an addit schwarz method is provid for the isoparametr element method","ordered_present_kp":[0,33,53,107,167,187,246,265,301,368,407],"keyphrases":["two-scale curved element method","elliptic problems","small periodic coefficients","second order elliptic problems","bounded domain","curved boundary","linear elements","isoparametric elements","error estimate","additive Schwarz method","isoparametric element method"],"prmu":["P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1721","title":"Dueling platforms [healthcare network servers]","abstract":"Many large hospitals and healthcare systems have grown accustomed to the reliability of mainframe architecture, although tighter operating budgets, coupled with advances in client\/server technology, have led to more office and clinical applications being moved off mainframes. But Evanston Northwestern Healthcare wasn't ready to get rid of its IBM OS 390 mainframe just yet. While a number of new clinical applications are being installed on two brand new IBM servers, Evanston Northwestern Healthcare will retain its favored hospital billing system and let it reside on the organization's mainframe, as it has since 1982","tok_text":"duel platform [ healthcar network server ] \n mani larg hospit and healthcar system have grown accustom to the reliabl of mainfram architectur , although tighter oper budget , coupl with advanc in client \/ server technolog , have led to more offic and clinic applic be move off mainfram . but evanston northwestern healthcar wa n't readi to get rid of it ibm os 390 mainfram just yet . while a number of new clinic applic are be instal on two brand new ibm server , evanston northwestern healthcar will retain it favor hospit bill system and let it resid on the organ 's mainfram , as it ha sinc 1982","ordered_present_kp":[26,292,354],"keyphrases":["network servers","Evanston Northwestern Healthcare","IBM OS 390 mainframe","Leapfrog Group","computerized physician order entry system"],"prmu":["P","P","P","U","M"]} {"id":"1917","title":"Design and modeling of an interval-based ABR flow control protocol","abstract":"A novel flow control protocol is presented for availability bit rate (ABR) service in asynchronous transfer mode (ATM) networks. This scheme features periodic explicit rate feedback that enables precise allocation of link bandwidth and buffer space on a hop-by-hop basis to guarantee maximum throughput, minimum cell loss, and high resource efficiency. With the inclusion of resource management cell synchronization and consolidation algorithms, this protocol is capable of controlling point-to-multipoint ABR services within a unified framework. The authors illustrate the modeling of single ABR connection, the interaction between multiple ABR connections, and the constraints applicable to flow control decisions. A loss-free flow control mechanism is presented for high-speed ABR connections using a fluid traffic model. Supporting algorithms and ATM signaling procedures are specified, in company with linear system modeling, numerical analysis, and simulation results, which demonstrate its performance and cost benefits in high-speed backbone networking scenarios","tok_text":"design and model of an interval-bas abr flow control protocol \n a novel flow control protocol is present for avail bit rate ( abr ) servic in asynchron transfer mode ( atm ) network . 
thi scheme featur period explicit rate feedback that enabl precis alloc of link bandwidth and buffer space on a hop-by-hop basi to guarante maximum throughput , minimum cell loss , and high resourc effici . with the inclus of resourc manag cell synchron and consolid algorithm , thi protocol is capabl of control point-to-multipoint abr servic within a unifi framework . the author illustr the model of singl abr connect , the interact between multipl abr connect , and the constraint applic to flow control decis . a loss-fre flow control mechan is present for high-spe abr connect use a fluid traffic model . support algorithm and atm signal procedur are specifi , in compani with linear system model , numer analysi , and simul result , which demonstr it perform and cost benefit in high-spe backbon network scenario","ordered_present_kp":[23,11,0,202,324,345,369,679,702,746,773,821,867,889,909,970],"keyphrases":["design","modeling","interval-based ABR flow control protocol","periodic explicit rate feedback","maximum throughput","minimum cell loss","high resource efficiency","flow control decisions","loss-free flow control mechanism","high-speed ABR connections","fluid traffic model","signaling","linear system modeling","numerical analysis","simulation","high-speed backbone networking scenarios","availability bit rate service","ATM networks","link bandwidth allocation","buffer space allocation","resource management cell synchronization algorithms","resource management cell consolidation algorithms","point-to-multipoint services"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R","R","R","R","R"]} {"id":"1679","title":"Project scheduling under time dependent costs-a branch and bound algorithm","abstract":"In a given project network, execution of each activity in normal duration requires utilization of certain resources. If faster execution of an activity is desired then additional resources at extra cost would be required. Given a project network, the cost structure for each activity and a planning horizon, the project compression problem is concerned with the determination of optimal schedule of performing each activity while satisfying given restrictions and minimizing the total cost of project execution. The paper considers the project compression problem with time dependent cost structure for each activity. The planning horizon is divided into several regular time intervals over which the cost structure of an activity may vary. But the cost structure of the activities remains the same within a time interval. The objective is to find an optimal project schedule minimizing the total project cost. We present a mathematical model for this problem, develop some heuristics and an exact branch and bound algorithm. Using simulated problems we provide an insight into the computational performances of heuristics and the branch and bound algorithm","tok_text":"project schedul under time depend costs-a branch and bound algorithm \n in a given project network , execut of each activ in normal durat requir util of certain resourc . if faster execut of an activ is desir then addit resourc at extra cost would be requir . given a project network , the cost structur for each activ and a plan horizon , the project compress problem is concern with the determin of optim schedul of perform each activ while satisfi given restrict and minim the total cost of project execut . the paper consid the project compress problem with time depend cost structur for each activ . 
the plan horizon is divid into sever regular time interv over which the cost structur of an activ may vari . but the cost structur of the activ remain the same within a time interv . the object is to find an optim project schedul minim the total project cost . we present a mathemat model for thi problem , develop some heurist and an exact branch and bound algorithm . use simul problem we provid an insight into the comput perform of heurist and the branch and bound algorithm","ordered_present_kp":[0,22,42,82,324,343,400,924],"keyphrases":["project scheduling","time dependent costs","branch and bound algorithm","project network","planning horizon","project compression problem","optimal schedule","heuristics"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"1684","title":"E-learning on the college campus: a help or hindrance to students learning objectives: a case study","abstract":"If you know how to surf the World Wide Web, have used email before, and can learn how to send an email attachment, then learning how to interact in an online course should not be difficult at all. In a way to find out, I decided to offer two identical courses, one of which would be offered online and the other the \"traditional way\". I wanted to see how students would fare with identical material provided in each course. I wanted their anonymous feedback, when the course was over","tok_text":"e-learn on the colleg campu : a help or hindranc to student learn object : a case studi \n if you know how to surf the world wide web , have use email befor , and can learn how to send an email attach , then learn how to interact in an onlin cours should not be difficult at all . in a way to find out , i decid to offer two ident cours , one of which would be offer onlin and the other the \" tradit way \" . i want to see how student would fare with ident materi provid in each cours . i want their anonym feedback , when the cours wa over","ordered_present_kp":[0],"keyphrases":["e-learning","distance education","William Paterson University"],"prmu":["P","U","U"]} {"id":"168","title":"Nurturing clients' trust to encourage engagement success during the customization of ERP systems","abstract":"Customization is a crucial, lengthy, and costly aspect in the successful implementation of ERP systems, and has, accordingly, become a major specialty of many vendors and consulting companies. The study examines how such companies can increase their clients' perception of engagement success through increased client trust that is brought about through responsive and dependable customization. Survey data from ERP customization clients show that, as hypothesized, clients' trust influenced their perception of engagement success with the company. The data also show that clients' trust in the customization company was increased when the company behaved in accordance with client expectations by being responsive, and decreased when the company behaved in a manner that contradicted these expectations by not being dependable. Responses to an open-ended question addendum attached to the survey corroborated the importance of responsiveness and dependability. Implications for customization companies and research on trust are discussed","tok_text":"nurtur client ' trust to encourag engag success dure the custom of erp system \n custom is a crucial , lengthi , and costli aspect in the success implement of erp system , and ha , accordingli , becom a major specialti of mani vendor and consult compani . 
the studi examin how such compani can increas their client ' percept of engag success through increas client trust that is brought about through respons and depend custom . survey data from erp custom client show that , as hypothes , client ' trust influenc their percept of engag success with the compani . the data also show that client ' trust in the custom compani wa increas when the compani behav in accord with client expect by be respons , and decreas when the compani behav in a manner that contradict these expect by not be depend . respons to an open-end question addendum attach to the survey corrobor the import of respons and depend . implic for custom compani and research on trust are discuss","ordered_present_kp":[357,34,57,67,226,237,412],"keyphrases":["engagement success","customization","ERP systems","vendors","consulting companies","client trust","dependability","enterprise resource planning systems","perceived responsiveness","MRP II implementation","integrity","benevolence"],"prmu":["P","P","P","P","P","P","P","M","M","M","U","U"]} {"id":"1578","title":"Records role in e-business","abstract":"Records management standards are now playing a key role in e-business strategy","tok_text":"record role in e-busi \n record manag standard are now play a key role in e-busi strategi","ordered_present_kp":[73,24],"keyphrases":["records management","e-business strategy"],"prmu":["P","P"]} {"id":"1829","title":"Improved approximation of Max-Cut on graphs of bounded degree","abstract":"Let alpha approximately=0.87856 denote the best approximation ratio currently known for the Max-Cut problem on general graphs. We consider a semidefinite relaxation of the Max-Cut problem, round it using the random hyperplane rounding technique of M.X. Goemans and D.P. Williamson (1995), and then add a local improvement step. We show that for graphs of degree at most Delta , our algorithm achieves an approximation ratio of at least alpha + epsilon , where epsilon >0 is a constant that depends only on Delta .. Using computer assisted analysis, we show that for graphs of maximal degree 3 our algorithm obtains an approximation ratio of at least 0.921, and for 3-regular graphs the approximation ratio is at least 0.924. We note that for the semidefinite relaxation of Max-Cut used by Goemans and Williamson the integrality gap is at least 1\/0.885, even for 2-regular graphs","tok_text":"improv approxim of max-cut on graph of bound degre \n let alpha approximately=0.87856 denot the best approxim ratio current known for the max-cut problem on gener graph . we consid a semidefinit relax of the max-cut problem , round it use the random hyperplan round techniqu of m.x. goeman and d.p. williamson ( 1995 ) , and then add a local improv step . we show that for graph of degre at most delta , our algorithm achiev an approxim ratio of at least alpha + epsilon , where epsilon > 0 is a constant that depend onli on delta .. use comput assist analysi , we show that for graph of maxim degre 3 our algorithm obtain an approxim ratio of at least 0.921 , and for 3-regular graph the approxim ratio is at least 0.924 . 
we note that for the semidefinit relax of max-cut use by goeman and williamson the integr gap is at least 1\/0.885 , even for 2-regular graph","ordered_present_kp":[182,100,537,848,95],"keyphrases":["best approximation ratio","approximation ratio","semidefinite relaxation","computer assisted analysis","2-regular graphs","Max-Cut approximation","bounded degree graph"],"prmu":["P","P","P","P","P","R","R"]} {"id":"1702","title":"Reconstruction of time-varying 3-D left-ventricular shape from multiview X-ray cineangiocardiograms","abstract":"This paper reports on the clinical application of a system for recovering the time-varying three-dimensional (3-D) left-ventricular (LV) shape from multiview X-ray cineangiocardiograms. Considering that X-ray cineangiocardiography is still commonly employed in clinical cardiology and computational costs for 3-D recovery and visualization are rapidly decreasing, it is meaningful to develop a clinically applicable system for 3-D LV shape recovery from X-ray cineangiocardiograms. The system is based on a previously reported closed-surface method of shape recovery from two-dimensional occluding contours with multiple views. To apply the method to \"real\" LV cineangiocardiograms, user-interactive systems were implemented for preprocessing, including detection of LV contours, calibration of the imaging geometry, and setting of the LV model coordinate system. The results for three real LV angiographic image sequences are presented, two with fixed multiple views (using supplementary angiography) and one with rotating views. 3-D reconstructions utilizing different numbers of views were compared and evaluated in terms of contours manually traced by an experienced radiologist. The performance of the preprocesses was also evaluated, and the effects of variations in user-specified parameters on the final 3-D reconstruction results were shown to be sufficiently small. These experimental results demonstrate the potential usefulness of combining multiple views for 3-D recovery from \"real\" LV cineangiocardiograms","tok_text":"reconstruct of time-vari 3-d left-ventricular shape from multiview x-ray cineangiocardiogram \n thi paper report on the clinic applic of a system for recov the time-vari three-dimension ( 3-d ) left-ventricular ( lv ) shape from multiview x-ray cineangiocardiogram . consid that x-ray cineangiocardiographi is still commonli employ in clinic cardiolog and comput cost for 3-d recoveri and visual are rapidli decreas , it is meaning to develop a clinic applic system for 3-d lv shape recoveri from x-ray cineangiocardiogram . the system is base on a previous report closed-surfac method of shape recoveri from two-dimension occlud contour with multipl view . to appli the method to \" real \" lv cineangiocardiogram , user-interact system were implement for preprocess , includ detect of lv contour , calibr of the imag geometri , and set of the lv model coordin system . the result for three real lv angiograph imag sequenc are present , two with fix multipl view ( use supplementari angiographi ) and one with rotat view . 3-d reconstruct util differ number of view were compar and evalu in term of contour manual trace by an experienc radiologist . the perform of the preprocess wa also evalu , and the effect of variat in user-specifi paramet on the final 3-d reconstruct result were shown to be suffici small . 
these experiment result demonstr the potenti use of combin multipl view for 3-d recoveri from \" real \" lv cineangiocardiogram","ordered_present_kp":[57,334,608,355,714,897,944,1124],"keyphrases":["multiview X-ray cineangiocardiograms","clinical cardiology","computational costs","two-dimensional occluding contours","user-interactive systems","angiographic image sequences","fixed multiple views","experienced radiologist","medical diagnostic imaging","time-varying 3-D left-ventricular shape reconstruction","arterial septal defect","B-spline","user-specified parameters variations"],"prmu":["P","P","P","P","P","P","P","P","M","R","U","U","R"]} {"id":"1747","title":"On a general constitutive description for the inelastic and failure behavior of fibrous laminates. II. Laminate theory and applications","abstract":"For pt. I see ibid., pp. 1159-76. The two papers report systematically a constitutive description for the inelastic and strength behavior of laminated composites reinforced with various fiber preforms. The constitutive relationship is established micromechanically, through layer-by-layer analysis. Namely, only the properties of the constituent fiber and matrix materials of the composites are required as input data. In the previous part lamina theory was presented. Three fundamental quantities of the laminae, i.e. the internal stresses generated in the constituent fiber and matrix materials and the instantaneous compliance matrix, with different fiber preform (including woven, braided, and knitted fabric) reinforcements were explicitly obtained by virtue of the bridging micromechanics model. In this paper, the laminate stress analysis is shown. The purpose of this analysis is to determine the load shared by each lamina in the laminate, so that the lamina theory can be applied. Incorporation of the constitutive equations into an FEM software package is illustrated. A number of application examples are given to demonstrate the efficiency of the constitutive theory. The predictions made include: failure envelopes of multidirectional laminates subjected to biaxial in-plane loads, thermomechanical cycling stress-strain curves of a titanium metal matrix composite laminate, S-N curves of multilayer knitted fabric reinforced laminates under tensile fatigue, and bending load-deflection plots and ultimate bending strengths of laminated braided fabric reinforced beams subjected to lateral loads","tok_text":"on a gener constitut descript for the inelast and failur behavior of fibrou lamin . ii . lamin theori and applic \n for pt . i see ibid . , pp . 1159 - 76 . the two paper report systemat a constitut descript for the inelast and strength behavior of lamin composit reinforc with variou fiber preform . the constitut relationship is establish micromechan , through layer-by-lay analysi . name , onli the properti of the constitu fiber and matrix materi of the composit are requir as input data . in the previou part lamina theori wa present . three fundament quantiti of the lamina , i.e. the intern stress gener in the constitu fiber and matrix materi and the instantan complianc matrix , with differ fiber preform ( includ woven , braid , and knit fabric ) reinforc were explicitli obtain by virtu of the bridg micromechan model . in thi paper , the lamin stress analysi is shown . the purpos of thi analysi is to determin the load share by each lamina in the lamin , so that the lamina theori can be appli . incorpor of the constitut equat into an fem softwar packag is illustr . 
a number of applic exampl are given to demonstr the effici of the constitut theori . the predict made includ : failur envelop of multidirect lamin subject to biaxial in-plan load , thermomechan cycl stress-strain curv of a titanium metal matrix composit lamin , s-n curv of multilay knit fabric reinforc lamin under tensil fatigu , and bend load-deflect plot and ultim bend strength of lamin braid fabric reinforc beam subject to later load","ordered_present_kp":[5,50,69,89,227,254,284,340,362,590,436,658,855,926,1048,1191,1209,1238,1261,1303,1342,1354,1396,1443,1466,1510],"keyphrases":["general constitutive description","failure behavior","fibrous laminates","laminate theory","strength behavior","composites","fiber preforms","micromechanics","layer-by-layer analysis","matrix materials","internal stresses","instantaneous compliance matrix","stress analysis","load","FEM software package","failure envelopes","multidirectional laminates","biaxial in-plane loads","thermomechanical cycling stress-strain curves","titanium metal matrix composite laminate","S-N curves","multilayer knitted fabric reinforced laminates","tensile fatigue","ultimate bending strengths","laminated braided fabric reinforced beams","lateral loads","inelastic behavior","bending load deflection plots"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","M"]} {"id":"1891","title":"On trajectory and force tracking control of constrained mobile manipulators with parameter uncertainty","abstract":"Studies the trajectory and force tracking control problem of mobile manipulators subject to holonomic and nonholonomic constraints with unknown inertia parameters. Adaptive controllers are proposed based on a suitable reduced dynamic model, the defined reference signals and the mixed tracking errors. The proposed controllers not only ensure the entire state of the system to asymptotically converge to the desired trajectory but also ensure the constraint force to asymptotically converge to the desired force. A detailed numerical example is presented to illustrate the developed methods","tok_text":"on trajectori and forc track control of constrain mobil manipul with paramet uncertainti \n studi the trajectori and forc track control problem of mobil manipul subject to holonom and nonholonom constraint with unknown inertia paramet . adapt control are propos base on a suitabl reduc dynam model , the defin refer signal and the mix track error . the propos control not onli ensur the entir state of the system to asymptot converg to the desir trajectori but also ensur the constraint forc to asymptot converg to the desir forc . a detail numer exampl is present to illustr the develop method","ordered_present_kp":[18,40,69,183,236,279,330,415],"keyphrases":["force tracking control","constrained mobile manipulators","parameter uncertainty","nonholonomic constraints","adaptive controllers","reduced dynamic model","mixed tracking errors","asymptotic convergence","trajectory control","holonomic constraints","position control","mobile robots"],"prmu":["P","P","P","P","P","P","P","P","R","R","M","M"]} {"id":"19","title":"Decentralized adaptive output feedback stabilization for a class of interconnected systems with unknown bound of uncertainties","abstract":"The problem of adaptive decentralized stabilization for a class of linear time-invarying large-scale systems with nonlinear interconnectivity and uncertainties is discussed. The bounds of uncertainties are assumed to be unknown. 
For such uncertain dynamic systems, an adaptive decentralized controller is presented. The resulting closed-loop systems are asymptotically stable in theory. Moreover, an adaptive decentralized control scheme is given. The scheme ensures the closed-loop systems exponentially practically stable and can be used in practical engineering. Finally, simulations show that the control scheme is effective","tok_text":"decentr adapt output feedback stabil for a class of interconnect system with unknown bound of uncertainti \n the problem of adapt decentr stabil for a class of linear time-invari large-scal system with nonlinear interconnect and uncertainti is discuss . the bound of uncertainti are assum to be unknown . for such uncertain dynam system , an adapt decentr control is present . the result closed-loop system are asymptot stabl in theori . moreov , an adapt decentr control scheme is given . the scheme ensur the closed-loop system exponenti practic stabl and can be use in practic engin . final , simul show that the control scheme is effect","ordered_present_kp":[123,387,313],"keyphrases":["adaptive decentralized stabilization","uncertain dynamic systems","closed-loop systems","robust control","large scale systems"],"prmu":["P","P","P","M","M"]} {"id":"1909","title":"Breast MR imaging with high spectral and spatial resolutions: preliminary experience","abstract":"The authors evaluated magnetic resonance (MR) imaging with high spectral and spatial resolutions (HSSR) of water and fat in breasts of healthy volunteers (n=6) and women with suspicious lesions (n=6). Fat suppression, edge delineation, and image texture were improved on MR images derived from HSSR data compared with those on conventional MR images. HSSR MR imaging data acquired before and after contrast medium injection showed spectrally inhomogeneous changes in the water resonances in small voxels that were not detectable with conventional MR imaging","tok_text":"breast mr imag with high spectral and spatial resolut : preliminari experi \n the author evalu magnet reson ( mr ) imag with high spectral and spatial resolut ( hssr ) of water and fat in breast of healthi volunt ( n=6 ) and women with suspici lesion ( n=6 ) . fat suppress , edg delin , and imag textur were improv on mr imag deriv from hssr data compar with those on convent mr imag . hssr mr imag data acquir befor and after contrast medium inject show spectral inhomogen chang in the water reson in small voxel that were not detect with convent mr imag","ordered_present_kp":[197,275,291,427,487,502,224,235,260],"keyphrases":["healthy volunteers","women","suspicious lesions","fat suppression","edge delineation","image texture","contrast medium injection","water resonances","small voxels","breast magnetic resonance imaging","high spectral spatial resolutions","magnetic resonance images","magnetic resonance imaging data"],"prmu":["P","P","P","P","P","P","P","P","P","R","R","R","R"]} {"id":"1667","title":"Combining constraint programming and linear programming on an example of bus driver scheduling","abstract":"Provides details of a successful application where the column generation algorithm was used to combine constraint programming and linear programming. In the past, constraint programming and linear programming were considered to be two competing technologies that solved similar types of problems. Both these technologies had their strengths and weaknesses. The paper shows that the two technologies can be combined together to extract the strengths of both these technologies. 
Details of a real-world application to optimize bus driver duties are given. This system was developed by ILOG for a major software house in Japan using ILOG-Solver and ILOG-CPLEX, constraint programming and linear programming C\/C++ libraries","tok_text":"combin constraint program and linear program on an exampl of bu driver schedul \n provid detail of a success applic where the column gener algorithm wa use to combin constraint program and linear program . in the past , constraint program and linear program were consid to be two compet technolog that solv similar type of problem . both these technolog had their strength and weak . the paper show that the two technolog can be combin togeth to extract the strength of both these technolog . detail of a real-world applic to optim bu driver duti are given . thi system wa develop by ilog for a major softwar hous in japan use ilog-solv and ilog-cplex , constraint program and linear program c \/ c++ librari","ordered_present_kp":[7,30,61,125,583,626,640,691],"keyphrases":["constraint programming","linear programming","bus driver scheduling","column generation algorithm","ILOG","ILOG-Solver","ILOG-CPLEX","C\/C++ libraries"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"1622","title":"Error resilient intra refresh scheme for H.26L stream","abstract":"Recently much attention has been focused on video streaming through IP-based networks. An error resilient RD intra macro-block refresh scheme for H.26L Internet video streaming is introduced. Various channel simulations have proved that this scheme is more effective than those currently adopted in H.26L","tok_text":"error resili intra refresh scheme for h.26l stream \n recent much attent ha been focus on video stream through ip-bas network . an error resili rd intra macro-block refresh scheme for h.26l internet video stream is introduc . variou channel simul have prove that thi scheme is more effect than those current adopt in h.26l","ordered_present_kp":[189,110,146,232],"keyphrases":["IP-based networks","intra macro-block refresh scheme","Internet","channel simulations","H.26L video streaming","error resilient scheme","RD intra refresh scheme","video communication","RDerr scheme","RDall scheme"],"prmu":["P","P","P","P","R","R","R","M","M","M"]} {"id":"176","title":"Knowledge model reuse: therapy decision through specialisation of a generic decision model","abstract":"We present the definition of the therapy decision task and its associated Heuristic Multi-Attribute (HM) solving method, in the form of a KADS-style specification. The goal of the therapy decision task is to identify the ideal therapy, for a given patient, in accordance with a set of objectives of a diverse nature constituting a global therapy-evaluation framework in which considerations such as patient preferences and quality-of-life results are integrated. We give a high-level overview of this task as a specialisation of the generic decision task, and additional decomposition methods for the subtasks involved. These subtasks possess some reflective capabilities for reasoning about self-models, particularly the learning subtask, which incrementally corrects and refines the model used to assess the effects of the therapies. This work illustrates the process of reuse in the framework of AI software development methodologies such as KADS-CommonKADS in order to obtain new (more specialised but still generic) components for the analysis libraries developed in this context. 
In order to maximise reuse benefits, where possible, the therapy decision task and HM method have been defined in terms of regular components from the earlier-mentioned libraries. To emphasise the importance of using a rigorous approach to the modelling of domain and method ontologies, we make extensive use of the semi-formal object-oriented analysis notation UML, together with its associated constraint language OCL, to illustrate the ontology of the decision method and the corresponding specific one of the therapy decision domain, the latter being a refinement via inheritance of the former","tok_text":"knowledg model reus : therapi decis through specialis of a gener decis model \n we present the definit of the therapi decis task and it associ heurist multi-attribut ( hm ) solv method , in the form of a kads-styl specif . the goal of the therapi decis task is to identifi the ideal therapi , for a given patient , in accord with a set of object of a divers natur constitut a global therapy-evalu framework in which consider such as patient prefer and quality-of-lif result are integr . we give a high-level overview of thi task as a specialis of the gener decis task , and addit decomposit method for the subtask involv . these subtask possess some reflect capabl for reason about self-model , particularli the learn subtask , which increment correct and refin the model use to assess the effect of the therapi . thi work illustr the process of reus in the framework of ai softwar develop methodolog such as kads-commonkad in order to obtain new ( more specialis but still gener ) compon for the analysi librari develop in thi context . in order to maximis reus benefit , where possibl , the therapi decis task and hm method have been defin in term of regular compon from the earlier-ment librari . to emphasis the import of use a rigor approach to the model of domain and method ontolog , we make extens use of the semi-form object-ori analysi notat uml , togeth with it associ constraint languag ocl , to illustr the ontolog of the decis method and the correspond specif one of the therapi decis domain , the latter be a refin via inherit of the former","ordered_present_kp":[0,109,203,375,432,668,711,873,1280,1326,1351,1379,1398],"keyphrases":["knowledge model reuse","therapy decision task","KADS-style specification","global therapy-evaluation framework","patient preferences","reasoning","learning subtask","software development methodologies","ontologies","object-oriented analysis notation","UML","constraint language","OCL","CommonKADS","generic decision model specialisation","Heuristic Multi-Attribute solving method"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","U","R","R"]} {"id":"1566","title":"A numerical C\/sup 1\/-shadowing result for retarded functional differential equations","abstract":"This paper gives a numerical C\/sup 1\/-shadowing between the exact solutions of a functional differential equation and its numerical approximations. The shadowing result is obtained by comparing exact solutions with numerical approximation which do not share the same initial value. Behavior of stable manifolds of functional differential equations under numerics will follow from the shadowing result","tok_text":"a numer c \/ sup 1\/-shadow result for retard function differenti equat \n thi paper give a numer c \/ sup 1\/-shadow between the exact solut of a function differenti equat and it numer approxim . 
the shadow result is obtain by compar exact solut with numer approxim which do not share the same initi valu . behavior of stabl manifold of function differenti equat under numer will follow from the shadow result","ordered_present_kp":[2,125,175,315,37],"keyphrases":["numerical C\/sup 1\/-shadowing","retarded functional differential equations","exact solutions","numerical approximations","stable manifolds"],"prmu":["P","P","P","P","P"]} {"id":"1523","title":"Process specialization: defining specialization for state diagrams","abstract":"A precise definition of specialization and inheritance promises to be as useful in organizational process modeling as it is in object modeling. It would help us better understand, maintain, reuse, and generate process models. However, even though object-oriented analysis and design methodologies take full advantage of the object specialization hierarchy, the process specialization hierarchy is not supported in major process representations, such as the state diagram, data flow diagram, and UML representations. Partly underlying this lack of support is an implicit assumption that we can always specialize a process by treating it as \"just another object.\" We argue in this paper that this is not so straightforward as it might seem; we argue that a process-specific approach must be developed. We propose such an approach in the form of a set of transformations which, when applied to a process description, always result in specialization. We illustrate this approach by applying it to the state diagram representation and demonstrate that this approach to process specialization is not only theoretically possible, but shows promise as a method for categorizing and analyzing processes. We point out apparent inconsistencies between our notion of process specialization and existing work on object specialization but show that these inconsistencies are superficial and that the definition we provide is compatible with the traditional notion of specialization","tok_text":"process special : defin special for state diagram \n a precis definit of special and inherit promis to be as use in organiz process model as it is in object model . it would help us better understand , maintain , reus , and gener process model . howev , even though object-ori analysi and design methodolog take full advantag of the object special hierarchi , the process special hierarchi is not support in major process represent , such as the state diagram , data flow diagram , and uml represent . partli underli thi lack of support is an implicit assumpt that we can alway special a process by treat it as \" just anoth object . \" we argu in thi paper that thi is not so straightforward as it might seem ; we argu that a process-specif approach must be develop . we propos such an approach in the form of a set of transform which , when appli to a process descript , alway result in special . we illustr thi approach by appli it to the state diagram represent and demonstr that thi approach to process special is not onli theoret possibl , but show promis as a method for categor and analyz process . 
we point out appar inconsist between our notion of process special and exist work on object special but show that these inconsist are superfici and that the definit we provid is compat with the tradit notion of special","ordered_present_kp":[0,36,84,115,265,332,413],"keyphrases":["process specialization","state diagrams","inheritance","organizational process modeling","object-oriented analysis","object specialization hierarchy","process representation","object-oriented design"],"prmu":["P","P","P","P","P","P","P","R"]} {"id":"1787","title":"The theory of information reversal","abstract":"The end of the industrial age coincides with the advent of the information society as the next model of social and economic organization, which brings about significant changes in the way modern man conceives work and the social environment. The functional basis of the new model is pivoted upon the effort to formulate the theory on the violent reversal of the basic relationship between man and information, and isolate it as one of the components for the creation of the new electronic reality. The objective of the theory of reversal is to effectively contribute to the formulation of a new definition consideration in regards to the concept of the emerging information society. In order to empirically apply the theory of reversal, we examine a case study based on the example of the digital library","tok_text":"the theori of inform revers \n the end of the industri age coincid with the advent of the inform societi as the next model of social and econom organ , which bring about signific chang in the way modern man conceiv work and the social environ . the function basi of the new model is pivot upon the effort to formul the theori on the violent revers of the basic relationship between man and inform , and isol it as one of the compon for the creation of the new electron realiti . the object of the theori of revers is to effect contribut to the formul of a new definit consider in regard to the concept of the emerg inform societi . in order to empir appli the theori of revers , we examin a case studi base on the exampl of the digit librari","ordered_present_kp":[89,45,136,690,727],"keyphrases":["industrial age","information society","economic organization","case study","digital library","information reversal theory","social organization","information systems"],"prmu":["P","P","P","P","P","R","R","M"]} {"id":"1851","title":"Supporting global user profiles through trusted authorities","abstract":"Personalization generally refers to making a Web site more responsive to the unique and individual needs of each user. We argue that for personalization to work effectively, detailed and interoperable user profiles should be globally available for authorized sites, and these profiles should dynamically reflect changes in user interests. Creating user profiles from user click-stream data seems to be an effective way of generating detailed and dynamic user profiles. However, a user profile generated in this way is available only on the computer where the user accesses his browser, and is inaccessible when the same user works on a different computer. On the other hand, integration of the Internet with telecommunication networks has made it possible for the users to connect to the Web with a variety of mobile devices as well as desktops. This requires that user profiles should be available to any desktop or mobile device on the Internet that users choose to work with. 
In this paper, we address these problems through the concept of \"trusted authority\". A user agent at the client side that captures the user click stream, dynamically generates a navigational history 'log' file in Extensible Markup Language (XML). This log file is then used to produce 'user profiles' in a resource description framework (RDF). A user's right to privacy is provided through the Platform for Privacy Preferences (P3P) standard. User profiles are uploaded to the trusted authority and served next time the user connects to the Web","tok_text":"support global user profil through trust author \n person gener refer to make a web site more respons to the uniqu and individu need of each user . we argu that for person to work effect , detail and interoper user profil should be global avail for author site , and these profil should dynam reflect chang in user interest . creat user profil from user click-stream data seem to be an effect way of gener detail and dynam user profil . howev , a user profil gener in thi way is avail onli on the comput where the user access hi browser , and is inaccess when the same user work on a differ comput . on the other hand , integr of the internet with telecommun network ha made it possibl for the user to connect to the web with a varieti of mobil devic as well as desktop . thi requir that user profil should be avail to ani desktop or mobil devic on the internet that user choos to work with . in thi paper , we address these problem through the concept of \" trust author \" . a user agent at the client side that captur the user click stream , dynam gener a navig histori ' log ' file in extens markup languag ( xml ) . thi log file is then use to produc ' user profil ' in a resourc descript framework ( rdf ) . a user 's right to privaci is provid through the platform for privaci prefer ( p3p ) standard . user profil are upload to the trust author and serv next time the user connect to the web","ordered_present_kp":[8,35,50,79,633,647,738,976,1022,1110,1174,1230],"keyphrases":["global user profiles","trusted authorities","personalization","Web site","Internet","telecommunication networks","mobile device","user agent","user click stream","XML","resource description framework","privacy","navigational history log file","Platform for Privacy Preferences standard","namespace qualifier","globally unique user ID\/password identification"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R","R","U","M"]} {"id":"1814","title":"Control of integral processes with dead-time. 2. Quantitative analysis","abstract":"For part 1, see ibid., p.285-90, (2002). Several different control schemes for integral processes with dead time resulted in the same disturbance response. It has already been shown that such a response is subideal. Hence, it is necessary to quantitatively analyse the achievable specifications and the robust stability regions. The control parameter can be quantitatively determined with a compromise between the disturbance response and the robustness. Four specifications: (normalised) maximum dynamic error, maximum decay rate, (normalised) control action bound and approximate recovery time are used to characterise the step-disturbance response. It is shown that any attempt to obtain a (normalised) dynamic error less than tau \/sub m\/ is impossible and a sufficient condition on the (relative) gain-uncertainty bound is square root (3)\/2","tok_text":"control of integr process with dead-tim . 2 . quantit analysi \n for part 1 , see ibid . 
, p.285 - 90 , ( 2002 ) . sever differ control scheme for integr process with dead time result in the same disturb respons . it ha alreadi been shown that such a respons is subid . henc , it is necessari to quantit analys the achiev specif and the robust stabil region . the control paramet can be quantit determin with a compromis between the disturb respons and the robust . four specif : ( normalis ) maximum dynam error , maximum decay rate , ( normalis ) control action bound and approxim recoveri time are use to characteris the step-disturb respons . it is shown that ani attempt to obtain a ( normalis ) dynam error less than tau \/sub m\/ is imposs and a suffici condit on the ( rel ) gain-uncertainti bound is squar root ( 3)\/2","ordered_present_kp":[11,31,46,195,336,336,492,514,548,573,623,750,780],"keyphrases":["integral processes","dead-time","quantitative analysis","disturbance response","robust stability regions","robustness","maximum dynamic error","maximum decay rate","control action bound","approximate recovery time","step-disturbance response","sufficient condition","gain-uncertainty bound"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1486","title":"Hand-held digital video-camera for eye examination and follow-up","abstract":"We developed a hand-held digital colour video-camera for eye examination in primary care. The device weighed 550 g. It featured a charge-coupled device (CCD) and corrective optics. Both colour video and digital still images could be taken. The video-camera was connected to a PC with software for database storage, image processing and telecommunication. We studied 88 normal subjects (38 male, 50 female), aged 7-62 years. It was not necessary to use mydriatic eye drops for pupillary dilation. Satisfactory digital images of the whole face and the anterior eye were obtained. The optic disc and the central part of the ocular fundus could also be recorded. Image quality of the face and the anterior eye were excellent; image quality of the optic disc and macula were good enough for tele-ophthalmology. Further studies are needed to evaluate the usefulness of the equipment in different clinical conditions","tok_text":"hand-held digit video-camera for eye examin and follow-up \n we develop a hand-held digit colour video-camera for eye examin in primari care . the devic weigh 550 g. it featur a charge-coupl devic ( ccd ) and correct optic . both colour video and digit still imag could be taken . the video-camera wa connect to a pc with softwar for databas storag , imag process and telecommun . we studi 88 normal subject ( 38 male , 50 femal ) , age 7 - 62 year . it wa not necessari to use mydriat eye drop for pupillari dilat . satisfactori digit imag of the whole face and the anterior eye were obtain . the optic disc and the central part of the ocular fundu could also be record . imag qualiti of the face and the anterior eye were excel ; imag qualiti of the optic disc and macula were good enough for tele-ophthalmolog . 
further studi are need to evalu the use of the equip in differ clinic condit","ordered_present_kp":[33,127,177,208,246,313,321,333,350,367,392,547,566,597,636,672,794,877,48],"keyphrases":["eye examination","follow-up","primary care","charge-coupled device","corrective optics","digital still images","PC","software","database storage","image processing","telecommunication","normal subjects","whole face","anterior eye","optic disc","ocular fundus","image quality","tele-ophthalmology","clinical conditions","hand-held digital colour video camera","colour video images"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","M","R"]} {"id":"152","title":"Linear tense logics of increasing sets","abstract":"We provide an extension of the language of linear tense logic with future and past connectives F and P, respectively, by a modality that quantifies over the points of some set which is assumed to increase in the course of time. In this way we obtain a general framework for modelling growth qualitatively. We develop an appropriate logical system, prove a corresponding completeness and decidability result and discuss the various kinds of flow of time in the new context. We also consider decreasing sets briefly","tok_text":"linear tens logic of increas set \n we provid an extens of the languag of linear tens logic with futur and past connect f and p , respect , by a modal that quantifi over the point of some set which is assum to increas in the cours of time . in thi way we obtain a gener framework for model growth qualit . we develop an appropri logic system , prove a correspond complet and decid result and discuss the variou kind of flow of time in the new context . we also consid decreas set briefli","ordered_present_kp":[0,96,328,362,374,467],"keyphrases":["linear tense logic","future and past connectives","logical system","completeness","decidability","decreasing sets","temporal reasoning"],"prmu":["P","P","P","P","P","P","U"]} {"id":"1542","title":"The open-source HCS project","abstract":"Despite the rumors, the HCS II project is not dead. In fact, HCS has been licensed and is now an open-source project. In this article, the author brings us up to speed on the HCS II project's past, present, and future. The HCS II is an expandable, standalone, network-based (RS-485), intelligent-node, industrial-oriented supervisory control (SC) system intended for demanding home control applications. The HCS incorporates direct and remote digital inputs and outputs, direct and remote analog inputs and outputs, real time or Boolean decision event triggering, X10 transmission and reception, infrared remote control transmission and reception, remote LCDs, and a master console. Its program is compiled on a PC with the XPRESS compiler and then downloaded to the SC where it runs independently of the PC","tok_text":"the open-sourc hc project \n despit the rumor , the hc ii project is not dead . in fact , hc ha been licens and is now an open-sourc project . in thi articl , the author bring us up to speed on the hc ii project 's past , present , and futur . the hc ii is an expand , standalon , network-bas ( rs-485 ) , intelligent-nod , industrial-ori supervisori control ( sc ) system intend for demand home control applic . the hc incorpor direct and remot digit input and output , direct and remot analog input and output , real time or boolean decis event trigger , x10 transmiss and recept , infrar remot control transmiss and recept , remot lcd , and a master consol . 
it program is compil on a pc with the xpress compil and then download to the sc where it run independ of the pc","ordered_present_kp":[51,390,280],"keyphrases":["HCS II","network-based","home control","supervisory control system"],"prmu":["P","P","P","R"]} {"id":"1507","title":"Ethnography, customers, and negotiated interactions at the airport","abstract":"In the late 1990s, tightly coordinated airline schedules unraveled owing to massive delays resulting from inclement weather, overbooked flights, and airline operational difficulties. As schedules slipped, the delayed departures and late arrivals led to systemwide breakdowns, customers missed their connections, and airline work activities fell further out of sync. In offering possible answers, we emphasize the need to consider the customer as participant, following the human-centered computing model. Our study applied ethnographic methods to understand the airline system domain and the nature of airline delays, and it revealed the deficiencies of the airline production system model of operations. The research insights that led us to shift from a production and marketing system perspective to a customer-as-participant view might appear obvious to some readers. However, we do not know of any airline that designs its operations and technologies around any other model than the production and marketing system view. Our human-centered analysis used ethnographic methods to gather information, offering new insight into airline delays and suggesting effective ways to improve operations reliability","tok_text":"ethnographi , custom , and negoti interact at the airport \n in the late 1990 , tightli coordin airlin schedul unravel owe to massiv delay result from inclement weather , overbook flight , and airlin oper difficulti . as schedul slip , the delay departur and late arriv led to systemwid breakdown , custom miss their connect , and airlin work activ fell further out of sync . in offer possibl answer , we emphas the need to consid the custom as particip , follow the human-cent comput model . our studi appli ethnograph method to understand the airlin system domain and the natur of airlin delay , and it reveal the defici of the airlin product system model of oper . the research insight that led us to shift from a product and market system perspect to a customer-as-particip view might appear obviou to some reader . howev , we do not know of ani airlin that design it oper and technolog around ani other model than the product and market system view . our human-cent analysi use ethnograph method to gather inform , offer new insight into airlin delay and suggest effect way to improv oper reliabl","ordered_present_kp":[466,50,0,27,582,756,1088],"keyphrases":["ethnography","negotiated interactions","airports","human-centered computing model","airline delays","customer-as-participant view","operations reliability","customer trajectories","employees","airline production system operations model"],"prmu":["P","P","P","P","P","P","P","M","U","R"]} {"id":"1643","title":"Effectiveness of user testing and heuristic evaluation as a function of performance classification","abstract":"For different levels of user performance, different types of information are processed and users will make different types of errors. Based on the error's immediate cause and the information being processed, usability problems can be classified into three categories. They are usability problems associated with skill-based, rule-based and knowledge-based levels of performance. 
In this paper, a user interface for a Web-based software program was evaluated with two usability evaluation methods, user testing and heuristic evaluation. The experiment discovered that the heuristic evaluation with human factor experts is more effective than user testing in identifying usability problems associated with skill-based and rule-based levels of performance. User testing is more effective than heuristic evaluation in finding usability problems associated with the knowledge-based level of performance. The practical application of this research is also discussed in the paper","tok_text":"effect of user test and heurist evalu as a function of perform classif \n for differ level of user perform , differ type of inform are process and user will make differ type of error . base on the error 's immedi caus and the inform be process , usabl problem can be classifi into three categori . they are usabl problem associ with skill-bas , rule-bas and knowledge-bas level of perform . in thi paper , a user interfac for a web-bas softwar program wa evalu with two usabl evalu method , user test and heurist evalu . the experi discov that the heurist evalu with human factor expert is more effect than user test in identifi usabl problem associ with skill-bas and rule-bas level of perform . user test is more effect than heurist evalu in find usabl problem associ with the knowledge-bas level of perform . the practic applic of thi research is also discuss in the paper","ordered_present_kp":[10,24,55,93,245,407,427,524,566],"keyphrases":["user testing","heuristic evaluation","performance classification","user performance","usability","user interface","Web-based software","experiment","human factors","knowledge-based performance levels","skill-based performance levels","rule-based performance levels"],"prmu":["P","P","P","P","P","P","P","P","P","R","R","R"]} {"id":"1606","title":"Single machine earliness-tardiness scheduling with resource-dependent release dates","abstract":"This paper deals with the single machine earliness and tardiness scheduling problem with a common due date and resource-dependent release dates. It is assumed that the cost of resource consumption of a job is a non-increasing linear function of the job release date, and this function is common for all jobs. The objective is to find a schedule and job release dates that minimize the total resource consumption, and earliness and tardiness penalties. It is shown that the problem is NP-hard in the ordinary sense even if the due date is unrestricted (the number of jobs that can be scheduled before the due date is unrestricted). An exact dynamic programming (DP) algorithm for small and medium size problems is developed. A heuristic algorithm for large-scale problems is also proposed and the results of a computational comparison between heuristic and optimal solutions are discussed","tok_text":"singl machin earliness-tardi schedul with resource-depend releas date \n thi paper deal with the singl machin earli and tardi schedul problem with a common due date and resource-depend releas date . it is assum that the cost of resourc consumpt of a job is a non-increas linear function of the job releas date , and thi function is common for all job . the object is to find a schedul and job releas date that minim the total resourc consumpt , and earli and tardi penalti . it is shown that the problem is np-hard in the ordinari sens even if the due date is unrestrict ( the number of job that can be schedul befor the due date is unrestrict ) . 
an exact dynam program ( dp ) algorithm for small and medium size problem is develop . a heurist algorithm for large-scal problem is also propos and the result of a comput comparison between heurist and optim solut are discuss","ordered_present_kp":[0,42,148,293,701,736,758],"keyphrases":["single machine earliness-tardiness scheduling","resource-dependent release dates","common due date","job release date","medium size problems","heuristic algorithm","large-scale problems","job resource consumption cost","nonincreasing linear function","total resource consumption minimization","NP-hard problem","exact dynamic programming algorithm","small size problems","polynomial time algorithm"],"prmu":["P","P","P","P","P","P","P","R","M","R","R","R","R","M"]} {"id":"1875","title":"The design and implementation of VAMPIRE","abstract":"We describe VAMPIRE: a high-performance theorem prover for first-order logic. As our description is mostly targeted to the developers of such systems and specialists in automated reasoning, it focuses on the design of the system and some key implementation features. We also analyze the performance of the prover at CASC-JC","tok_text":"the design and implement of vampir \n we describ vampir : a high-perform theorem prover for first-ord logic . as our descript is mostli target to the develop of such system and specialist in autom reason , it focus on the design of the system and some key implement featur . we also analyz the perform of the prover at casc-jc","ordered_present_kp":[28,59,91,190,318],"keyphrases":["VAMPIRE","high-performance theorem prover","first-order logic","automated reasoning","CASC-JC","performance evaluation","resolution theorem proving"],"prmu":["P","P","P","P","P","M","M"]} {"id":"1830","title":"Approximation of pathwidth of outerplanar graphs","abstract":"There exists a polynomial time algorithm to compute the pathwidth of outerplanar graphs, but the large exponent makes this algorithm impractical. In this paper, we give an algorithm that, given a biconnected outerplanar graph G, finds a path decomposition of G of pathwidth at most twice the pathwidth of G plus one. To obtain the result, several relations between the pathwidth of a biconnected outerplanar graph and its dual are established","tok_text":"approxim of pathwidth of outerplanar graph \n there exist a polynomi time algorithm to comput the pathwidth of outerplanar graph , but the larg expon make thi algorithm impract . in thi paper , we give an algorithm that , given a biconnect outerplanar graph g , find a path decomposit of g of pathwidth at most twice the pathwidth of g plu one . to obtain the result , sever relat between the pathwidth of a biconnect outerplanar graph and it dual are establish","ordered_present_kp":[25,59,229,268],"keyphrases":["outerplanar graphs","polynomial time algorithm","biconnected outerplanar graph","path decomposition","pathwidth approximation"],"prmu":["P","P","P","P","R"]} {"id":"1888","title":"L\/sub 2\/ model reduction and variance reduction","abstract":"We examine certain variance properties of model reduction. The focus is on L\/sub 2\/ model reduction, but some general results are also presented. These general results can be used to analyze various other model reduction schemes. The models we study are finite impulse response (FIR) and output error (OE) models. We compare the variance of two estimated models. The first one is estimated directly from data and the other one is computed by reducing a high order model, by L\/sub 2\/ model reduction. 
In the FIR case we show that it is never better to estimate the model directly from data, compared to estimating it via L\/sub 2\/ model reduction of a high order FIR model. For OE models we show that the reduced model has the same variance as the directly estimated one if the reduced model class used contains the true system","tok_text":"l \/ sub 2\/ model reduct and varianc reduct \n we examin certain varianc properti of model reduct . the focu is on l \/ sub 2\/ model reduct , but some gener result are also present . these gener result can be use to analyz variou other model reduct scheme . the model we studi are finit impuls respons ( fir ) and output error ( oe ) model . we compar the varianc of two estim model . the first one is estim directli from data and the other one is comput by reduc a high order model , by l \/ sub 2\/ model reduct . in the fir case we show that it is never better to estim the model directli from data , compar to estim it via l \/ sub 2\/ model reduct of a high order fir model . for oe model we show that the reduc model ha the same varianc as the directli estim one if the reduc model class use contain the true system","ordered_present_kp":[0,28,662],"keyphrases":["L\/sub 2\/ model reduction","variance reduction","FIR models","finite impulse response models","output error models","identification"],"prmu":["P","P","P","R","R","U"]} {"id":"1462","title":"Non-linear analysis of nearly saturated porous media: theoretical and numerical formulation","abstract":"A formulation for a porous medium saturated with a compressible fluid undergoing large elastic and plastic deformations is presented. A consistent thermodynamic formulation is proposed for the two-phase mixture problem; thus preserving a straightforward and robust numerical scheme. A novel feature is the specification of the fluid compressibility in terms of a volumetric logarithmic strain, which is energy conjugated to the fluid pressure in the entropy inequality. As a result, the entropy inequality is used to separate three different mechanisms representing the response: effective stress response according to Terzaghi in the solid skeleton, fluid pressure response to compressibility of the fluid, and dissipative Darcy flow representing the interaction between the two phases. The paper is concluded with a couple of numerical examples that display the predictive capabilities of the proposed formulation. In particular, we consider results for the kinematically linear theory as compared to the kinematically non-linear theory","tok_text":"non-linear analysi of nearli satur porou media : theoret and numer formul \n a formul for a porou medium satur with a compress fluid undergo larg elast and plastic deform is present . a consist thermodynam formul is propos for the two-phas mixtur problem ; thu preserv a straightforward and robust numer scheme . a novel featur is the specif of the fluid compress in term of a volumetr logarithm strain , which is energi conjug to the fluid pressur in the entropi inequ . as a result , the entropi inequ is use to separ three differ mechan repres the respons : effect stress respons accord to terzaghi in the solid skeleton , fluid pressur respons to compress of the fluid , and dissip darci flow repres the interact between the two phase . the paper is conclud with a coupl of numer exampl that display the predict capabl of the propos formul . 
in particular , we consid result for the kinemat linear theori as compar to the kinemat non-linear theori","ordered_present_kp":[22,117,185,230,290,348,376,434,455,560,608,625,678,807,886],"keyphrases":["nearly saturated porous media","compressible fluid","consistent thermodynamic formulation","two-phase mixture problem","robust numerical scheme","fluid compressibility","volumetric logarithmic strain","fluid pressure","entropy inequality","effective stress response","solid skeleton","fluid pressure response","dissipative Darcy flow","predictive capabilities","kinematically linear theory","nonlinear analysis","large elastic deformations","large plastic deformations","kinematically nonlinear theory"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","M","R","R","M"]} {"id":"1726","title":"Two-layer model for the formation of states of the hidden Markov chains","abstract":"Procedures for the formation of states of the hidden Markov models are described. Formant amplitudes and frequencies are used as state features. The training strategy is presented that allows one to calculate the parameters of conditional probabilities of the generation of a given formant set by a given hidden state with the help of the maximum likelihood method","tok_text":"two-lay model for the format of state of the hidden markov chain \n procedur for the format of state of the hidden markov model are describ . formant amplitud and frequenc are use as state featur . the train strategi is present that allow one to calcul the paramet of condit probabl of the gener of a given formant set by a given hidden state with the help of the maximum likelihood method","ordered_present_kp":[107,141,182,267,329,363],"keyphrases":["hidden Markov models","formant amplitudes","state features","conditional probabilities","hidden state","maximum likelihood method","formant frequencies"],"prmu":["P","P","P","P","P","P","R"]} {"id":"1763","title":"Numerical studies of 2D free surface waves with fixed bottom","abstract":"The motion of surface waves under the effect of bottom is a very interesting and challenging phenomenon in the nature. we use boundary integral method to compute and analyze this problem. In the linear analysis, the linearized equations have bounded error increase under some compatible conditions. This contributes to the cancellation of instable Kelvin-Helmholtz terms. Under the effect of bottom, the existence of equations is hard to determine, but given some limitations it proves true. These limitations are that the swing of interfaces should be small enough, and the distance between surface and bottom should be large enough. In order to maintain the stability of computation, some compatible relationship must be satisfied. In the numerical examples, the simulation of standing waves and breaking waves are calculated. And in the case of shallow bottom, we found that the behavior of waves are rather singular","tok_text":"numer studi of 2d free surfac wave with fix bottom \n the motion of surfac wave under the effect of bottom is a veri interest and challeng phenomenon in the natur . we use boundari integr method to comput and analyz thi problem . in the linear analysi , the linear equat have bound error increas under some compat condit . thi contribut to the cancel of instabl kelvin-helmholtz term . under the effect of bottom , the exist of equat is hard to determin , but given some limit it prove true . 
these limit are that the swing of interfac should be small enough , and the distanc between surfac and bottom should be larg enough . in order to maintain the stabil of comput , some compat relationship must be satisfi . in the numer exampl , the simul of stand wave and break wave are calcul . and in the case of shallow bottom , we found that the behavior of wave are rather singular","ordered_present_kp":[0,15,171,236,257,353],"keyphrases":["numerical studies","2D free surface waves","boundary integral method","linear analysis","linearized equations","instable Kelvin-Helmholtz terms"],"prmu":["P","P","P","P","P","P"]} {"id":"1848","title":"Contracting in the days of ebusiness","abstract":"Putting electronic business on a sound foundation-model theoretically as well as technologically-is a central challenge for research as well as commercial development. This paper concentrates on the discovery and negotiation phase of concluding an agreement based on a contract. We present a methodology for moving seamlessly from a many-to-many relationship in the discovery phase to a one-to-one relationship in the contract negotiation phase. Making the content of contracts persistent is achieved by reconstructing contract templates by means of mereologic (logic of the whole-part relation). Possibly nested sub-structures of the contract template are taken as a basis for negotiation in a dialogical way. For the negotiation itself the contract templates are extended by implications (logical) and sequences (topical)","tok_text":"contract in the day of ebusi \n put electron busi on a sound foundation-model theoret as well as technologically-i a central challeng for research as well as commerci develop . thi paper concentr on the discoveri and negoti phase of conclud an agreement base on a contract . we present a methodolog for move seamlessli from a many-to-mani relationship in the discoveri phase to a one-to-on relationship in the contract negoti phase . make the content of contract persist is achiev by reconstruct contract templat by mean of mereolog ( logic of the whole-part relat ) . possibl nest sub-structur of the contract templat are taken as a basi for negoti in a dialog way . for the negoti itself the contract templat are extend by implic ( logic ) and sequenc ( topic )","ordered_present_kp":[35,358,0,325,379,409,523,495,576,745,724],"keyphrases":["contracting","electronic business","many-to-many relationship","discovery phase","one-to-one relationship","contract negotiation phase","contract templates","mereologic","nested sub-structure","implications","sequences"],"prmu":["P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1910","title":"Breast cancer: effectiveness of computer-aided diagnosis-observer study with independent database of mammograms","abstract":"Evaluates the effectiveness of a computerized classification method as an aid to radiologists reviewing clinical mammograms for which the diagnoses were unknown to both the radiologists and the computer. Six mammographers and six community radiologists participated in an observer study. These 12 radiologists interpreted, with and without the computer aid, 110 cases that were unknown to both the 12 radiologist observers and the trained computer classification scheme. The radiologists' performances in differentiating between benign and malignant masses without and with the computer aid were evaluated with receiver operating characteristic (ROC) analysis. 
Two-tailed P values were calculated for the Student t test to indicate the statistical significance of the differences in performances with and without the computer aid. When the computer aid was used, the average performance of the 12 radiologists improved, as indicated by an increase in the area under the ROC curve (A\/sub z\/) from 0.93 to 0.96 (P<.001), by an increase in partial area under the ROC curve (\/sub 0.9\/0A'\/sub z\/) from 0.56 to 0.72 (P<.001), and by an increase in sensitivity from 94% to 98% (P=.022). No statistically significant difference in specificity was found between readings with and those without computer aid ( Delta +-0.014; P=.46; 95% Cl: -0.054, 0.026), where Delta is difference in specificity. When we analyzed results from the mammographers and community radiologists as separate groups, a larger improvement was demonstrated for the community radiologists. Computer-aided diagnosis can potentially help radiologists improve their diagnostic accuracy in the task of differentiating between benign and malignant masses seen on mammograms","tok_text":"breast cancer : effect of computer-aid diagnosis-observ studi with independ databas of mammogram \n evalu the effect of a computer classif method as an aid to radiologist review clinic mammogram for which the diagnos were unknown to both the radiologist and the comput . six mammograph and six commun radiologist particip in an observ studi . these 12 radiologist interpret , with and without the comput aid , 110 case that were unknown to both the 12 radiologist observ and the train comput classif scheme . the radiologist ' perform in differenti between benign and malign mass without and with the comput aid were evalu with receiv oper characterist ( roc ) analysi . two-tail p valu were calcul for the student t test to indic the statist signific of the differ in perform with and without the comput aid . when the comput aid wa use , the averag perform of the 12 radiologist improv , as indic by an increas in the area under the roc curv ( a \/ sub z\/ ) from 0.93 to 0.96 ( p<.001 ) , by an increas in partial area under the roc curv ( \/sub 0.9\/0a'\/sub z\/ ) from 0.56 to 0.72 ( p<.001 ) , and by an increas in sensit from 94 % to 98 % ( p=.022 ) . no statist signific differ in specif wa found between read with and those without comput aid ( delta + -0.014 ; p=.46 ; 95 % cl : -0.054 , 0.026 ) , where delta is differ in specif . when we analyz result from the mammograph and commun radiologist as separ group , a larger improv wa demonstr for the commun radiologist . 
computer-aid diagnosi can potenti help radiologist improv their diagnost accuraci in the task of differenti between benign and malign mass seen on mammogram","ordered_present_kp":[121,177,49,0,26,67,478,451,567,670,706,734,526,843,1538,396,274,293],"keyphrases":["breast cancer","computer-aided diagnosis","observer study","independent database","computerized classification method","clinical mammograms","mammographers","community radiologists","computer aid","radiologist observers","trained computer classification scheme","performances","malignant masses","two-tailed P values","Student t test","statistical significance","average performance","diagnostic accuracy","benign masses","receiver operating characteristic analysis","receiver operating characteristic curve"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R"]} {"id":"192","title":"New Jersey African American women writers and their publications: a study of identification from written and oral sources","abstract":"This study examines the use of written sources, and personal interviews and informal conversations with individuals from New Jersey's religious, political, and educational community to identify African American women writers in New Jersey and their intellectual output. The focus on recognizing the community as an oral repository of history and then tapping these oral sources for collection development and acquisition purposes is supported by empirical and qualitative evidence. Findings indicate that written sources are so limited that information professionals must rely on oral sources to uncover local writers and their publications","tok_text":"new jersey african american women writer and their public : a studi of identif from written and oral sourc \n thi studi examin the use of written sourc , and person interview and inform convers with individu from new jersey 's religi , polit , and educ commun to identifi african american women writer in new jersey and their intellectu output . the focu on recogn the commun as an oral repositori of histori and then tap these oral sourc for collect develop and acquisit purpos is support by empir and qualit evid . find indic that written sourc are so limit that inform profession must reli on oral sourc to uncov local writer and their public","ordered_present_kp":[0,137,157,178,325,381,400,442,615],"keyphrases":["New Jersey African American women writers","written sources","personal interviews","informal conversations","intellectual output","oral repository","history","collection development","local writers","special collections"],"prmu":["P","P","P","P","P","P","P","P","P","M"]} {"id":"1683","title":"Unlocking the potential of videoconferencing","abstract":"I propose in this paper to show, through a number of case studies, that videoconferencing is user-friendly, cost-effective, time-effective and life-enhancing for people of all ages and abilities and that it requires only a creative and imaginative approach to unlock its potential. I believe that these benefits need not, and should not, be restricted to the education sector. My examples will range from simple storytelling, through accessing international experts, professional development and distance learning in a variety of forms, to the use of videoconferencing for virtual meetings and planning sessions. 
In some cases, extracts from the reactions and responses of the participants will be included to illustrate the impact of the medium","tok_text":"unlock the potenti of videoconferenc \n i propos in thi paper to show , through a number of case studi , that videoconferenc is user-friendli , cost-effect , time-effect and life-enhanc for peopl of all age and abil and that it requir onli a creativ and imagin approach to unlock it potenti . i believ that these benefit need not , and should not , be restrict to the educ sector . my exampl will rang from simpl storytel , through access intern expert , profession develop and distanc learn in a varieti of form , to the use of videoconferenc for virtual meet and plan session . in some case , extract from the reaction and respons of the particip will be includ to illustr the impact of the medium","ordered_present_kp":[22,312,91,367],"keyphrases":["videoconferencing","case studies","benefits","education"],"prmu":["P","P","P","P"]} {"id":"1724","title":"A winning combination [wireless health care]","abstract":"Three years ago, the Institute of Medicine (IOM) reported that medical errors result in at least 44,000 deaths each year-more than deaths from highway accidents, breast cancer or AIDS. That report, and others which placed serious errors as high as 98,000 annually, served as a wake-up call for healthcare providers such as the CareGroup Healthcare System Inc., a Boston-area healthcare network that is the second largest integrated delivery system in the northeastern United States. With annual revenues of $1.2B, CareGroup provides primary care and specialty services to more than 1,000,000 patients. CareGroup combined wireless technology with the Web to create a provider order entry (POE) system designed to reduce the frequency of costly medical mistakes. The POE infrastructure includes InterSystems Corporation's CACHE database, Dell Computer C600 laptops and Cisco Systems' Aironet 350 wireless networks","tok_text":"a win combin [ wireless health care ] \n three year ago , the institut of medicin ( iom ) report that medic error result in at least 44,000 death each year-mor than death from highway accid , breast cancer or aid . that report , and other which place seriou error as high as 98,000 annual , serv as a wake-up call for healthcar provid such as the caregroup healthcar system inc. , a boston-area healthcar network that is the second largest integr deliveri system in the northeastern unit state . with annual revenu of $ 1.2b , caregroup provid primari care and specialti servic to more than 1,000,000 patient . caregroup combin wireless technolog with the web to creat a provid order entri ( poe ) system design to reduc the frequenc of costli medic mistak . 
the poe infrastructur includ intersystem corpor 's cach databas , dell comput c600 laptop and cisco system ' aironet 350 wireless network","ordered_present_kp":[346,394,15,101,670,824],"keyphrases":["wireless","medical errors","CareGroup Healthcare System","healthcare network","provider order entry","Dell Computer C600 laptops","InterSystems Corporation CACHE database","Cisco Systems Aironet 350 wireless networks"],"prmu":["P","P","P","P","P","P","R","R"]} {"id":"1761","title":"Superconvergence of discontinuous Galerkin method for nonstationary hyperbolic equation","abstract":"For the first order nonstationary hyperbolic equation taking the piecewise linear discontinuous Galerkin solver, we prove that under the uniform rectangular partition, such a discontinuous solver, after postprocessing, can have two and half approximative order which is half order higher than the optimal estimate by P. Lesaint and P. Raviart (1974) under the rectangular partition","tok_text":"superconverg of discontinu galerkin method for nonstationari hyperbol equat \n for the first order nonstationari hyperbol equat take the piecewis linear discontinu galerkin solver , we prove that under the uniform rectangular partit , such a discontinu solver , after postprocess , can have two and half approxim order which is half order higher than the optim estim by p. lesaint and p. raviart ( 1974 ) under the rectangular partit","ordered_present_kp":[0,47,136,213,303],"keyphrases":["superconvergence of discontinuous Galerkin method","nonstationary hyperbolic equation","piecewise linear discontinuous Galerkin solver","rectangular partition","approximative order"],"prmu":["P","P","P","P","P"]} {"id":"1681","title":"One and two facility network design revisited","abstract":"The one facility one commodity network design problem (OFOC) with nonnegative flow costs considers the problem of sending d units of flow from a source to a destination where arc capacity is purchased in batches of C units. The two facility problem (TFOC) is similar, but capacity can be purchased either in batches of C units or one unit. Flow costs are zero. These problems are known to be NP-hard. We describe an exact O(n\/sup 3\/3\/sup n\/) algorithm for these problems based on the repeated use of a bipartite matching algorithm. We also present a better lower bound of Omega (n\/sup 2k*\/) for an earlier Omega (n\/sup 2k\/) algorithm described in the literature where k = [d\/C] and k* = min{k, [(n 2)\/2]}. The matching algorithm is faster than this one for k >or= [(n - 2)\/2]. Finally, we provide another reformulation of the problem that is quasi integral. This property could be useful in designing a modified version of the simplex method to solve the problem using a sequence of pivots with integer extreme solutions, referred to as the integral simplex method in the literature","tok_text":"one and two facil network design revisit \n the one facil one commod network design problem ( ofoc ) with nonneg flow cost consid the problem of send d unit of flow from a sourc to a destin where arc capac is purchas in batch of c unit . the two facil problem ( tfoc ) is similar , but capac can be purchas either in batch of c unit or one unit . flow cost are zero . these problem are known to be np-hard . we describ an exact o(n \/ sup 3\/3 \/ sup n\/ ) algorithm for these problem base on the repeat use of a bipartit match algorithm . 
we also present a better lower bound of omega ( n \/ sup 2k*\/ ) for an earlier omega ( n \/ sup 2k\/ ) algorithm describ in the literatur where k = [ d \/ c ] and k * = min{k , [ ( n 2)\/2 ] } . the match algorithm is faster than thi one for k > or= [ ( n - 2)\/2 ] . final , we provid anoth reformul of the problem that is quasi integr . thi properti could be use in design a modifi version of the simplex method to solv the problem use a sequenc of pivot with integ extrem solut , refer to as the integr simplex method in the literatur","ordered_present_kp":[47,8,105,112,508,560,853,980,1028],"keyphrases":["two facility network design","one facility one commodity network design problem","nonnegative flow costs","flow costs","bipartite matching algorithm","lower bound","quasi integral","pivots","integral simplex method","NP-hard problems","exact algorithm"],"prmu":["P","P","P","P","P","P","P","P","P","R","R"]} {"id":"1538","title":"A heuristic approach to resource locations in broadband networks","abstract":"In broadband networks, such as ATM, the importance of dynamic migration of data resources is increasing because of its potential to improve performance especially for transaction processing. In environments with migratory data resources, it is necessary to have mechanisms to manage the locations of each data resource. In this paper, we present an algorithm that makes use of system state information and heuristics to manage locations of data resources in a distributed network. In the proposed algorithm, each site maintains information about state of other sites with respect to each data resource of the system and uses it to find: (1) a subset of sites likely to have the requested data resource; and (2) the site where the data resource is to be migrated from the current site. The proposed algorithm enhances its effectiveness by continuously updating system state information stored at each site. It focuses on reducing the overall average time delay needed by the transaction requests to locate and access the migratory data resources. We evaluated the performance of the proposed algorithm and also compared it with one of the existing location management algorithms, by simulation studies under several system parameters such as the frequency of requests generation, frequency of data resource migrations, network topology and scale of network. The experimental results show the effectiveness of the proposed algorithm in all cases","tok_text":"a heurist approach to resourc locat in broadband network \n in broadband network , such as atm , the import of dynam migrat of data resourc is increas becaus of it potenti to improv perform especi for transact process . in environ with migratori data resourc , it is necessari to have mechan to manag the locat of each data resourc . in thi paper , we present an algorithm that make use of system state inform and heurist to manag locat of data resourc in a distribut network . in the propos algorithm , each site maintain inform about state of other site with respect to each data resourc of the system and use it to find : ( 1 ) a subset of site like to have the request data resourc ; and ( 2 ) the site where the data resourc is to be migrat from the current site . the propos algorithm enhanc it effect by continu updat system state inform store at each site . it focus on reduc the overal averag time delay need by the transact request to locat and access the migratori data resourc . 
we evalu the perform of the propos algorithm and also compar it with one of the exist locat manag algorithm , by simul studi under sever system paramet such as the frequenc of request gener , frequenc of data resourc migrat , network topolog and scale of network . the experiment result show the effect of the propos algorithm in all case","ordered_present_kp":[39,90,22,2,457,1194,1216],"keyphrases":["heuristics","resource locations","broadband networks","ATM","distributed network","data resource migrations","network topology"],"prmu":["P","P","P","P","P","P","P"]} {"id":"1912","title":"A novel preterm respiratory mechanics active simulator to test the performances of neonatal pulmonary ventilators","abstract":"A patient active simulator is proposed which is capable of reproducing values of the parameters of pulmonary mechanics of healthy newborns and preterm pathological infants. The implemented prototype is able to: (a) let the operator choose the respiratory pattern, times of apnea, episodes of cough, sobs, etc., (b) continuously regulate and control the parameters characterizing the pulmonary system; and, finally, (c) reproduce the attempt of breathing of a preterm infant. Taking into account both the limitation due to the chosen application field and the preliminary autocalibration phase automatically carried out by the proposed device, accuracy and reliability on the order of 1% is estimated. The previously indicated value has to be considered satisfactory in light of the field of application and the small values of the simulated parameters. Finally, the achieved metrological characteristics allow the described neonatal simulator to be adopted as a reference device to test performances of neonatal ventilators and, more specifically, to measure the time elapsed between the occurrence of a potentially dangerous condition to the patient and the activation of the corresponding alarm of the tested ventilator","tok_text":"a novel preterm respiratori mechan activ simul to test the perform of neonat pulmonari ventil \n a patient activ simul is propos which is capabl of reproduc valu of the paramet of pulmonari mechan of healthi newborn and preterm patholog infant . the implement prototyp is abl to : ( a ) let the oper choos the respiratori pattern , time of apnea , episod of cough , sob , etc . , ( b ) continu regul and control the paramet character the pulmonari system ; and , final , ( c ) reproduc the attempt of breath of a preterm infant . take into account both the limit due to the chosen applic field and the preliminari autocalibr phase automat carri out by the propos devic , accuraci and reliabl on the order of 1 % is estim . the previous indic valu ha to be consid satisfactori in light of the field of applic and the small valu of the simul paramet . 
final , the achiev metrolog characterist allow the describ neonat simul to be adopt as a refer devic to test perform of neonat ventil and , more specif , to measur the time elaps between the occurr of a potenti danger condit to the patient and the activ of the correspond alarm of the test ventil","ordered_present_kp":[8,70,98,199,219,613,670,683],"keyphrases":["preterm respiratory mechanics active simulator","neonatal pulmonary ventilators","patient active simulator","healthy newborns","preterm pathological infants","autocalibration phase","accuracy","reliability","apnea times","respiratory diseases","ventilatory support","intensive care equipment","electronic unit","pneumatic\/mechanical unit","software control","double compartment model","artificial trachea","pressure transducer","variable clamp resistance","upper airway resistance","compliance"],"prmu":["P","P","P","P","P","P","P","P","R","M","U","U","U","M","M","U","U","U","U","U","U"]} {"id":"190","title":"On the design of gain-scheduled trajectory tracking controllers [AUV application]","abstract":"A new methodology is proposed for the design of trajectory tracking controllers for autonomous vehicles. The design technique builds on gain scheduling control theory. An application is made to the design of a trajectory tracking controller for a prototype autonomous underwater vehicle (AUV). The effectiveness and advantages of the new control laws derived are illustrated in simulation using a full set of non-linear equations of motion of the vehicle","tok_text":"on the design of gain-schedul trajectori track control [ auv applic ] \n a new methodolog is propos for the design of trajectori track control for autonom vehicl . the design techniqu build on gain schedul control theori . an applic is made to the design of a trajectori track control for a prototyp autonom underwat vehicl ( auv ) . the effect and advantag of the new control law deriv are illustr in simul use a full set of non-linear equat of motion of the vehicl","ordered_present_kp":[146,192,299,368],"keyphrases":["autonomous vehicles","gain scheduling control theory","autonomous underwater vehicle","control laws","gain-scheduled trajectory tracking controller design","nonlinear equations of motion"],"prmu":["P","P","P","P","R","M"]} {"id":"1639","title":"New hub gears up for algorithmic exchange","abstract":"Warwick University in the UK is on the up and up. Sometimes considered a typical 1960s, middle-of-the-road redbrick institution-not known for their distinction the 2001 UK Research Assessment Exercise (RAE) shows its research to be the fifth most highly-rated in the country, with outstanding standards in the sciences. This impressive performance has rightly given Warwick a certain amount of muscle, which it is flexing rather effectively, aided by a snappy approach to making things happen that leaves some older institutions standing. The result is a brand new Centre for Scientific Computing (CSC), launched within a couple of years of its initial conception","tok_text":"new hub gear up for algorithm exchang \n warwick univers in the uk is on the up and up . sometim consid a typic 1960 , middle-of-the-road redbrick institution-not known for their distinct the 2001 uk research assess exercis ( rae ) show it research to be the fifth most highly-r in the countri , with outstand standard in the scienc . 
thi impress perform ha rightli given warwick a certain amount of muscl , which it is flex rather effect , aid by a snappi approach to make thing happen that leav some older institut stand . the result is a brand new centr for scientif comput ( csc ) , launch within a coupl of year of it initi concept","ordered_present_kp":[],"keyphrases":["Warwick University Centre for Scientific Computing"],"prmu":["R"]} {"id":"1641","title":"Development through gaming","abstract":"Mainstream observers commonly underestimate the role of fringe activities in propelling science and technology. Well-known examples are how wars have fostered innovation in areas such as communications, cryptography, medicine and aerospace; and how erotica has been a major factor in pioneering visual media, from the first printed books to photography, cinematography, videotape, or the latest online video streaming. The article aims to be a sampler of a less controversial, but still often underrated, symbiosis between scientific computing and computing for leisure and entertainment","tok_text":"develop through game \n mainstream observ commonli underestim the role of fring activ in propel scienc and technolog . well-known exampl are how war have foster innov in area such as commun , cryptographi , medicin and aerospac ; and how erotica ha been a major factor in pioneer visual media , from the first print book to photographi , cinematographi , videotap , or the latest onlin video stream . the articl aim to be a sampler of a less controversi , but still often underr , symbiosi between scientif comput and comput for leisur and entertain","ordered_present_kp":[497,528,539],"keyphrases":["scientific computing","leisure","entertainment","computer games","graphics"],"prmu":["P","P","P","R","U"]} {"id":"1604","title":"Improving supply-chain performance by sharing advance demand information","abstract":"In this paper, we analyze how sharing advance demand information (ADI) can improve supply-chain performance. We consider two types of ADI, aggregated ADI (A-ADI) and detailed ADI (D-ADI). With A-ADI, customers share with manufacturers information about whether they will place an order for some product in the next time period, but do not share information about which product they will order and which of several potential manufacturers will receive the order. With D-ADI, customers additionally share information about which product they will order, but which manufacturer will receive the order remains uncertain. We develop and solve mathematical models of supply chains where ADI is shared. We derive exact expressions and closed-form approximations for expected costs, expected base-stock levels, and variations of the production quantities. We show that both the manufacturer and the customers benefit from sharing ADI, but that sharing ADI increases the bullwhip effect. We also show that under certain conditions it is optimal to collect ADI from either none or all of the customers. We study two supply chains in detail: a supply chain with an arbitrary number of products that have identical demand rates, and a supply chain with two products that have arbitrary demand rates. For these two supply chains, we analyze how the values of A-ADI and D-ADI depend on the characteristics of the supply chain and on the quality of the shared information, and we identify conditions under which sharing A-ADI and D-ADI can significantly reduce cost. 
Our results can be used by decision makers to analyze the cost savings that can be achieved by sharing ADI and help them to determine if sharing ADI is beneficial for their supply chains","tok_text":"improv supply-chain perform by share advanc demand inform \n in thi paper , we analyz how share advanc demand inform ( adi ) can improv supply-chain perform . we consid two type of adi , aggreg adi ( a-adi ) and detail adi ( d-adi ) . with a-adi , custom share with manufactur inform about whether they will place an order for some product in the next time period , but do not share inform about which product they will order and which of sever potenti manufactur will receiv the order . with d-adi , custom addit share inform about which product they will order , but which manufactur will receiv the order remain uncertain . we develop and solv mathemat model of suppli chain where adi is share . we deriv exact express and closed-form approxim for expect cost , expect base-stock level , and variat of the product quantiti . we show that both the manufactur and the custom benefit from share adi , but that share adi increas the bullwhip effect . we also show that under certain condit it is optim to collect adi from either none or all of the custom . we studi two suppli chain in detail : a suppli chain with an arbitrari number of product that have ident demand rate , and a suppli chain with two product that have arbitrari demand rate . for these two suppli chain , we analyz how the valu of a-adi and d-adi depend on the characterist of the suppli chain and on the qualiti of the share inform , and we identifi condit under which share a-adi and d-adi can significantli reduc cost . our result can be use by decis maker to analyz the cost save that can be achiev by share adi and help them to determin if share adi is benefici for their suppli chain","ordered_present_kp":[37,186,211,265,646,725,750,764,931,1154,1220,1516,1542],"keyphrases":["advance demand information","aggregated ADI","detailed ADI","manufacturing","mathematical models","closed-form approximations","expected costs","expected base-stock levels","bullwhip effect","identical demand rates","arbitrary demand rates","decision makers","cost savings","supply-chain performance improvement","information sharing","production quantity variations","arbitrary product number","shared information quality","forecasting"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R","R","R","U"]} {"id":"150","title":"Model checking games for branching time logics","abstract":"This paper defines and examines model checking games for the branching time temporal logic CTL*. The games employ a technique called focus which enriches sets by picking out one distinguished element. This is necessary to avoid ambiguities in the regeneration of temporal operators. The correctness of these games is proved, and optimizations are considered to obtain model checking games for important fragments of CTL*. A game based model checking algorithm that matches the known lower and upper complexity bounds is sketched","tok_text":"model check game for branch time logic \n thi paper defin and examin model check game for the branch time tempor logic ctl * . the game employ a techniqu call focu which enrich set by pick out one distinguish element . thi is necessari to avoid ambigu in the regener of tempor oper . the correct of these game is prove , and optim are consid to obtain model check game for import fragment of ctl * . 
a game base model check algorithm that match the known lower and upper complex bound is sketch","ordered_present_kp":[0,21,105,269,470],"keyphrases":["model checking games","branching time logics","temporal logic","temporal operators","complexity bounds"],"prmu":["P","P","P","P","P"]} {"id":"1540","title":"Adaptive thinning for bivariate scattered data","abstract":"This paper studies adaptive thinning strategies for approximating a large set of scattered data by piecewise linear functions over triangulated subsets. Our strategies depend on both the locations of the data points in the plane, and the values of the sampled function at these points - adaptive thinning. All our thinning strategies remove data points one by one, so as to minimize an estimate of the error that results by the removal of a point from the current set of points (this estimate is termed \"anticipated error\"). The thinning process generates subsets of \"most significant\" points, such that the piecewise linear interpolants over the Delaunay triangulations of these subsets approximate progressively the function values sampled at the original scattered points, and such that the approximation errors are small relative to the number of points in the subsets. We design various methods for computing the anticipated error at reasonable cost, and compare and test the performance of the methods. It is proved that for data sampled from a convex function, with the strategy of convex triangulation, the actual error is minimized by minimizing the best performing measure of anticipated error. It is also shown that for data sampled from certain quadratic polynomials, adaptive thinning is equivalent to thinning which depends only on the locations of the data points - nonadaptive thinning. Based on our numerical tests and comparisons, two practical adaptive thinning algorithms are proposed for thinning large data sets, one which is more accurate and another which is faster","tok_text":"adapt thin for bivari scatter data \n thi paper studi adapt thin strategi for approxim a larg set of scatter data by piecewis linear function over triangul subset . our strategi depend on both the locat of the data point in the plane , and the valu of the sampl function at these point - adapt thin . all our thin strategi remov data point one by one , so as to minim an estim of the error that result by the remov of a point from the current set of point ( thi estim is term \" anticip error \" ) . the thin process gener subset of \" most signific \" point , such that the piecewis linear interpol over the delaunay triangul of these subset approxim progress the function valu sampl at the origin scatter point , and such that the approxim error are small rel to the number of point in the subset . we design variou method for comput the anticip error at reason cost , and compar and test the perform of the method . it is prove that for data sampl from a convex function , with the strategi of convex triangul , the actual error is minim by minim the best perform measur of anticip error . it is also shown that for data sampl from certain quadrat polynomi , adapt thin is equival to thin which depend onli on the locat of the data point - nonadapt thin . 
base on our numer test and comparison , two practic adapt thin algorithm are propos for thin larg data set , one which is more accur and anoth which is faster","ordered_present_kp":[0,22,116,146,383,604,953],"keyphrases":["adaptive thinning","scattered data","piecewise linear functions","triangulated subsets","error","Delaunay triangulations","convex function"],"prmu":["P","P","P","P","P","P","P"]} {"id":"1505","title":"Modeling and simulating practices, a work method for work systems design","abstract":"Work systems involve people engaging in activities over time-not just with each other, but also with machines, tools, documents, and other artifacts. These activities often produce goods, services, or-as is the case in the work system described in this article-scientific data. Work systems and work practice evolve slowly over time. The integration and use of technology, the distribution and collocation of people, organizational roles and procedures, and the facilities where the work occurs largely determine this evolution","tok_text":"model and simul practic , a work method for work system design \n work system involv peopl engag in activ over time-not just with each other , but also with machin , tool , document , and other artifact . these activ often produc good , servic , or-a is the case in the work system describ in thi article-scientif data . work system and work practic evolv slowli over time . the integr and use of technolog , the distribut and colloc of peopl , organiz role and procedur , and the facil where the work occur larg determin thi evolut","ordered_present_kp":[],"keyphrases":["work practice simulation","work practice modeling","work system design method","complex system interactions","human activity","communication","collaboration","teamwork","tool usage","workspace usage","problem solving","learning behavior"],"prmu":["R","R","R","M","M","U","U","U","M","U","U","U"]} {"id":"1877","title":"Strong completeness of lattice-valued logic","abstract":"This paper shows strong completeness of the system L for lattice valued logic given by S. Titani (1999), in which she formulates a lattice-valued set theory by introducing the logical implication which represents the order relation on the lattice. Syntax and semantics concerned are described and strong completeness is proved","tok_text":"strong complet of lattice-valu logic \n thi paper show strong complet of the system l for lattic valu logic given by s. titani ( 1999 ) , in which she formul a lattice-valu set theori by introduc the logic implic which repres the order relat on the lattic . syntax and semant concern are describ and strong complet is prove","ordered_present_kp":[0,159,229,268,257,18],"keyphrases":["strong completeness","lattice-valued logic","lattice-valued set theory","order relation","syntax","semantics"],"prmu":["P","P","P","P","P","P"]} {"id":"1832","title":"A linear time algorithm for recognizing regular Boolean functions","abstract":"A positive (or monotone) Boolean function is regular if its variables are naturally ordered, left to fight, by decreasing strength, so that shifting the nonzero component of any true vector to the left always yields another true vector. This paper considers the problem of recognizing whether a positive function f is regular, where f is given by min T(f) (the set of all minimal true vectors of f). We propose a simple linear time (i.e., O(n|min T(f)|)-time) algorithm for it. This improves upon the previous algorithm by J.S. Provan and M.O. 
Ball (1988) which requires O(n\/sup 2\/|min T(f)|) time. As a corollary, we also present an O(n(n+|min T(f)|))-time algorithm for the recognition problem of 2-monotonic functions","tok_text":"a linear time algorithm for recogn regular boolean function \n a posit ( or monoton ) boolean function is regular if it variabl are natur order , left to fight , by decreas strength , so that shift the nonzero compon of ani true vector to the left alway yield anoth true vector . thi paper consid the problem of recogn whether a posit function f is regular , where f is given by min t(f ) ( the set of all minim true vector of f ) . we propos a simpl linear time ( i.e. , o(n|min t(f)|)-time ) algorithm for it . thi improv upon the previou algorithm by j.s. provan and m.o. ball ( 1988 ) which requir o(n \/ sup 2\/|min t(f)| ) time . as a corollari , we also present an o(n(n+|min t(f)|))-time algorithm for the recognit problem of 2-monoton function","ordered_present_kp":[2,35,201,223,328,731],"keyphrases":["linear time algorithm","regular Boolean functions","nonzero component","true vector","positive function","2-monotonic functions","monotone Boolean function"],"prmu":["P","P","P","P","P","P","R"]} {"id":"1719","title":"The UPS as network management tool","abstract":"Uninterrupted power supplies (UPS), or battery backup systems, once provided a relatively limited, although important, function-continual battery support to connected equipment in the event of a power failure. However, yesterday's \"battery in a box\" has evolved into a sophisticated network power management tool that can monitor and actively correct many of the problems that might plague a healthy network. This new breed of UPS system provides such features as automatic voltage regulation, generous runtimes and unattended system shutdown, and now also monitors and automatically restarts critical services and operating systems if they lock up or otherwise fail","tok_text":"the up as network manag tool \n uninterrupt power suppli ( up ) , or batteri backup system , onc provid a rel limit , although import , function-continu batteri support to connect equip in the event of a power failur . howev , yesterday 's \" batteri in a box \" ha evolv into a sophist network power manag tool that can monitor and activ correct mani of the problem that might plagu a healthi network . thi new breed of up system provid such featur as automat voltag regul , gener runtim and unattend system shutdown , and now also monitor and automat restart critic servic and oper system if they lock up or otherwis fail","ordered_present_kp":[31,284,490,450],"keyphrases":["uninterrupted power supplies","network power management","automatic voltage regulation","unattended system shutdown"],"prmu":["P","P","P","P"]} {"id":"174","title":"The BIOGENES system for knowledge-based bioprocess control","abstract":"The application of knowledge-based control systems in the area of biotechnological processes has become increasingly popular over the past decade. This paper outlines the structure of the advanced knowledge-based part of the BIOGENES Copyright control system for the control of bioprocesses such as the fed-batch Saccharomyces cerevisiae cultivation. First, a brief overview of all the tasks implemented in the knowledge-based level including process data classification, qualitative process state identification and supervisory process control is given. 
The procedures performing the on-line identification of metabolic states and supervisory process control (setpoint calculation and control strategy selection) are described in more detail. Finally, the performance of the system is discussed using results obtained from a number of experimental cultivation runs in a laboratory unit","tok_text":"the biogen system for knowledge-bas bioprocess control \n the applic of knowledge-bas control system in the area of biotechnolog process ha becom increasingli popular over the past decad . thi paper outlin the structur of the advanc knowledge-bas part of the biogen copyright control system for the control of bioprocess such as the fed-batch saccharomyc cerevisia cultiv . first , a brief overview of all the task implement in the knowledge-bas level includ process data classif , qualit process state identif and supervisori process control is given . the procedur perform the on-lin identif of metabol state and supervisori process control ( setpoint calcul and control strategi select ) are describ in more detail . final , the perform of the system is discuss use result obtain from a number of experiment cultiv run in a laboratori unit","ordered_present_kp":[4,22,115,332,458,481,514,596],"keyphrases":["BIOGENES system","knowledge-based bioprocess control","biotechnological processes","fed-batch Saccharomyces cerevisiae cultivation","process data classification","qualitative process state identification","supervisory process control","metabolic states","online identification","experiment"],"prmu":["P","P","P","P","P","P","P","P","M","U"]} {"id":"1564","title":"Asymptotic normality for the K\/sub phi \/-divergence goodness-of-fit tests","abstract":"In this paper for a wide class of goodness-of-fit statistics based K\/sub phi \/-divergences, the asymptotic normality is established under the assumption n\/m\/sub n\/ to a in (0, infinity ), where n denotes sample size and m\/sub n\/ the number of cells. This result is extended to contiguous alternatives to study asymptotic efficiency","tok_text":"asymptot normal for the k \/ sub phi \/-diverg goodness-of-fit test \n in thi paper for a wide class of goodness-of-fit statist base k \/ sub phi \/-diverg , the asymptot normal is establish under the assumpt n \/ m \/ sub n\/ to a in ( 0 , infin ) , where n denot sampl size and m \/ sub n\/ the number of cell . thi result is extend to contigu altern to studi asymptot effici","ordered_present_kp":[0,352,24],"keyphrases":["asymptotic normality","K\/sub phi \/-divergence goodness-of-fit tests","asymptotic efficiency"],"prmu":["P","P","P"]} {"id":"1521","title":"Optimal multi-degree reduction of Bezier curves with constraints of endpoints continuity","abstract":"Given a Bezier curve of degree n, the problem of optimal multi-degree reduction (degree reduction of more than one degree) by a Bezier curve of degree m (mor=0) orders can be preserved at two endpoints respectively. The method in the paper performs multi-degree reduction at one time and does not need stepwise computing. When applied to multi-degree reduction with endpoint continuity of any order, the MDR by L\/sub 2\/ obtains the best least squares approximation. Comparison with another method of multi-degree reduction (MDR by L\/sub infinity \/), which achieves the nearly best uniform approximation with respect to L\/sub infinity \/ norm, is also given. The approximate effect of the MDR by L\/sub 2\/ is better than that of the MDR by L\/sub infinity \/. 
Explicit approximate error analysis of the multi-degree reduction methods is presented","tok_text":"optim multi-degre reduct of bezier curv with constraint of endpoint continu \n given a bezier curv of degre n , the problem of optim multi-degre reduct ( degre reduct of more than one degre ) by a bezier curv of degre m ( m < n-1 ) with constraint of endpoint continu is investig . with respect to l \/ sub 2\/ norm , thi paper present an approxim method ( mdr by l \/ sub 2\/ ) that give an explicit solut to deal with it . the method ha good properti of endpoint interpol : continu of ani r , s ( r , s > or=0 ) order can be preserv at two endpoint respect . the method in the paper perform multi-degre reduct at one time and doe not need stepwis comput . when appli to multi-degre reduct with endpoint continu of ani order , the mdr by l \/ sub 2\/ obtain the best least squar approxim . comparison with anoth method of multi-degre reduct ( mdr by l \/ sub infin \/ ) , which achiev the nearli best uniform approxim with respect to l \/ sub infin \/ norm , is also given . the approxim effect of the mdr by l \/ sub 2\/ is better than that of the mdr by l \/ sub infin \/. explicit approxim error analysi of the multi-degre reduct method is present","ordered_present_kp":[0,28,336,387,451,761,893,1061],"keyphrases":["optimal multi-degree reduction","Bezier curves","approximate method","explicit solution","endpoint interpolation","least squares approximation","uniform approximation","explicit approximate error analysis","endpoint continuity constraints"],"prmu":["P","P","P","P","P","P","P","P","R"]} {"id":"1698","title":"Exact frequency-domain reconstruction for thermoacoustic tomography. I. Planar geometry","abstract":"We report an exact and fast Fourier-domain reconstruction algorithm for thermoacoustic tomography in a planar configuration assuming thermal confinement and constant acoustic speed. The effects of the finite size of the detector and the finite length of the excitation pulse are explicitly included in the reconstruction algorithm. The algorithm is numerically and experimentally verified. We also demonstrate that the blurring caused by the finite size of the detector surface is the primary limiting factor on the resolution and that it can be compensated for by deconvolution","tok_text":"exact frequency-domain reconstruct for thermoacoust tomographi . i. planar geometri \n we report an exact and fast fourier-domain reconstruct algorithm for thermoacoust tomographi in a planar configur assum thermal confin and constant acoust speed . the effect of the finit size of the detector and the finit length of the excit puls are explicitli includ in the reconstruct algorithm . the algorithm is numer and experiment verifi . 
we also demonstr that the blur caus by the finit size of the detector surfac is the primari limit factor on the resolut and that it can be compens for by deconvolut","ordered_present_kp":[0,184,206,225,459,517,587,322,129,39,68],"keyphrases":["exact frequency-domain reconstruction","thermoacoustic tomography","planar geometry","reconstruction algorithm","planar configuration","thermal confinement","constant acoustic speed","excitation pulse","blurring","primary limiting factor","deconvolution","medical diagnostic imaging","finite detector surface size","resolution limitation"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","U","R","R"]} {"id":"1665","title":"How airlines and airports recover from schedule perturbations: a survey","abstract":"The explosive growth in air traffic as well as the widespread adoption of Operations Research techniques in airline scheduling has given rise to tight flight schedules at major airports. An undesirable consequence of this is that a minor incident such as a delay in the arrival of a small number of flights can result in a chain reaction of events involving several flights and airports, causing disruption throughout the system. This paper reviews recent literature in the area of recovery from schedule disruptions. First we review how disturbances at a given airport could be handled, including the effects of runways and fixes. Then we study the papers on recovery from airline schedule perturbations, which involve adjustments in flight schedules, aircraft, and crew. The mathematical programming techniques used in ground holding are covered in some detail. We conclude the review with suggestions on how singular perturbation theory could play a role in analyzing disruptions to such highly sensitive schedules as those in the civil aviation industry","tok_text":"how airlin and airport recov from schedul perturb : a survey \n the explos growth in air traffic as well as the widespread adopt of oper research techniqu in airlin schedul ha given rise to tight flight schedul at major airport . an undesir consequ of thi is that a minor incid such as a delay in the arriv of a small number of flight can result in a chain reaction of event involv sever flight and airport , caus disrupt throughout the system . thi paper review recent literatur in the area of recoveri from schedul disrupt . first we review how disturb at a given airport could be handl , includ the effect of runway and fix . then we studi the paper on recoveri from airlin schedul perturb , which involv adjust in flight schedul , aircraft , and crew . the mathemat program techniqu use in ground hold are cover in some detail . 
we conclud the review with suggest on how singular perturb theori could play a role in analyz disrupt to such highli sensit schedul as those in the civil aviat industri","ordered_present_kp":[34,131,157,189,15,508,494,611,760,793,874,980],"keyphrases":["airports","schedule perturbation","operations research techniques","airline scheduling","tight flight schedules","recovery","schedule disruptions","runways","mathematical programming techniques","ground holding","singular perturbation theory","civil aviation industry","air traffic management","disturbance handling","flight schedule adjustments","aircraft adjustments","crew adjustments"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","M","R","R","R","R"]} {"id":"1620","title":"Rapid Cauer filter design employing new filter model","abstract":"The exact three-dimensional (3D) design of a coaxial Cauer filter employing a new filter model, a 3D field simulator and a circuit simulator, is demonstrated. Only a few iterations between the field simulator and the circuit simulator are necessary to meet a given specification","tok_text":"rapid cauer filter design employ new filter model \n the exact three-dimension ( 3d ) design of a coaxial cauer filter employ a new filter model , a 3d field simul and a circuit simul , is demonstr . onli a few iter between the field simul and the circuit simul are necessari to meet a given specif","ordered_present_kp":[6,12,37,151,169,210],"keyphrases":["Cauer filter","filter design","filter model","field simulator","circuit simulator","iterations","3D design","coaxial filter","bandpass filters"],"prmu":["P","P","P","P","P","P","R","R","M"]} {"id":"1599","title":"Evaluating the best main battle tank using fuzzy decision theory with linguistic criteria evaluation","abstract":"In this paper, experts' opinions are described in linguistic terms which can be expressed in trapezoidal (or triangular) fuzzy numbers. To make the consensus of the experts consistent, we utilize the fuzzy Delphi method to adjust the fuzzy rating of every expert to achieve the consensus condition. For the aggregate of many experts' opinions, we take the operation of fuzzy numbers to get the mean of fuzzy rating, x\/sub ij\/ and the mean of weight, w\/sub .j\/. In multi-alternatives and multi-attributes cases, the fuzzy decision matrix X=[x\/sub ij\/]\/sub m*n\/ is constructed by means of the fuzzy rating, x\/sub ij\/. Then, we can derive the aggregate fuzzy numbers by multiplying the fuzzy decision matrix with the corresponding fuzzy attribute weights. The final results become a problem of ranking fuzzy numbers. We also propose an easy procedure of using fuzzy numbers to rank aggregate fuzzy numbers A\/sub i\/. In this way, we can obtain the best selection for evaluating the system. For practical application, we propose an algorithm for evaluating the best main battle tank by fuzzy decision theory and comparing it with other methods","tok_text":"evalu the best main battl tank use fuzzi decis theori with linguist criteria evalu \n in thi paper , expert ' opinion are describ in linguist term which can be express in trapezoid ( or triangular ) fuzzi number . to make the consensu of the expert consist , we util the fuzzi delphi method to adjust the fuzzi rate of everi expert to achiev the consensu condit . for the aggreg of mani expert ' opinion , we take the oper of fuzzi number to get the mean of fuzzi rate , x \/ sub ij\/ and the mean of weight , w \/ sub .j\/. 
in multi-altern and multi-attribut case , the fuzzi decis matrix x=[x \/ sub ij\/]\/sub m*n\/ is construct by mean of the fuzzi rate , x \/ sub ij\/. then , we can deriv the aggreg fuzzi number by multipli the fuzzi decis matrix with the correspond fuzzi attribut weight . the final result becom a problem of rank fuzzi number . we also propos an easi procedur of use fuzzi number to rank aggreg fuzzi number a \/ sub i\/. in thi way , we can obtain the best select for evalu the system . for practic applic , we propos an algorithm for evalu the best main battl tank by fuzzi decis theori and compar it with other method","ordered_present_kp":[35,59,270,304,345,566,688,763],"keyphrases":["fuzzy decision theory","linguistic criteria evaluation","fuzzy Delphi method","fuzzy rating","consensus condition","fuzzy decision matrix","aggregate fuzzy numbers","fuzzy attribute weights","battle tank evaluation","fuzzy group decision making","multiple criteria problems","group decision making","subjective-objective backgrounds","trapezoidal fuzzy numbers","triangular fuzzy numbers","fuzzy number ranking"],"prmu":["P","P","P","P","P","P","P","P","R","M","M","M","U","R","R","R"]} {"id":"189","title":"Identification of linear parameter varying models","abstract":"We consider identification of a certain class of discrete-time nonlinear systems known as linear parameter varying system. We assume that inputs, outputs and the scheduling parameters are directly measured, and a form of the functional dependence of the system coefficients on the parameters is known. We show how this identification problem can be reduced to a linear regression, and provide compact formulae for the corresponding least mean square and recursive least-squares algorithms. We derive conditions on persistency of excitation in terms of the inputs and scheduling parameter trajectories when the functional dependence is of polynomial type. These conditions have a natural polynomial interpolation interpretation, and do not require the scheduling parameter trajectories to vary slowly. This method is illustrated with a simulation example using two different parameter trajectories","tok_text":"identif of linear paramet vari model \n we consid identif of a certain class of discrete-tim nonlinear system known as linear paramet vari system . we assum that input , output and the schedul paramet are directli measur , and a form of the function depend of the system coeffici on the paramet is known . we show how thi identif problem can be reduc to a linear regress , and provid compact formula for the correspond least mean squar and recurs least-squar algorithm . we deriv condit on persist of excit in term of the input and schedul paramet trajectori when the function depend is of polynomi type . these condit have a natur polynomi interpol interpret , and do not requir the schedul paramet trajectori to vari slowli . 
thi method is illustr with a simul exampl use two differ paramet trajectori","ordered_present_kp":[11,0,79,184,240,263,355,439,531,631,539],"keyphrases":["identification","linear parameter varying models","discrete-time nonlinear systems","scheduling parameters","functional dependence","system coefficients","linear regression","recursive least-squares algorithms","scheduling parameter trajectories","parameter trajectories","polynomial interpolation interpretation","least mean square algorithms","persistency of excitation conditions","time-varying systems"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R","R","M"]} {"id":"1778","title":"HeLIN pilot mentoring scheme","abstract":"The health care libraries unit coordinates, facilitates, and promotes continuing personal development for all staff in the Health Libraries and Information Network (HeLIN) of the Oxford Deanery (UK). It supports the development of a culture of lifelong learning and recognizes that CPD should help deliver organizational objectives, as well as enabling all staff to expand and fulfill their potential. A major emphasis for 2000 was to investigate ways of improving support for individual learning within the workplace. The group identified a need to build on existing informal support networks in order to provide additional learning opportunities and decided to investigate the feasibility of piloting a mentoring scheme. The objectives of the pilot were to increase understanding and knowledge of mentoring as a tool for CPD; to investigate existing mentoring schemes and their applicability for HeLIN; to develop a pilot mentoring scheme for HeLIN incorporating a program for accreditation of mentors; and to evaluate the scheme and disseminate the results. In order to identify current practice in this area, a literature review was carried out, and colleagues with an interest in or existing knowledge of mentoring schemes were contacted where possible. In the absence of clearly defined appraisal tools, all abstracts were read, and articles that met the following criteria were obtained and distributed to the group for review","tok_text":"helin pilot mentor scheme \n the health care librari unit coordin , facilit , and promot continu person develop for all staff in the health librari and inform network ( helin ) of the oxford deaneri ( uk ) . it support the develop of a cultur of lifelong learn and recogn that cpd should help deliv organiz object , as well as enabl all staff to expand and fulfil their potenti . a major emphasi for 2000 wa to investig way of improv support for individu learn within the workplac . the group identifi a need to build on exist inform support network in order to provid addit learn opportun and decid to investig the feasibl of pilot a mentor scheme . the object of the pilot were to increas understand and knowledg of mentor as a tool for cpd ; to investig exist mentor scheme and their applic for helin ; to develop a pilot mentor scheme for helin incorpor a program for accredit of mentor ; and to evalu the scheme and dissemin the result . in order to identifi current practic in thi area , a literatur review wa carri out , and colleagu with an interest in or exist knowledg of mentor scheme were contact where possibl . 
in the absenc of clearli defin apprais tool , all abstract were read , and articl that met the follow criteria were obtain and distribut to the group for review","ordered_present_kp":[0,32,88,119,132,245,526,871,995],"keyphrases":["HeLIN pilot mentoring scheme","health care libraries unit","continuing personal development","staff","Health Libraries and Information Network","lifelong learning","informal support networks","accreditation","literature review","midcareer librarians"],"prmu":["P","P","P","P","P","P","P","P","P","U"]} {"id":"1853","title":"CherylAnn Silberer: all about process [accounting technologist]","abstract":"Silberer's company, CompLete, is making a specialty of workflow process analysis","tok_text":"cherylann silber : all about process [ account technologist ] \n silber 's compani , complet , is make a specialti of workflow process analysi","ordered_present_kp":[84,117,39],"keyphrases":["accounting technologist","CompLete","workflow process analysis"],"prmu":["P","P","P"]} {"id":"1816","title":"Hamiltonian modelling and nonlinear disturbance attenuation control of TCSC for improving power system stability","abstract":"To tackle the obstacle of applying passivity-based control (PBC) to power systems, an affine non-linear system widely existing in power systems is formulated as a standard Hamiltonian system using a pre-feedback method. The port controlled Hamiltonian with dissipation (PCHD) model of a thyristor controlled serial compensator (TCSC) is then established corresponding with a revised Hamiltonian function. Furthermore, employing the modified Hamiltonian function directly as the storage function, a non-linear adaptive L\/sub 2\/ gain control method is proposed to solve the problem of L\/sub 2\/ gain disturbance attenuation for this Hamiltonian system with parametric perturbations. Finally, simulation results are presented to verify the validity of the proposed controller","tok_text":"hamiltonian model and nonlinear disturb attenu control of tcsc for improv power system stabil \n to tackl the obstacl of appli passivity-bas control ( pbc ) to power system , an affin non-linear system wide exist in power system is formul as a standard hamiltonian system use a pre-feedback method . the port control hamiltonian with dissip ( pchd ) model of a thyristor control serial compens ( tcsc ) is then establish correspond with a revis hamiltonian function . furthermor , employ the modifi hamiltonian function directli as the storag function , a non-linear adapt l \/ sub 2\/ gain control method is propos to solv the problem of l \/ sub 2\/ gain disturb attenu for thi hamiltonian system with parametr perturb . final , simul result are present to verifi the valid of the propos control","ordered_present_kp":[0,360,22,74,126,277,444,535,699],"keyphrases":["Hamiltonian modelling","nonlinear disturbance attenuation control","power system stability","passivity-based control","pre-feedback method","thyristor controlled serial compensator","Hamiltonian function","storage function","parametric perturbations","affine nonlinear system","port controlled Hamiltonian with dissipation model","nonlinear adaptive L\/sub 2\/ gain control method"],"prmu":["P","P","P","P","P","P","P","P","P","R","R","R"]} {"id":"1484","title":"Portfolio optimization and the random magnet problem","abstract":"Diversification of an investment into independently fluctuating assets reduces its risk. 
In reality, movements of assets are mutually correlated and therefore knowledge of cross-correlations among asset price movements are of great importance. Our results support the possibility that the problem of finding an investment in stocks which exposes invested funds to a minimum level of risk is analogous to the problem of finding the magnetization of a random magnet. The interactions for this \"random magnet problem\" are given by the cross-correlation matrix C of stock returns. We find that random matrix theory allows us to make an estimate for C which outperforms the standard estimate in terms of constructing an investment which carries a minimum level of risk","tok_text":"portfolio optim and the random magnet problem \n diversif of an invest into independ fluctuat asset reduc it risk . in realiti , movement of asset are mutual correl and therefor knowledg of cross-correl among asset price movement are of great import . our result support the possibl that the problem of find an invest in stock which expos invest fund to a minimum level of risk is analog to the problem of find the magnet of a random magnet . the interact for thi \" random magnet problem \" are given by the cross-correl matrix c of stock return . we find that random matrix theori allow us to make an estim for c which outperform the standard estim in term of construct an invest which carri a minimum level of risk","ordered_present_kp":[0,84,189,214,63,320,338,31,506,24],"keyphrases":["portfolio optimization","random magnet problem","magnetization","investment","fluctuating assets","cross-correlations","price movements","stocks","invested funds","cross-correlation matrix","minimum risk level","spin glasses"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","U"]} {"id":"1479","title":"Agreeing with automated diagnostic aids: a study of users' concurrence strategies","abstract":"Automated diagnostic aids that are less than perfectly reliable often produce unwarranted levels of disuse by operators. In the present study, users' tendencies to either agree or disagree with automated diagnostic aids were examined under conditions in which: (1) the aids were less than perfectly reliable but aided-diagnosis was still more accurate that unaided diagnosis; and (2) the system was completely opaque, affording users no additional information upon which to base a diagnosis. The results revealed that some users adopted a strategy of always agreeing with the aids, thereby maximizing the number of correct diagnoses made over several trials. Other users, however, adopted a probability-matching strategy in which agreement and disagreement rates matched the rate of correct and incorrect diagnoses of the aids. The probability-matching strategy, therefore, resulted in diagnostic accuracy scores that were lower than was maximally possible. Users who adopted the maximization strategy had higher self-ratings of problem-solving and decision-making skills, were more accurate in estimating aid reliabilities, and were more confident in their diagnosis on trials in which they agreed with the aids. The potential applications of these findings include the design of interface and training solutions that facilitate the adoption of the most effective concurrence strategies by users of automated diagnostic aids","tok_text":"agre with autom diagnost aid : a studi of user ' concurr strategi \n autom diagnost aid that are less than perfectli reliabl often produc unwarr level of disus by oper . 
in the present studi , user ' tendenc to either agre or disagre with autom diagnost aid were examin under condit in which : ( 1 ) the aid were less than perfectli reliabl but aided-diagnosi wa still more accur that unaid diagnosi ; and ( 2 ) the system wa complet opaqu , afford user no addit inform upon which to base a diagnosi . the result reveal that some user adopt a strategi of alway agre with the aid , therebi maxim the number of correct diagnos made over sever trial . other user , howev , adopt a probability-match strategi in which agreement and disagr rate match the rate of correct and incorrect diagnos of the aid . the probability-match strategi , therefor , result in diagnost accuraci score that were lower than wa maxim possibl . user who adopt the maxim strategi had higher self-rat of problem-solv and decision-mak skill , were more accur in estim aid reliabl , and were more confid in their diagnosi on trial in which they agre with the aid . the potenti applic of these find includ the design of interfac and train solut that facilit the adopt of the most effect concurr strategi by user of autom diagnost aid","ordered_present_kp":[10,677,727,588,975,116],"keyphrases":["automated diagnostic aids","reliability","maximization","probability-matching","disagreement rates","problem-solving","user concurrence strategy","complex systems","fault diagnosis"],"prmu":["P","P","P","P","P","P","R","M","M"]} {"id":"1785","title":"The effect of a male-oriented computer gaming culture on careers in the computer industry","abstract":"If careers in the computer industry were viewed, it would be evident that there is a conspicuous gender gap between the number of male and female employees. The same gap can be observed at the college level where males are dominating females as to those who pursue and obtain a degree in computer science. The question that this research paper intends to show is: why are males so dominant when it comes to computer related matters? The author has traced this question back to the computer game. Computer games are a fun medium and provide the means for an individual to become computer literate through the engagement of spatial learning and cognitive processing abilities. Since such games are marketed almost exclusively to males, females have a distinct disadvantage. Males are more computer literate through the playing of computer games, and are provided with an easy lead-in to more advanced utilization of computers such as programming. Females tend to be turned off due to the male stereotypes and marketing associated with games and thus begins the gender gap","tok_text":"the effect of a male-ori comput game cultur on career in the comput industri \n if career in the comput industri were view , it would be evid that there is a conspicu gender gap between the number of male and femal employe . the same gap can be observ at the colleg level where male are domin femal as to those who pursu and obtain a degre in comput scienc . the question that thi research paper intend to show is : whi are male so domin when it come to comput relat matter ? the author ha trace thi question back to the comput game . comput game are a fun medium and provid the mean for an individu to becom comput liter through the engag of spatial learn and cognit process abil . sinc such game are market almost exclus to male , femal have a distinct disadvantag . male are more comput liter through the play of comput game , and are provid with an easi lead-in to more advanc util of comput such as program . 
femal tend to be turn off due to the male stereotyp and market associ with game and thu begin the gender gap","ordered_present_kp":[47,61,166,25,208,642,660,950,701],"keyphrases":["computer games","careers","computer industry","gender gap","female employees","spatial learning","cognitive processing","marketing","male stereotypes","computer science degree","computer literacy"],"prmu":["P","P","P","P","P","P","P","P","P","R","M"]} {"id":"1893","title":"Closed-loop model set validation under a stochastic framework","abstract":"Deals with probabilistic model set validation. It is assumed that the dynamics of a multi-input multi-output (MIMO) plant is described by a model set with unstructured uncertainties, and identification experiments are performed in closed loop. A necessary and sufficient condition has been derived for the consistency of the model set with both the stabilizing controller and closed-loop frequency domain experimental data (FDED). In this condition, only the Euclidean norm of a complex vector is involved, and this complex vector depends linearly on both the disturbances and the measurement errors. Based on this condition, an analytic formula has been derived for the sample unfalsified probability (SUP) of the model set. Some of the asymptotic statistical properties of the SUP have also been briefly discussed. A numerical example is included to illustrate the efficiency of the suggested method in model set quality evaluation","tok_text":"closed-loop model set valid under a stochast framework \n deal with probabilist model set valid . it is assum that the dynam of a multi-input multi-output ( mimo ) plant is describ by a model set with unstructur uncertainti , and identif experi are perform in close loop . a necessari and suffici condit ha been deriv for the consist of the model set with both the stabil control and closed-loop frequenc domain experiment data ( fded ) . in thi condit , onli the euclidean norm of a complex vector is involv , and thi complex vector depend linearli on both the disturb and the measur error . base on thi condit , an analyt formula ha been deriv for the sampl unfalsifi probabl ( sup ) of the model set . some of the asymptot statist properti of the sup have also been briefli discuss . a numer exampl is includ to illustr the effici of the suggest method in model set qualiti evalu","ordered_present_kp":[0,36,67,200,274,364,383,463,483,716,200],"keyphrases":["closed-loop model set validation","stochastic framework","probabilistic model set validation","unstructured uncertainties","unstructured uncertainties","necessary and sufficient condition","stabilizing controller","closed-loop frequency domain experimental data","Euclidean norm","complex vector","asymptotic statistical properties","multi-input multi-output plant","MIMO plant","robust control","unstructured uncertainty"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R","R","M","P"]} {"id":"1700","title":"Computation of unmeasured third-generation VCT views from measured views","abstract":"We compute unmeasured cone-beam projections from projections measured by a third-generation helical volumetric computed tomography system by solving a characteristic problem for an ultrahyperbolic differential equation [John (1938)]. By working in the Fourier domain, we convert the second-order PDE into a family of first-order ordinary differential equations. 
A simple first-order integration is used to solve the ODES","tok_text":"comput of unmeasur third-gener vct view from measur view \n we comput unmeasur cone-beam project from project measur by a third-gener helic volumetr comput tomographi system by solv a characterist problem for an ultrahyperbol differenti equat [ john ( 1938 ) ] . by work in the fourier domain , we convert the second-ord pde into a famili of first-ord ordinari differenti equat . a simpl first-ord integr is use to solv the ode","ordered_present_kp":[45,78,211,277,341,381,121],"keyphrases":["measured views","cone-beam projections","third-generation helical volumetric computed tomography system","ultrahyperbolic differential equation","Fourier domain","first-order ordinary differential equations","simple first-order integration","unmeasured third-generation VCT views computation","characteristic problem solution","medical diagnostic imaging","range conditions"],"prmu":["P","P","P","P","P","P","P","R","M","U","U"]} {"id":"1745","title":"Approximate relaxed descent method for optimal control problems","abstract":"We consider an optimal control problem for systems governed by ordinary differential equations with control constraints. Since no convexity assumptions are made on the data, the problem is reformulated in relaxed form. The relaxed state equation is discretized by the implicit trapezoidal scheme and the relaxed controls are approximated by piecewise constant relaxed controls. We then propose a combined descent and discretization method that generates sequences of discrete relaxed controls and progressively refines the discretization. Since here the adjoint of the discrete state equation is not defined, we use, at each iteration, an approximate derivative of the cost functional defined by discretizing the continuous adjoint equation and the integral involved by appropriate trapezoidal schemes. It is proved that accumulation points of sequences constructed by this method satisfy the strong relaxed necessary conditions for optimality for the continuous problem. Finally, the computed relaxed controls can be easily approximated by piecewise constant classical controls","tok_text":"approxim relax descent method for optim control problem \n we consid an optim control problem for system govern by ordinari differenti equat with control constraint . sinc no convex assumpt are made on the data , the problem is reformul in relax form . the relax state equat is discret by the implicit trapezoid scheme and the relax control are approxim by piecewis constant relax control . we then propos a combin descent and discret method that gener sequenc of discret relax control and progress refin the discret . sinc here the adjoint of the discret state equat is not defin , we use , at each iter , an approxim deriv of the cost function defin by discret the continu adjoint equat and the integr involv by appropri trapezoid scheme . it is prove that accumul point of sequenc construct by thi method satisfi the strong relax necessari condit for optim for the continu problem . 
final , the comput relax control can be easili approxim by piecewis constant classic control","ordered_present_kp":[0,34,114,292,356,547,301],"keyphrases":["approximate relaxed descent method","optimal control problems","ordinary differential equations","implicit trapezoidal scheme","trapezoidal schemes","piecewise constant relaxed controls","discrete state equation","relaxed state equation discretization","relaxed control approximation","discrete relaxed control sequences","discretization refinement","cost functional approximate derivative"],"prmu":["P","P","P","P","P","P","P","R","R","R","R","R"]} {"id":"1658","title":"Chaos theory as a framework for studying information systems","abstract":"This paper introduces chaos theory as a means of studying information systems. It argues that chaos theory, combined with new techniques for discovering patterns in complex quantitative and qualitative evidence, offers a potentially more substantive approach to understand the nature of information systems in a variety of contexts. The paper introduces chaos theory concepts by way of an illustrative research design","tok_text":"chao theori as a framework for studi inform system \n thi paper introduc chao theori as a mean of studi inform system . it argu that chao theori , combin with new techniqu for discov pattern in complex quantit and qualit evid , offer a potenti more substant approach to understand the natur of inform system in a varieti of context . the paper introduc chao theori concept by way of an illustr research design","ordered_present_kp":[0,37,213],"keyphrases":["chaos theory","information systems","qualitative evidence","pattern discovery","complex quantitative evidence"],"prmu":["P","P","P","M","R"]} {"id":"149","title":"Extending Kamp's theorem to model time granularity","abstract":"In this paper, a generalization of Kamp's theorem relative to the functional completeness of the until operator is proved. Such a generalization consists in showing the functional completeness of more expressive temporal operators with respect to the extension of the first-order theory of linear orders MFO[<] with an extra binary relational symbol. The result is motivated by the search of a modal language capable of expressing properties and operators suitable to model time granularity in omega -layered temporal structures","tok_text":"extend kamp 's theorem to model time granular \n in thi paper , a gener of kamp 's theorem rel to the function complet of the until oper is prove . such a gener consist in show the function complet of more express tempor oper with respect to the extens of the first-ord theori of linear order mfo [ < ] with an extra binari relat symbol . the result is motiv by the search of a modal languag capabl of express properti and oper suitabl to model time granular in omega -layer tempor structur","ordered_present_kp":[7,101,125,213,259,279,316,461,26],"keyphrases":["Kamp's theorem","model time granularity","functional completeness","until operator","temporal operators","first-order theory","linear orders","binary relational symbol","omega -layered temporal structures"],"prmu":["P","P","P","P","P","P","P","P","P"]} {"id":"1559","title":"A comparison theorem for the iterative method with the preconditioner (I + S\/sub max\/)","abstract":"A.D. Gunawardena et al. (1991) have reported the modified Gauss-Seidel method with a preconditioner (I + S). In this article, we propose to use a preconditioner (I + S\/sub max\/) instead of (I + S). 
Here, S\/sub max\/ is constructed by only the largest element at each row of the upper triangular part of A. By using the lemma established by M. Neumann and R.J. Plemmons (1987), we get the comparison theorem for the proposed method. Simple numerical examples are also given","tok_text":"a comparison theorem for the iter method with the precondition ( i + s \/ sub max\/ ) \n a.d. gunawardena et al . ( 1991 ) have report the modifi gauss-seidel method with a precondition ( i + s ) . in thi articl , we propos to use a precondition ( i + s \/ sub max\/ ) instead of ( i + s ) . here , s \/ sub max\/ is construct by onli the largest element at each row of the upper triangular part of a. by use the lemma establish by m. neumann and r.j. plemmon ( 1987 ) , we get the comparison theorem for the propos method . simpl numer exampl are also given","ordered_present_kp":[29,50,136,2],"keyphrases":["comparison theorem","iterative method","preconditioner","modified Gauss-Seidel method"],"prmu":["P","P","P","P"]} {"id":"1781","title":"Making it to the major leagues: career movement between library and archival professions and from small college to large university libraries","abstract":"Issues of career movement and change are examined between library and archival fields and from small colleges to large universities. Issues examined include professional education and training, initial career-planning and placement, continuing education, scouting and mentoring, job market conditions, work experience and personal skills, professional involvement, and professional association self-interest. This examination leads to five observations: 1. It is easier, in terms of career transitions, for a librarian to become an archivist than it is for an archivist to become a librarian; 2. The progression from a small college venue to a large research university is very manageable with the proper planning and experience; 3. At least three of the career elements-professional education, career-planning, and professional association self-interest-in their best moments provide a foundation that enables a future consideration of change between institutional types and professional areas and in their worst moments conspire against the midcareer professional in terms of change; 4. The elements of scouting, continuing education, work experience, and professional involvement offer the greatest assistance in career transitions; 5. The job market is the wildcard that either stymies or stimulates occupational development","tok_text":"make it to the major leagu : career movement between librari and archiv profess and from small colleg to larg univers librari \n issu of career movement and chang are examin between librari and archiv field and from small colleg to larg univers . issu examin includ profession educ and train , initi career-plan and placement , continu educ , scout and mentor , job market condit , work experi and person skill , profession involv , and profession associ self-interest . thi examin lead to five observ : 1 . it is easier , in term of career transit , for a librarian to becom an archivist than it is for an archivist to becom a librarian ; 2 . the progress from a small colleg venu to a larg research univers is veri manag with the proper plan and experi ; 3 . 
at least three of the career elements-profession educ , career-plan , and profession associ self-interest-in their best moment provid a foundat that enabl a futur consider of chang between institut type and profession area and in their worst moment conspir against the midcar profession in term of chang ; 4 . the element of scout , continu educ , work experi , and profession involv offer the greatest assist in career transit ; 5 . the job market is the wildcard that either stymi or stimul occup develop","ordered_present_kp":[29,65,105,265,285,327,361,381,397,556,1253,1029],"keyphrases":["career movement","archival profession","large university libraries","professional education","training","continuing education","job market","work experience","personal skills","librarian","midcareer","occupational development","library profession","small college library"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R","R"]} {"id":"1739","title":"Application of normal possibility decision rule to silence","abstract":"The paper presents the way of combining two decision problems concerning a single (or a common) dimension, so that an effective fuzzy decision rule can be obtained. Normality of the possibility distribution is assumed, leading to possibility of fusing the respective functions related to the two decision problems and their characteristics (decisions, states of nature, utility functions, etc.). The approach proposed can be applied in cases when the statement of the problem requires making of more refined distinctions rather than considering simply a bi-criterion or bi-utility two-decision problem","tok_text":"applic of normal possibl decis rule to silenc \n the paper present the way of combin two decis problem concern a singl ( or a common ) dimens , so that an effect fuzzi decis rule can be obtain . normal of the possibl distribut is assum , lead to possibl of fuse the respect function relat to the two decis problem and their characterist ( decis , state of natur , util function , etc . ) . the approach propos can be appli in case when the statement of the problem requir make of more refin distinct rather than consid simpli a bi-criterion or bi-util two-decis problem","ordered_present_kp":[10,39,88],"keyphrases":["normal possibility decision rule","silence","decision problems","conflicting objectives","conflicting utilities","cool head","warm heart","two-dimensional fuzzy events"],"prmu":["P","P","P","U","M","U","U","M"]} {"id":"1812","title":"Computing the frequency response of systems affinely depending on uncertain parameters","abstract":"The computation of the frequency response of systems depending affinely on uncertain parameters can be reduced to that of all its one-dimensional edge plants while the image of such an edge plant at a fixed frequency is an arc or a line segment in the complex plane. Based on this conclusion, four computational formulas of the maximal and minimal (maxi-mini) magnitudes and phases of an edge plant at a fixed frequency are given. The formulas, besides sharing a simpler form of expression, concretely display how the extrema of the frequency response of the edge plant relate to the typical characteristics of the arc and line segment such as the centre, radius and tangent points of the arc, the distance from the origin to the line segment etc. 
The direct application of the results is to compute the Bode-, Nichols- and Nyquist-plot collections of the systems which are needed in robustness analysis and design","tok_text":"comput the frequenc respons of system affin depend on uncertain paramet \n the comput of the frequenc respons of system depend affin on uncertain paramet can be reduc to that of all it one-dimension edg plant while the imag of such an edg plant at a fix frequenc is an arc or a line segment in the complex plane . base on thi conclus , four comput formula of the maxim and minim ( maxi-mini ) magnitud and phase of an edg plant at a fix frequenc are given . the formula , besid share a simpler form of express , concret display how the extrema of the frequenc respons of the edg plant relat to the typic characterist of the arc and line segment such as the centr , radiu and tangent point of the arc , the distanc from the origin to the line segment etc . the direct applic of the result is to comput the bode- , nichols- and nyquist-plot collect of the system which are need in robust analysi and design","ordered_present_kp":[11,54,184,268,277,825,878],"keyphrases":["frequency response","uncertain parameters","one-dimensional edge plants","arc","line segment","Nyquist-plot","robustness analysis","affine systems","Bode-plot","Nichols-plot","robustness design","frequency-domain design methods"],"prmu":["P","P","P","P","P","P","P","R","U","U","R","M"]} {"id":"1480","title":"Formal verification of human-automation interaction","abstract":"This paper discusses a formal and rigorous approach to the analysis of operator interaction with machines. It addresses the acute problem of detecting design errors in human-machine interaction and focuses on verifying the correctness of the interaction in complex and automated control systems. The paper describes a systematic methodology for evaluating whether the interface provides the necessary information about the machine to enable the operator to perform a specified task successfully and unambiguously. It also addresses the adequacy of information provided to the user via training materials (e.g., user manual) about the machine's behavior. The essentials of the methodology, which can be automated and applied to the verification of large systems, are illustrated by several examples and through a case study of pilot interaction with an autopilot aboard a modern commercial aircraft. The expected application of this methodology is an augmentation and enhancement, by formal verification, of human-automation interfaces","tok_text":"formal verif of human-autom interact \n thi paper discuss a formal and rigor approach to the analysi of oper interact with machin . it address the acut problem of detect design error in human-machin interact and focus on verifi the correct of the interact in complex and autom control system . the paper describ a systemat methodolog for evalu whether the interfac provid the necessari inform about the machin to enabl the oper to perform a specifi task success and unambigu . it also address the adequaci of inform provid to the user via train materi ( e.g. , user manual ) about the machin 's behavior . the essenti of the methodolog , which can be autom and appli to the verif of larg system , are illustr by sever exampl and through a case studi of pilot interact with an autopilot aboard a modern commerci aircraft . 
the expect applic of thi methodolog is an augment and enhanc , by formal verif , of human-autom interfac","ordered_present_kp":[0,16,270,775,801],"keyphrases":["formal verification","human-automation interaction","automated control systems","autopilot","commercial aircraft","man-machine interaction","user interface"],"prmu":["P","P","P","P","P","M","R"]} {"id":"1661","title":"The road to perpetual progress [retail inventory management]","abstract":"With annual revenues increasing 17.0% to 20.0% consistently over the last three years and more than 2,500 new stores opened from 1998 through 2001, Dollar General is on the fast track. However, the road to riches could have easily become the road to ruin had the retailer not exerted control over its inventory management","tok_text":"the road to perpetu progress [ retail inventori manag ] \n with annual revenu increas 17.0 % to 20.0 % consist over the last three year and more than 2,500 new store open from 1998 through 2001 , dollar gener is on the fast track . howev , the road to rich could have easili becom the road to ruin had the retail not exert control over it inventori manag","ordered_present_kp":[195,31,38],"keyphrases":["retailer","inventory management","Dollar General"],"prmu":["P","P","P"]} {"id":"1624","title":"Genetic algorithm for input\/output selection in MIMO systems based on controllability and observability indices","abstract":"A time domain optimisation algorithm using a genetic algorithm in conjunction with a linear search scheme has been developed to find the smallest or near-smallest subset of inputs and outputs to control a multi-input-multi-output system. Experimental results have shown that this proposed algorithm has a very fast convergence rate and high computation efficiency","tok_text":"genet algorithm for input \/ output select in mimo system base on control and observ indic \n a time domain optimis algorithm use a genet algorithm in conjunct with a linear search scheme ha been develop to find the smallest or near-smallest subset of input and output to control a multi-input-multi-output system . experiment result have shown that thi propos algorithm ha a veri fast converg rate and high comput effici","ordered_present_kp":[0,20,45,77,94,165,226,280,374,401],"keyphrases":["genetic algorithm","input\/output selection","MIMO systems","observability indices","time domain optimisation algorithm","linear search scheme","near-smallest subset","multi-input-multi-output system","very fast convergence","high computation efficiency","controllability indices","smallest subset","multivariable control systems"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","R","M"]} {"id":"170","title":"The impact of the product mix on the value of flexibility","abstract":"Product-mix flexibility is one of the major types of manufacturing flexibility, referring to the ability to produce a broad range of products or variants with presumed low changeover costs. The value of such a capability is important to establish for an industrial firm in order to ensure that the flexibility provided will be at the right level and used profitably rather than in excess of market requirements and consequently costly. We use option-pricing theory to analyse the impact of various product-mix issues on the value of flexibility. The real options model we use incorporates multiple products, capacity constraints as well as set-up costs. 
The issues treated here include the number of products, demand variability, correlation between products, and the relative demand distribution within the product mix. Thus, we are interested in the nature of the input data to analyse its effect on the value of flexibility. We also check the impact at different capacity levels. The results suggest that the value of flexibility (i) increases with an increasing number of products, (ii) decreases with increasing volatility of product demand, (iii) decreases the more positively correlated the demand is, and (iv) reduces for marginal capacity with increasing levels of capacity. Of these, the impact of positively correlated demand seems to be a major issue. However, the joint impact of the number of products and demand correlation showed some non-intuitive results","tok_text":"the impact of the product mix on the valu of flexibl \n product-mix flexibl is one of the major type of manufactur flexibl , refer to the abil to produc a broad rang of product or variant with presum low changeov cost . the valu of such a capabl is import to establish for an industri firm in order to ensur that the flexibl provid will be at the right level and use profit rather than in excess of market requir and consequ costli . we use option-pr theori to analys the impact of variou product-mix issu on the valu of flexibl . the real option model we use incorpor multipl product , capac constraint as well as set-up cost . the issu treat here includ the number of product , demand variabl , correl between product , and the rel demand distribut within the product mix . thu , we are interest in the natur of the input data to analys it effect on the valu of flexibl . we also check the impact at differ capac level . the result suggest that the valu of flexibl ( i ) increas with an increas number of product , ( ii ) decreas with increas volatil of product demand , ( iii ) decreas the more posit correl the demand is , and ( iv ) reduc for margin capac with increas level of capac . of these , the impact of posit correl demand seem to be a major issu . howev , the joint impact of the number of product and demand correl show some non-intuit result","ordered_present_kp":[55,103,199,275,440,534,568,586,614,679,729,1147,1215,1315],"keyphrases":["product-mix flexibility","manufacturing flexibility","low changeover costs","industrial firm","option-pricing theory","real options model","multiple products","capacity constraints","set-up costs","demand variability","relative demand distribution","marginal capacity","positively correlated demand","demand correlation","flexible manufacturing","product correlation","product demand volatility","capital budgeting"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R","U"]} {"id":"1560","title":"Determinantal solutions of solvable chaotic systems","abstract":"It is shown that two solvable chaotic systems, the arithmetic-harmonic mean (ARM) algorithm and the Ulam-von Neumann (UvN) map, have determinantal solutions. An additional formula for certain determinants and Riccati difference equations play a key role in both cases. Two infinite hierarchies of solvable chaotic systems are presented which have determinantal solutions","tok_text":"determinant solut of solvabl chaotic system \n it is shown that two solvabl chaotic system , the arithmetic-harmon mean ( arm ) algorithm and the ulam-von neumann ( uvn ) map , have determinant solut . an addit formula for certain determin and riccati differ equat play a key role in both case . 
two infinit hierarchi of solvabl chaotic system are present which have determinant solut","ordered_present_kp":[0,21,0,243],"keyphrases":["determinantal solutions","determinants","solvable chaotic systems","Riccati difference equations","arithmetic-harmonic mean algorithm","Ulam-von Neumann map","Chebyshev polynomial"],"prmu":["P","P","P","P","R","R","U"]} {"id":"1525","title":"Dependence graphs: dependence within and between groups","abstract":"This paper applies the two-party dependence theory (Castelfranchi, Cesta and Miceli, 1992, in Y. Demazeau and E. Werner (Eds.) Decentralized AI-3, Elsevier, North Holland) to modelling multiagent and group dependence. These have theoretical potentialities for the study of emerging groups and collective structures, and more generally for understanding social and organisational complexity, and practical utility for both social-organisational and agent systems purposes. In the paper, the dependence theory is extended to describe multiagent links, with a special reference to group and collective phenomena, and is proposed as a framework for the study of emerging social structures, such as groups and collectives. In order to do so, we propose to extend the notion of dependence networks (applied to a single agent) to dependence graphs (applied to an agency). In its present version, the dependence theory is argued to provide (a) a theoretical instrument for the study of social complexity, and (b) a computational system for managing the negotiation process in competitive contexts and for monitoring complexity in organisational and other cooperative contexts","tok_text":"depend graph : depend within and between group \n thi paper appli the two-parti depend theori ( castelfranchi , cesta and mice , 1992 , in y. demazeau and e. werner ( ed . ) decentr ai-3 , elsevi , north holland ) to model multiag and group depend . these have theoret potenti for the studi of emerg group and collect structur , and more gener for understand social and organis complex , and practic util for both social-organis and agent system purpos . in the paper , the depend theori is extend to describ multiag link , with a special refer to group and collect phenomena , and is propos as a framework for the studi of emerg social structur , such as group and collect . in order to do so , we propos to extend the notion of depend network ( appli to a singl agent ) to depend graph ( appli to an agenc ) . in it present version , the depend theori is argu to provid ( a ) a theoret instrument for the studi of social complex , and ( b ) a comput system for manag the negoti process in competit context and for monitor complex in organis and other cooper context","ordered_present_kp":[0,234,69,293,309,369,915,432,729],"keyphrases":["dependence graphs","two-party dependence theory","group dependence","emerging groups","collective structures","organisational complexity","agent systems","dependence networks","social complexity","multiagent dependence","multiagent systems"],"prmu":["P","P","P","P","P","P","P","P","P","R","R"]} {"id":"1518","title":"Explicit matrix representation for NURBS curves and surfaces","abstract":"The matrix forms for curves and surfaces were largely promoted in CAD\/CAM. In this paper we have presented two matrix representation formulations for arbitrary degree NURBS curves and surfaces explicitly other than recursively. The two approaches are derived from the computation of divided difference and the Marsden identity respectively. 
The explicit coefficient matrix of B-spline with equally spaced knot and Bezier curves and surfaces can be obtained by these formulae. The coefficient formulae and the coefficient matrix formulae developed in this paper express non-uniform B-spline functions of arbitrary degree in explicit polynomial and matrix forms.. They are useful for the evaluation and the conversion of NURBS curves and surfaces, in CAD\/CAM systems","tok_text":"explicit matrix represent for nurb curv and surfac \n the matrix form for curv and surfac were larg promot in cad \/ cam . in thi paper we have present two matrix represent formul for arbitrari degre nurb curv and surfac explicitli other than recurs . the two approach are deriv from the comput of divid differ and the marsden ident respect . the explicit coeffici matrix of b-spline with equal space knot and bezier curv and surfac can be obtain by these formula . the coeffici formula and the coeffici matrix formula develop in thi paper express non-uniform b-spline function of arbitrari degre in explicit polynomi and matrix form .. they are use for the evalu and the convers of nurb curv and surfac , in cad \/ cam system","ordered_present_kp":[0,30,109,154,296,317,345,373,387,408,468,493],"keyphrases":["explicit matrix representation","NURBS curves","CAD\/CAM","matrix representation formulations","divided difference","Marsden identity","explicit coefficient matrix","B-spline","equally spaced knot","Bezier curves","coefficient formulae","coefficient matrix formulae","NURBS surfaces","Bezier surfaces","nonuniform B-spline functions","explicit polynomial forms","explicit matrix forms"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R","R","M","R","R"]} {"id":"1619","title":"Rate allocation for video transmission over lossy correlated networks","abstract":"A novel rate allocation algorithm for video transmission over lossy networks subject to bursty packet losses is presented. A Gilbert-Elliot model is used at the encoder to drive the selection of coding parameters. Experimental results using the H.26L test model show a significant performance improvement with respect to the assumption of independent packet losses","tok_text":"rate alloc for video transmiss over lossi correl network \n a novel rate alloc algorithm for video transmiss over lossi network subject to bursti packet loss is present . a gilbert-elliot model is use at the encod to drive the select of code paramet . experiment result use the h.26l test model show a signific perform improv with respect to the assumpt of independ packet loss","ordered_present_kp":[67,15,36,138,172,236,277],"keyphrases":["video transmission","lossy correlated networks","rate allocation algorithm","bursty packet losses","Gilbert-Elliot model","coding parameters","H.26L test model","video coding"],"prmu":["P","P","P","P","P","P","P","R"]} {"id":"1704","title":"Statistical analysis of nonlinearly reconstructed near-infrared tomographic images. I. Theory and simulations","abstract":"Near-infrared (NIR) diffuse tomography is an emerging method for imaging the interior of tissues to quantify concentrations of hemoglobin and exogenous chromophores noninvasively in vivo. It often exploits an optical diffusion model-based image reconstruction algorithm to estimate spatial property values from measurements of the light flux at the surface of the tissue. In this study, mean-squared error (MSE) over the image is used to evaluate methods for regularizing the ill-posed inverse image reconstruction problem in NIR tomography. 
Estimates of image bias and image standard deviation were calculated based upon 100 repeated reconstructions of a test image with randomly distributed noise added to the light flux measurements. It was observed that the bias error dominates at high regularization parameter values while variance dominates as the algorithm is allowed to approach the optimal solution. This optimum does not necessarily correspond to the minimum projection error solution, but typically requires further iteration with a decreasing regularization parameter to reach the lowest image error. Increasing measurement noise causes a need to constrain the minimum regularization parameter to higher values in order to achieve a minimum in the overall image MSE","tok_text":"statist analysi of nonlinearli reconstruct near-infrar tomograph imag . i. theori and simul \n near-infrar ( nir ) diffus tomographi is an emerg method for imag the interior of tissu to quantifi concentr of hemoglobin and exogen chromophor noninvas in vivo . it often exploit an optic diffus model-bas imag reconstruct algorithm to estim spatial properti valu from measur of the light flux at the surfac of the tissu . in thi studi , mean-squar error ( mse ) over the imag is use to evalu method for regular the ill-pos invers imag reconstruct problem in nir tomographi . estim of imag bia and imag standard deviat were calcul base upon 100 repeat reconstruct of a test imag with randomli distribut nois ad to the light flux measur . it wa observ that the bia error domin at high regular paramet valu while varianc domin as the algorithm is allow to approach the optim solut . thi optimum doe not necessarili correspond to the minimum project error solut , but typic requir further iter with a decreas regular paramet to reach the lowest imag error . increas measur nois caus a need to constrain the minimum regular paramet to higher valu in order to achiev a minimum in the overal imag mse","ordered_present_kp":[206,278,993,1030,755,862,378,433,664,679],"keyphrases":["hemoglobin","optical diffusion model-based image reconstruction algorithm","light flux","mean-squared error","test image","randomly distributed noise","bias error","optimal solution","decreasing regularization parameter","lowest image error","medical diagnostic imaging","oxygen saturation","photon migration","minimum regularization parameter constraint","ill-posed inverse image reconstruction problem regularization","spatial property values estimation","O\/sub 2\/"],"prmu":["P","P","P","P","P","P","P","P","P","P","M","U","U","M","R","R","U"]} {"id":"1741","title":"The top cycle and uncovered solutions for weak tournaments","abstract":"We study axiomatic properties of the top cycle and uncovered solutions for weak tournaments. Subsequently, we establish its connection with the rational choice theory","tok_text":"the top cycl and uncov solut for weak tournament \n we studi axiomat properti of the top cycl and uncov solut for weak tournament . subsequ , we establish it connect with the ration choic theori","ordered_present_kp":[4,17,33,60,174],"keyphrases":["top cycle","uncovered solutions","weak tournaments","axiomatic properties","rational choice theory"],"prmu":["P","P","P","P","P"]} {"id":"1897","title":"User-appropriate tyre-modelling for vehicle dynamics in standard and limit situations","abstract":"When modelling vehicles for the vehicle dynamic simulation, special attention must be paid to the modelling of tyre forces and -torques, according to their dominant influence on the results. 
This task is not only about sufficiently exact representation of the effective forces but also about user-friendly and practical relevant applicability, especially when the experimental tyre-input-data is incomplete or missing. This text firstly describes the basics of the vehicle dynamic tyre model, conceived to be a physically based, semi-empirical model for application in connection with multi-body-systems (MBS). On the basis of tyres for a passenger car and a heavy truck the simulated steady state tyre characteristics are shown together and compared with the underlying experimental values. The possibility to link the tyre model TMeasy to any MBS-program is described, as far as it supports the 'Standard Tyre Interface'. As an example, the simulated and experimental data of a heavy truck doing a standardized driving manoeuvre are compared","tok_text":"user-appropri tyre-model for vehicl dynam in standard and limit situat \n when model vehicl for the vehicl dynam simul , special attent must be paid to the model of tyre forc and -torqu , accord to their domin influenc on the result . thi task is not onli about suffici exact represent of the effect forc but also about user-friendli and practic relev applic , especi when the experiment tyre-input-data is incomplet or miss . thi text firstli describ the basic of the vehicl dynam tyre model , conceiv to be a physic base , semi-empir model for applic in connect with multi-body-system ( mb ) . on the basi of tyre for a passeng car and a heavi truck the simul steadi state tyre characterist are shown togeth and compar with the underli experiment valu . the possibl to link the tyre model tmeasi to ani mbs-program is describ , as far as it support the ' standard tyre interfac ' . as an exampl , the simul and experiment data of a heavi truck do a standard drive manoeuvr are compar","ordered_present_kp":[481,29,58,524,568,621,639,655,790,856,950],"keyphrases":["vehicle dynamics","limit situations","tyre modelling","semi-empirical model","multi-body-systems","passenger car","heavy truck","simulated steady state tyre characteristics","TMeasy","Standard Tyre Interface","standardized driving manoeuvre","standard situations","tyre torques"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R","M"]} {"id":"1916","title":"Changes in the entropy and the Tsallis difference information during spontaneous decay and self-organization of nonextensive systems","abstract":"A theoretical-information description of self-organization processes during stimulated transitions between stationary states of open nonextensive systems is presented. S\/sub q\/- and I\/sub q\/-theorems on changes of the entropy and Tsallis difference information measures in the process of evolution in the space of control parameters are proved. The entropy and the Tsallis difference information are derived and their new extreme properties are discussed","tok_text":"chang in the entropi and the tsalli differ inform dure spontan decay and self-organ of nonextens system \n a theoretical-inform descript of self-organ process dure stimul transit between stationari state of open nonextens system is present . s \/ sub q\/- and i \/ sub q\/-theorem on chang of the entropi and tsalli differ inform measur in the process of evolut in the space of control paramet are prove . 
the entropi and the tsalli differ inform are deriv and their new extrem properti are discuss","ordered_present_kp":[13,29,55,73,87,163,318,373],"keyphrases":["entropy","Tsallis difference information","spontaneous decay","self-organization","nonextensive systems","stimulated transitions","information measures","control parameters","nonextensive statistical mechanics"],"prmu":["P","P","P","P","P","P","P","P","M"]} {"id":"1584","title":"Content all clear [workflow & content management]","abstract":"Graeme Muir of SchlumbergerSema cuts through the confusion between content, document and records management","tok_text":"content all clear [ workflow & content manag ] \n graem muir of schlumbergersema cut through the confus between content , document and record manag","ordered_present_kp":[63,31,134],"keyphrases":["content management","SchlumbergerSema","records management","document management"],"prmu":["P","P","P","R"]} {"id":"1678","title":"Parallel interior point schemes for solving multistage convex programming","abstract":"The predictor-corrector interior-point path-following algorithm is promising in solving multistage convex programming problems. Among many other general good features of this algorithm, especially attractive is that the algorithm allows the possibility to parallelise the major computations. The dynamic structure of the multistage problems specifies a block-tridiagonal system at each Newton step of the algorithm. A wrap-around permutation is then used to implement the parallel computation for this step","tok_text":"parallel interior point scheme for solv multistag convex program \n the predictor-corrector interior-point path-follow algorithm is promis in solv multistag convex program problem . among mani other gener good featur of thi algorithm , especi attract is that the algorithm allow the possibl to parallelis the major comput . the dynam structur of the multistag problem specifi a block-tridiagon system at each newton step of the algorithm . a wrap-around permut is then use to implement the parallel comput for thi step","ordered_present_kp":[0,40,71,327,377,408,441,489],"keyphrases":["parallel interior point schemes","multistage convex programming","predictor-corrector interior-point path-following algorithm","dynamic structure","block-tridiagonal system","Newton step","wrap-around permutation","parallel computation"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"1685","title":"Use of web technologies in construction project management: what are the critical success\/failure factors?","abstract":"A concept of how the World Wide Web (WWW) and its associated technologies can be used to manage construction projects has been recognized by practitioners in the construction industry for quite sometime. This concept is often referred to as a Web-Based Project Management System (WPMS). It promises, to enhance construction project documentation and control, and to revolutionize the way construction project teams process and transmit project information. WPMS is an electronic project-management system conducted through the Internet. The system provides a centralized, commonly accessible, reliable means of transmitting and storing project information. Project information is stored on the server and a standard Web browser is used as the gateway to exchange this information, eliminating geographic and hardware platforms boundary","tok_text":"use of web technolog in construct project manag : what are the critic success \/ failur factor ? 
\n a concept of how the world wide web ( www ) and it associ technolog can be use to manag construct project ha been recogn by practition in the construct industri for quit sometim . thi concept is often refer to as a web-bas project manag system ( wpm ) . it promis , to enhanc construct project document and control , and to revolution the way construct project team process and transmit project inform . wpm is an electron project-manag system conduct through the internet . the system provid a central , commonli access , reliabl mean of transmit and store project inform . project inform is store on the server and a standard web browser is use as the gateway to exchang thi inform , elimin geograph and hardwar platform boundari","ordered_present_kp":[313,240,384,70,726],"keyphrases":["success","construction industry","Web-Based Project Management System","project documentation","Web browser","project control","implementation"],"prmu":["P","P","P","P","P","R","U"]} {"id":"169","title":"MRP in a job shop environment using a resource constrained project scheduling model","abstract":"One of the most difficult tasks in a job shop manufacturing environment is to balance schedule and capacity in an ongoing basis. MRP systems are commonly used for scheduling, although their inability to deal with capacity constraints adequately is a severe drawback. In this study, we show that material requirements planning can be done more effectively in a job shop environment using a resource constrained project scheduling model. The proposed model augments MRP models by incorporating capacity constraints and using variable lead time lengths. The efficacy of this approach is tested on MRP systems by comparing the inventory carrying costs and resource allocation of the solutions obtained by the proposed model to those obtained by using a traditional MRP model. In general, it is concluded that the proposed model provides improved schedules with considerable reductions in inventory carrying costs","tok_text":"mrp in a job shop environ use a resourc constrain project schedul model \n one of the most difficult task in a job shop manufactur environ is to balanc schedul and capac in an ongo basi . mrp system are commonli use for schedul , although their inabl to deal with capac constraint adequ is a sever drawback . in thi studi , we show that materi requir plan can be done more effect in a job shop environ use a resourc constrain project schedul model . the propos model augment mrp model by incorpor capac constraint and use variabl lead time length . the efficaci of thi approach is test on mrp system by compar the inventori carri cost and resourc alloc of the solut obtain by the propos model to those obtain by use a tradit mrp model . 
in gener , it is conclud that the propos model provid improv schedul with consider reduct in inventori carri cost","ordered_present_kp":[9,0,32,336,58,263,521,613,638],"keyphrases":["MRP","job shop environment","resource constrained project scheduling model","scheduling","capacity constraints","material requirements planning","variable lead time lengths","inventory carrying costs","resource allocation","project management"],"prmu":["P","P","P","P","P","P","P","P","P","M"]} {"id":"1798","title":"Robustness evaluation of a minimal RBF neural network for nonlinear-data-storage-channel equalisation","abstract":"The authors present a performance-robustness evaluation of the recently developed minimal resource allocation network (MRAN) for equalisation in highly nonlinear magnetic recording channels in disc storage systems. Unlike communication systems, equalisation of signals in these channels is a difficult problem, as they are corrupted by data-dependent noise and highly nonlinear distortions. Nair and Moon (1997) have proposed a maximum signal to distortion ratio (MSDR) equaliser for data storage channels, which uses a specially designed neural network, where all the parameters of the neural network are determined theoretically, based on the exact knowledge of the channel model parameters. In the present paper, the performance of the MSDR equaliser is compared with that of the MRAN equaliser using a magnetic recording channel model, under Conditions that include variations in partial erasure, jitter, width and noise power, as well as model mismatch. Results from the study indicate that the less complex MRAN equaliser gives consistently better performance robustness than the MSDR equaliser in terms of signal to distortion ratios (SDRs)","tok_text":"robust evalu of a minim rbf neural network for nonlinear-data-storage-channel equalis \n the author present a performance-robust evalu of the recent develop minim resourc alloc network ( mran ) for equalis in highli nonlinear magnet record channel in disc storag system . unlik commun system , equalis of signal in these channel is a difficult problem , as they are corrupt by data-depend nois and highli nonlinear distort . nair and moon ( 1997 ) have propos a maximum signal to distort ratio ( msdr ) equalis for data storag channel , which use a special design neural network , where all the paramet of the neural network are determin theoret , base on the exact knowledg of the channel model paramet . in the present paper , the perform of the msdr equalis is compar with that of the mran equalis use a magnet record channel model , under condit that includ variat in partial erasur , jitter , width and nois power , as well as model mismatch . 
result from the studi indic that the less complex mran equalis give consist better perform robust than the msdr equalis in term of signal to distort ratio ( sdr )","ordered_present_kp":[0,156,208,250,47,376,397,24,787,747],"keyphrases":["robustness evaluation","RBF neural network","nonlinear-data-storage-channel equalisation","minimal resource allocation network","highly nonlinear magnetic recording channels","disc storage systems","data-dependent noise","highly nonlinear distortions","MSDR equaliser","MRAN equaliser","maximum signal to distortion ratio equaliser","digital magnetic recording","jitter noise"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","M","R"]} {"id":"1464","title":"LR parsing for conjunctive grammars","abstract":"The generalized LR parsing algorithm for context-free grammars, introduced by Tomita in 1986, is a polynomial-time implementation of nondeterministic LR parsing that uses graph-structured stack to represent the contents of the nondeterministic parser's pushdown for all possible branches of computation at a single computation step. It has been specifically developed as a solution for practical parsing tasks arising in computational linguistics, and indeed has proved itself to be very suitable for natural language processing. Conjunctive grammars extend context-free grammars by allowing the use of an explicit intersection operation within grammar rules. This paper develops a new LR-style parsing algorithm for these grammars, which is based on the very same idea of a graph-structured pushdown, where the simultaneous existence of several paths in the graph is used to perform the mentioned intersection operation. The underlying finite automata are treated in the most general way: instead of showing the algorithm's correctness for some particular way of constructing automata, the paper defines a wide class of automata usable with a given grammar, which includes not only the traditional LR(k) automata, but also, for instance, a trivial automaton with a single reachable state. A modification of the SLR(k) table construction method that makes use of specific properties of conjunctive grammars is provided as one possible way of making finite automata to use with the algorithm","tok_text":"lr pars for conjunct grammar \n the gener lr pars algorithm for context-fre grammar , introduc by tomita in 1986 , is a polynomial-tim implement of nondeterminist lr pars that use graph-structur stack to repres the content of the nondeterminist parser 's pushdown for all possibl branch of comput at a singl comput step . it ha been specif develop as a solut for practic pars task aris in comput linguist , and inde ha prove itself to be veri suitabl for natur languag process . conjunct grammar extend context-fre grammar by allow the use of an explicit intersect oper within grammar rule . thi paper develop a new lr-style pars algorithm for these grammar , which is base on the veri same idea of a graph-structur pushdown , where the simultan exist of sever path in the graph is use to perform the mention intersect oper . the underli finit automata are treat in the most gener way : instead of show the algorithm 's correct for some particular way of construct automata , the paper defin a wide class of automata usabl with a given grammar , which includ not onli the tradit lr(k ) automata , but also , for instanc , a trivial automaton with a singl reachabl state . 
a modif of the slr(k ) tabl construct method that make use of specif properti of conjunct grammar is provid as one possibl way of make finit automata to use with the algorithm","ordered_present_kp":[12,35,179,289,388,454,63,545,576,837,1123,1148],"keyphrases":["conjunctive grammars","generalized LR parsing algorithm","context-free grammars","graph-structured stack","computation","computational linguistics","natural language processing","explicit intersection operation","grammar rules","finite automata","trivial automaton","single reachable state","nondeterministic parser pushdown","Boolean closure","deterministic context-free languages"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R","U","M"]} {"id":"1499","title":"A digital-driving system for smart vehicles","abstract":"In the wake of the computer and information technology revolutions, vehicles are undergoing dramatic changes in their capabilities and how they interact with drivers. Although some vehicles can decide to either generate warnings for the human driver or control the vehicle autonomously, they must usually make these decisions in real time with only incomplete information. So, human drivers must still maintain control over the vehicle. I sketch a digital driving behavior model. By simulating and analyzing driver behavior during different maneuvers such as lane changing, lane following, and traffic avoidance, researchers participating in the Beijing Institute of Technology's digital-driving project will be able to examine the possible correlations or causal relations between the smart vehicle, IVISs, the intelligent road-traffic-information network, and the driver. We aim to successfully demonstrate that a digital-driving system can provide a direction for developing human-centered smart vehicles","tok_text":"a digital-driv system for smart vehicl \n in the wake of the comput and inform technolog revolut , vehicl are undergo dramat chang in their capabl and how they interact with driver . although some vehicl can decid to either gener warn for the human driver or control the vehicl autonom , they must usual make these decis in real time with onli incomplet inform . so , human driver must still maintain control over the vehicl . i sketch a digit drive behavior model . by simul and analyz driver behavior dure differ maneuv such as lane chang , lane follow , and traffic avoid , research particip in the beij institut of technolog 's digital-driv project will be abl to examin the possibl correl or causal relat between the smart vehicl , iviss , the intellig road-traffic-inform network , and the driver . we aim to success demonstr that a digital-driv system can provid a direct for develop human-cent smart vehicl","ordered_present_kp":[890,748,514,560,542,529],"keyphrases":["maneuvers","lane changing","lane following","traffic avoidance","intelligence","human-centered smart vehicles","digital driving system","in-vehicle information systems","intelligent driver-vehicle interface","ecological driver-vehicle interface","vehicle control","interactive communication","intelligent road traffic information network","intelligent transportation systems"],"prmu":["P","P","P","P","P","P","R","M","M","U","R","M","M","M"]} {"id":"1765","title":"On bandlimited scaling function","abstract":"This paper discusses band-limited scaling function, especially the single interval band case and three interval band cases. Their relationship to oversampling property and weakly translation invariance are also studied. 
At the end, we propose an open problem","tok_text":"on bandlimit scale function \n thi paper discuss band-limit scale function , especi the singl interv band case and three interv band case . their relationship to oversampl properti and weakli translat invari are also studi . at the end , we propos an open problem","ordered_present_kp":[3,93,161,184],"keyphrases":["bandlimited scaling function","interval band case","oversampling property","weakly translation invariance"],"prmu":["P","P","P","P"]} {"id":"1873","title":"A phytography of WALDMEISTER","abstract":"The architecture of the WALDMEISTER prover for unit equational deduction is based on a strict separation of active and passive facts. After an inspection of the system's proof procedure, the representation of each of the central data structures is outlined, namely indexing for the active facts, compression for the passive facts, successor sets for the hypotheses, and minimal recording of inference steps for the proof object. In order to cope with large search spaces, specialized redundancy criteria are employed, and the empirically gained control knowledge is integrated to ease the use of the system. The paper concludes with a quantitative comparison of the WALDMEISTER versions over the years, and a view of the future prospects","tok_text":"a phytographi of waldmeist \n the architectur of the waldmeist prover for unit equat deduct is base on a strict separ of activ and passiv fact . after an inspect of the system 's proof procedur , the represent of each of the central data structur is outlin , name index for the activ fact , compress for the passiv fact , successor set for the hypothes , and minim record of infer step for the proof object . in order to cope with larg search space , special redund criteria are employ , and the empir gain control knowledg is integr to eas the use of the system . the paper conclud with a quantit comparison of the waldmeist version over the year , and a view of the futur prospect","ordered_present_kp":[17,73,130,277,232,263,343,2,374,430,458,667],"keyphrases":["phytography","WALDMEISTER","unit equational deduction","passive facts","data structures","indexing","active facts","hypotheses","inference","large search spaces","redundancy","future prospects","theorem prover","CADE ATP System Competition"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","M","M"]} {"id":"1836","title":"Parcel boundary identification with computer-assisted boundary overlay process for Taiwan","abstract":"The study investigates the design of a process for parcel boundary identification with cadastral map overlay using the principle of least squares. The objective of this research is to provide an objective tool for boundary identification survey. The proposed process includes an adjustment model, a weighting scheme, and other related operations. A numerical example is included","tok_text":"parcel boundari identif with computer-assist boundari overlay process for taiwan \n the studi investig the design of a process for parcel boundari identif with cadastr map overlay use the principl of least squar . the object of thi research is to provid an object tool for boundari identif survey . the propos process includ an adjust model , a weight scheme , and other relat oper . 
a numer exampl is includ","ordered_present_kp":[0,74,159,256,272,327,344],"keyphrases":["parcel boundary identification","Taiwan","cadastral map overlay","objective tool","boundary identification survey","adjustment model","weighting scheme","computer assisted boundary overlay process","Gauss-Marker model","geographic information system","weighted least squares adjustment"],"prmu":["P","P","P","P","P","P","P","M","M","U","R"]} {"id":"1758","title":"Hilbert modular threefolds of arithmetic genus one","abstract":"D. Weisser (1981) proved that there are exactly four Galois cubic number fields with Hilbert modular threefolds of arithmetic genus one. In this paper, we extend Weisser's work to cover all cubic number fields. Our main result is that there are exactly 33 fields with Hilbert modular threefolds of arithmetic genus one. These fields are enumerated explicitly","tok_text":"hilbert modular threefold of arithmet genu one \n d. weisser ( 1981 ) prove that there are exactli four galoi cubic number field with hilbert modular threefold of arithmet genu one . in thi paper , we extend weisser 's work to cover all cubic number field . our main result is that there are exactli 33 field with hilbert modular threefold of arithmet genu one . these field are enumer explicitli","ordered_present_kp":[0,29,103],"keyphrases":["Hilbert modular threefolds","arithmetic genus one","Galois cubic number fields"],"prmu":["P","P","P"]} {"id":"154","title":"Verifying concurrent systems with symbolic execution","abstract":"Current techniques for interactively proving temporal properties of concurrent systems translate transition systems into temporal formulas by introducing program counter variables. Proofs are not intuitive, because control flow is not explicitly considered. For sequential programs symbolic execution is a very intuitive, interactive proof strategy. In this paper we adopt this technique for parallel programs. Properties are formulated in interval temporal logic. An implementation in the interactive theorem prover KIV has shown that this technique offers a high degree of automation and allows simple, local invariants","tok_text":"verifi concurr system with symbol execut \n current techniqu for interact prove tempor properti of concurr system translat transit system into tempor formula by introduc program counter variabl . proof are not intuit , becaus control flow is not explicitli consid . for sequenti program symbol execut is a veri intuit , interact proof strategi . in thi paper we adopt thi techniqu for parallel program . properti are formul in interv tempor logic . an implement in the interact theorem prover kiv ha shown that thi techniqu offer a high degre of autom and allow simpl , local invari","ordered_present_kp":[27,79,7,122,142,169,269,384,468,569],"keyphrases":["concurrent systems","symbolic execution","temporal properties","transition systems","temporal formulas","program counter variables","sequential programs","parallel programs","interactive theorem prover KIV","local invariants","concurrent systems verification"],"prmu":["P","P","P","P","P","P","P","P","P","P","M"]} {"id":"1544","title":"Driving the NKK Smartswitch.2. Graphics and text","abstract":"Whether your message is one of workplace safety or world peace, the long nights of brooding over ways to tell the world are over. Part 1 described the basic interface to drive the Smartswitch. Part 2 adds the bells and whistles to allow both text and messages to be placed anywhere on the screen. 
It considers character generation, graphic generation and the user interface","tok_text":"drive the nkk smartswitch.2 . graphic and text \n whether your messag is one of workplac safeti or world peac , the long night of brood over way to tell the world are over . part 1 describ the basic interfac to drive the smartswitch . part 2 add the bell and whistl to allow both text and messag to be place anywher on the screen . it consid charact gener , graphic gener and the user interfac","ordered_present_kp":[42,62,341,357,379],"keyphrases":["text","messages","character generation","graphic generation","user interface","NKK Smartswitch","computer graphics"],"prmu":["P","P","P","P","P","R","M"]} {"id":"1501","title":"Computational challenges in cell simulation: a software engineering approach","abstract":"Molecular biology's advent in the 20th century has exponentially increased our knowledge about the inner workings of life. We have dozens of completed genomes and an array of high-throughput methods to characterize gene encodings and gene product operation. The question now is how we will assemble the various pieces. In other words, given sufficient information about a living cell's molecular components, can we predict its behavior? We introduce the major classes of cellular processes relevant to modeling, discuss software engineering's role in cell simulation, and identify cell simulation requirements. Our E-Cell project aims to develop the theories, techniques, and software platforms necessary for whole-cell-scale modeling, simulation, and analysis. Since the project's launch in 1996, we have built a variety of cell models, and we are currently developing new models that vary with respect to species, target subsystem, and overall scale","tok_text":"comput challeng in cell simul : a softwar engin approach \n molecular biolog 's advent in the 20th centuri ha exponenti increas our knowledg about the inner work of life . we have dozen of complet genom and an array of high-throughput method to character gene encod and gene product oper . the question now is how we will assembl the variou piec . in other word , given suffici inform about a live cell 's molecular compon , can we predict it behavior ? we introduc the major class of cellular process relev to model , discuss softwar engin 's role in cell simul , and identifi cell simul requir . our e-cel project aim to develop the theori , techniqu , and softwar platform necessari for whole-cell-scal model , simul , and analysi . sinc the project 's launch in 1996 , we have built a varieti of cell model , and we are current develop new model that vari with respect to speci , target subsystem , and overal scale","ordered_present_kp":[19,34,59,601,689],"keyphrases":["cell simulation","software engineering","molecular biology","E-Cell project","whole-cell-scale modeling","object-oriented design"],"prmu":["P","P","P","P","P","U"]} {"id":"1645","title":"Effects of the transition to a client-centred team organization in administrative surveying work","abstract":"A new work organization was introduced in administrative surveying work in Sweden during 1998. The new work organization implied a transition to a client-centred team-based organization and required a change in competence from specialist to generalist knowledge as well as a transition to a new information technology, implying a greater integration within the company. 
The aim of this study was to follow the surveyors for two years from the start of the transition and investigate how perceived consequences of the transition, job, organizational factors, well-being and effectiveness measures changed between 1998 and 2000. The Teamwork Profile and QPS Nordic questionnaire were used. The 205 surveyors who participated in all three study phases constituted the study group. The result showed that surveyors who perceived that they were working as generalists rated the improvements in job and organizational factors significantly higher than those who perceived that they were not yet generalists. Improvements were noted in 2000 in quality of service to clients, time available to handle a case and effectiveness of teamwork in a transfer to a team-based work organization group, cohesion and continuous improvement practices-for example, learning by doing, mentoring and guided delegation-were important to improve the social effectiveness of group work","tok_text":"effect of the transit to a client-centr team organ in administr survey work \n a new work organ wa introduc in administr survey work in sweden dure 1998 . the new work organ impli a transit to a client-centr team-bas organ and requir a chang in compet from specialist to generalist knowledg as well as a transit to a new inform technolog , impli a greater integr within the compani . the aim of thi studi wa to follow the surveyor for two year from the start of the transit and investig how perceiv consequ of the transit , job , organiz factor , well-b and effect measur chang between 1998 and 2000 . the teamwork profil and qp nordic questionnair were use . the 205 surveyor who particip in all three studi phase constitut the studi group . the result show that surveyor who perceiv that they were work as generalist rate the improv in job and organiz factor significantli higher than those who perceiv that they were not yet generalist . improv were note in 2000 in qualiti of servic to client , time avail to handl a case and effect of teamwork in a transfer to a team-bas work organ group , cohes and continu improv practices-for exampl , learn by do , mentor and guid delegation-wer import to improv the social effect of group work","ordered_present_kp":[27,54,320,373,523,529,557,605,625,1209],"keyphrases":["client-centred team organization","administrative surveying work","information technology","company","job","organizational factors","effectiveness measures","Teamwork Profile","QPS Nordic questionnaire","social effectiveness","public administrative sector"],"prmu":["P","P","P","P","P","P","P","P","P","P","M"]} {"id":"1600","title":"The development and evaluation of a fuzzy logic expert system for renal transplantation assignment: Is this a useful tool?","abstract":"Allocating donor kidneys to patients is a complex, multicriteria decision-making problem which involves not only medical, but also ethical and political issues. In this paper, a fuzzy logic expert system approach was proposed as an innovative way to deal with the vagueness and complexity faced by medical doctors in kidney allocation decision making. A pilot fuzzy logic expert system for kidney allocation was developed and evaluated in comparison with two existing allocation algorithms: a priority sorting system used by multiple organ retrieval and exchange (MORE) in Canada and a point scoring systems used by united network for organ sharing (UNOS) in US. 
Our simulated experiment based on real data indicated that the fuzzy logic system can represent the expert's thinking well in handling complex tradeoffs, and overall, the fuzzy logic derived recommendations were more acceptable to the expert than those from the MORE and UNOS algorithms","tok_text":"the develop and evalu of a fuzzi logic expert system for renal transplant assign : is thi a use tool ? \n alloc donor kidney to patient is a complex , multicriteria decision-mak problem which involv not onli medic , but also ethic and polit issu . in thi paper , a fuzzi logic expert system approach wa propos as an innov way to deal with the vagu and complex face by medic doctor in kidney alloc decis make . a pilot fuzzi logic expert system for kidney alloc wa develop and evalu in comparison with two exist alloc algorithm : a prioriti sort system use by multipl organ retriev and exchang ( more ) in canada and a point score system use by unit network for organ share ( uno ) in us . our simul experi base on real data indic that the fuzzi logic system can repres the expert 's think well in handl complex tradeoff , and overal , the fuzzi logic deriv recommend were more accept to the expert than those from the more and uno algorithm","ordered_present_kp":[57,27,111,150,383,530,617,643,692],"keyphrases":["fuzzy logic expert system","renal transplantation assignment","donor kidneys","multicriteria decision-making problem","kidney allocation decision making","priority sorting system","point scoring systems","united network for organ sharing","simulated experiment","multiple organ retrieval exchange","complex tradeoff handling"],"prmu":["P","P","P","P","P","P","P","P","P","R","R"]} {"id":"1871","title":"Strong and weak points of the MUSCADET theorem prover-examples from CASC-JC","abstract":"MUSCADET is a knowledge-based theorem prover based on natural deduction. It has participated in CADE Automated theorem proving System Competitions. The results show its complementarity with regard to resolution-based provers. This paper presents some of its crucial methods and gives some examples of MUSCADET proofs from the last competition (CASC-JC in IJCAR 2001)","tok_text":"strong and weak point of the muscadet theorem prover-exampl from casc-jc \n muscadet is a knowledge-bas theorem prover base on natur deduct . it ha particip in cade autom theorem prove system competit . the result show it complementar with regard to resolution-bas prover . thi paper present some of it crucial method and give some exampl of muscadet proof from the last competit ( casc-jc in ijcar 2001 )","ordered_present_kp":[29,65,89,126,159,249],"keyphrases":["MUSCADET","CASC-JC","knowledge-based theorem prover","natural deduction","CADE Automated theorem proving System Competitions","resolution-based provers"],"prmu":["P","P","P","P","P","P"]} {"id":"1834","title":"A formal model of correctness in a cadastre","abstract":"A key issue for cadastral systems is the maintenance of their correctness. Correctness is defined to be the proper correspondence between the valid legal situation and the content of the cadastre. This correspondence is generally difficult to achieve, since the cadastre is not a complete representation of all aspects influencing the legal situation in reality. The goal of the paper is to develop a formal model comprising representations of the cadastre and of reality that allows the simulation and investigation of cases where this correspondence is potentially violated. 
For this purpose the model consists of two parts, the first part represents the valid legal situation and the second part represents the cadastre. This makes it feasible to mark the differences between reality and the cadastre. The marking together with the two parts of the model facilitate the discussion of issues in \"real-world\" cadastral systems where incorrectness occurs. In order to develop a formal model, the paper uses the transfer of ownership of a parcel between two persons as minimal case study. The foundation for the formalization is a modern version of the situation calculus. The focus moves from the analysis of the cadastre to the preparation of a conceptual and a formalized model and the implementation of a prototype","tok_text":"a formal model of correct in a cadastr \n a key issu for cadastr system is the mainten of their correct . correct is defin to be the proper correspond between the valid legal situat and the content of the cadastr . thi correspond is gener difficult to achiev , sinc the cadastr is not a complet represent of all aspect influenc the legal situat in realiti . the goal of the paper is to develop a formal model compris represent of the cadastr and of realiti that allow the simul and investig of case where thi correspond is potenti violat . for thi purpos the model consist of two part , the first part repres the valid legal situat and the second part repres the cadastr . thi make it feasibl to mark the differ between realiti and the cadastr . the mark togeth with the two part of the model facilit the discuss of issu in \" real-world \" cadastr system where incorrect occur . in order to develop a formal model , the paper use the transfer of ownership of a parcel between two person as minim case studi . the foundat for the formal is a modern version of the situat calculu . the focu move from the analysi of the cadastr to the prepar of a conceptu and a formal model and the implement of a prototyp","ordered_present_kp":[31,56,168,2,932,988,1061,2],"keyphrases":["formal model","formal model","cadastre","cadastral systems","legal situation","transfer of ownership","minimal case study","situation calculus","formal correctness model","correctness maintenance","formalized model"],"prmu":["P","P","P","P","P","P","P","P","R","R","P"]} {"id":"1647","title":"Examining children's reading performance and preference for different computer-displayed text","abstract":"This study investigated how common online text affects reading performance of elementary school-age children by examining the actual and perceived readability of four computer-displayed typefaces at 12- and 14-point sizes. Twenty-seven children, ages 9 to 11, were asked to read eight children's passages and identify erroneous\/substituted words while reading. Comic Sans MS, Arial and Times New Roman typefaces, regardless of size, were found to be more readable (as measured by a reading efficiency score) than Courier New. No differences in reading speed were found for any of the typeface combinations. In general, the 14-point size and the examined sans serif typefaces were perceived as being the easiest to read, fastest, most attractive, and most desirable for school-related material. In addition, participants significantly preferred Comic Sans MS and 14-point Arial to 12-point Courier. 
Recommendations for appropriate typeface combinations for children reading on computers are discussed","tok_text":"examin children 's read perform and prefer for differ computer-display text \n thi studi investig how common onlin text affect read perform of elementari school-ag children by examin the actual and perceiv readabl of four computer-display typefac at 12- and 14-point size . twenty-seven children , age 9 to 11 , were ask to read eight children 's passag and identifi erron \/ substitut word while read . comic san ms , arial and time new roman typefac , regardless of size , were found to be more readabl ( as measur by a read effici score ) than courier new . no differ in read speed were found for ani of the typefac combin . in gener , the 14-point size and the examin san serif typefac were perceiv as be the easiest to read , fastest , most attract , and most desir for school-rel materi . in addit , particip significantli prefer comic san ms and 14-point arial to 12-point courier . recommend for appropri typefac combin for children read on comput are discuss","ordered_present_kp":[54,108,142,221],"keyphrases":["computer-displayed text","online text","elementary school-age children","computer-displayed typefaces","child reading performance","fonts","user interface","human factors","educational computing"],"prmu":["P","P","P","P","M","U","U","U","M"]} {"id":"1602","title":"An optimization approach to plan for reusable software components","abstract":"It is well acknowledged in software engineering that there is a great potential for accomplishing significant productivity improvements through the implementation of a successful software reuse program. On the other hand, such gains are attainable only by instituting detailed action plans at both the organizational and program level. Given this need, the paucity of research papers related to planning, and in particular, optimized planning is surprising. This research, which is aimed at this gap, brings out an application of optimization for the planning of reusable software components (SCs). We present a model that selects a set of SCs that must be built, in order to lower development and adaptation costs. We also provide implications to project management based on simulation, an approach that has been adopted by other cost models in the software engineering literature. Such a prescriptive model does not exist in the literature","tok_text":"an optim approach to plan for reusabl softwar compon \n it is well acknowledg in softwar engin that there is a great potenti for accomplish signific product improv through the implement of a success softwar reus program . on the other hand , such gain are attain onli by institut detail action plan at both the organiz and program level . given thi need , the pauciti of research paper relat to plan , and in particular , optim plan is surpris . thi research , which is aim at thi gap , bring out an applic of optim for the plan of reusabl softwar compon ( sc ) . we present a model that select a set of sc that must be built , in order to lower develop and adapt cost . we also provid implic to project manag base on simul , an approach that ha been adopt by other cost model in the softwar engin literatur . 
such a prescript model doe not exist in the literatur","ordered_present_kp":[80,148,198,3,286,421,30,657,695,717],"keyphrases":["optimization","reusable software components","software engineering","productivity improvements","software reuse program","action plans","optimized planning","adaptation costs","project management","simulation","development costs"],"prmu":["P","P","P","P","P","P","P","P","P","P","R"]} {"id":"1929","title":"Optimal time of switching between portfolios of securities","abstract":"Optimal time of switching between several portfolios of securities are found for the purpose of profit maximization. Two methods of their determination are considered. The cases with three and n portfolios are studied in detail","tok_text":"optim time of switch between portfolio of secur \n optim time of switch between sever portfolio of secur are found for the purpos of profit maxim . two method of their determin are consid . the case with three and n portfolio are studi in detail","ordered_present_kp":[0,29,132],"keyphrases":["optimal time","portfolios of securities","profit maximization"],"prmu":["P","P","P"]} {"id":"156","title":"Using extended logic programming for alarm-correlation in cellular phone networks","abstract":"Alarm correlation is a necessity in large mobile phone networks, where the alarm bursts resulting from severe failures would otherwise overload the network operators. We describe how to realize alarm-correlation in cellular phone networks using extended logic programming. To this end, we describe an algorithm and system solving the problem, a model of a mobile phone network application, and a detailed solution for a specific scenario","tok_text":"use extend logic program for alarm-correl in cellular phone network \n alarm correl is a necess in larg mobil phone network , where the alarm burst result from sever failur would otherwis overload the network oper . we describ how to realiz alarm-correl in cellular phone network use extend logic program . to thi end , we describ an algorithm and system solv the problem , a model of a mobil phone network applic , and a detail solut for a specif scenario","ordered_present_kp":[4,29,45,98,200],"keyphrases":["extended logic programming","alarm-correlation","cellular phone networks","large mobile phone networks","network operators","fault diagnosis"],"prmu":["P","P","P","P","P","U"]} {"id":"1546","title":"Necessary conditions of optimality for impulsive systems on Banach spaces","abstract":"We present necessary conditions of optimality for optimal control problems arising in systems governed by impulsive evolution equations on Banach spaces. Basic notations and terminologies are first presented and necessary conditions of optimality are presented. Special cases are discussed and we present an application to the classical linear quadratic regulator problem","tok_text":"necessari condit of optim for impuls system on banach space \n we present necessari condit of optim for optim control problem aris in system govern by impuls evolut equat on banach space . basic notat and terminolog are first present and necessari condit of optim are present . 
special case are discuss and we present an applic to the classic linear quadrat regul problem","ordered_present_kp":[342,20,30,103,150,47,0],"keyphrases":["necessary conditions","optimality","impulsive systems","Banach spaces","optimal control","impulsive evolution equations","linear quadratic regulator"],"prmu":["P","P","P","P","P","P","P"]} {"id":"1503","title":"Neural networks for web content filtering","abstract":"With the proliferation of harmful Internet content such as pornography, violence, and hate messages, effective content-filtering systems are essential. Many Web-filtering systems are commercially available, and potential users can download trial versions from the Internet. However, the techniques these systems use are insufficiently accurate and do not adapt well to the ever-changing Web. To solve this problem, we propose using artificial neural networks to classify Web pages during content filtering. We focus on blocking pornography because it is among the most prolific and harmful Web content. However, our general framework is adaptable for filtering other objectionable Web material","tok_text":"neural network for web content filter \n with the prolifer of harm internet content such as pornographi , violenc , and hate messag , effect content-filt system are essenti . mani web-filt system are commerci avail , and potenti user can download trial version from the internet . howev , the techniqu these system use are insuffici accur and do not adapt well to the ever-chang web . to solv thi problem , we propos use artifici neural network to classifi web page dure content filter . we focu on block pornographi becaus it is among the most prolif and harm web content . howev , our gener framework is adapt for filter other objection web materi","ordered_present_kp":[420,19,105,555],"keyphrases":["Web content filtering","violence","artificial neural networks","harmful Web content","Intelligent Classification Engine","learning capabilities","pornographic\/nonpornographic Web page differentiation","Web page classification"],"prmu":["P","P","P","P","U","U","M","M"]} {"id":"1687","title":"Cleared for take-off [Hummingbird Enterprise]","abstract":"A recent Gartner report identifies Hummingbird in the first wave of vendors as an early example of convergence in the 'smart enterprise suite' market. We spoke to Hummingbird's Marketing Director for Northern Europe","tok_text":"clear for take-off [ hummingbird enterpris ] \n a recent gartner report identifi hummingbird in the first wave of vendor as an earli exampl of converg in the ' smart enterpris suit ' market . we spoke to hummingbird 's market director for northern europ","ordered_present_kp":[159,21],"keyphrases":["Hummingbird Enterprise","smart enterprise suite","information content","knowledge content","collaboration"],"prmu":["P","P","U","U","U"]} {"id":"1914","title":"Vacuum-compatible vibration isolation stack for an interferometric gravitational wave detector TAMA300","abstract":"Interferometric gravitational wave detectors require a large degree of vibration isolation. For this purpose, a multilayer stack constructed of rubber and metal blocks is suitable, because it provides isolation in all degrees of freedom at once. In TAMA300, a 300 m interferometer in Japan, long-term dimensional stability and compatibility with an ultrahigh vacuum environment of about 10\/sup -6\/ Pa are also required. 
To keep the interferometer at its operating point despite ground strain and thermal drift of the isolation system, a thermal actuator was introduced. To prevent the high outgassing rate of the rubber from spoiling the vacuum, the rubber blocks were enclosed by gas-tight bellows. Using these techniques, we have successfully developed a three-layer stack which has a vibration isolation ratio of more than 10\/sup 3\/ at 300 Hz with control of drift and enough vacuum compatibility","tok_text":"vacuum-compat vibrat isol stack for an interferometr gravit wave detector tama300 \n interferometr gravit wave detector requir a larg degre of vibrat isol . for thi purpos , a multilay stack construct of rubber and metal block is suitabl , becaus it provid isol in all degre of freedom at onc . in tama300 , a 300 m interferomet in japan , long-term dimension stabil and compat with an ultrahigh vacuum environ of about 10 \/ sup -6\/ pa are also requir . to keep the interferomet at it oper point despit ground strain and thermal drift of the isol system , a thermal actuat wa introduc . to prevent the high outgass rate of the rubber from spoil the vacuum , the rubber block were enclos by gas-tight bellow . use these techniqu , we have success develop a three-lay stack which ha a vibrat isol ratio of more than 10 \/ sup 3\/ at 300 hz with control of drift and enough vacuum compat","ordered_present_kp":[14,39,661,175,214,339,385,484,502,520,557,689,868,309,419,828],"keyphrases":["vibration isolation stack","interferometric gravitational wave detectors","multilayer stack","metal blocks","300 m","long-term dimensional stability","ultrahigh vacuum environment","10\/sup -6\/ Pa","operating point","ground strain","thermal drift","thermal actuator","rubber blocks","gas-tight bellows","300 Hz","vacuum compatibility","TAMA300 interferometer","rubber outgassing"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R"]} {"id":"1767","title":"Bivariate fractal interpolation functions on rectangular domains","abstract":"Non-tensor product bivariate fractal interpolation functions defined on gridded rectangular domains are constructed. Linear spaces consisting of these functions are introduced. The relevant Lagrange interpolation problem is discussed. A negative result about the existence of affine fractal interpolation functions defined on such domains is obtained","tok_text":"bivari fractal interpol function on rectangular domain \n non-tensor product bivari fractal interpol function defin on grid rectangular domain are construct . linear space consist of these function are introduc . the relev lagrang interpol problem is discuss . a neg result about the exist of affin fractal interpol function defin on such domain is obtain","ordered_present_kp":[0,36,118,158,222,292],"keyphrases":["bivariate fractal interpolation functions","rectangular domains","gridded rectangular domains","linear spaces","Lagrange interpolation problem","affine fractal interpolation functions"],"prmu":["P","P","P","P","P","P"]} {"id":"1809","title":"Approach to adaptive neural net-based H\/sub infinity \/ control design","abstract":"An approach is investigated for the adaptive neural net-based H\/sub infinity \/ control design of a class of nonlinear uncertain systems. In the proposed framework, two multilayer feedforward neural networks are constructed as an alternative to approximate the nonlinear system. 
The neural networks are piecewisely interpolated to generate a linear differential inclusion model by which a linear state feedback H\/sub infinity \/ control law can be applied. An adaptive weight adjustment mechanism for the multilayer feedforward neural networks is developed to ensure H\/sub infinity \/ regulation performance. It is shown that finding the control gain matrices can be transformed into a standard linear matrix inequality problem and solved via a developed recurrent neural network","tok_text":"approach to adapt neural net-bas h \/ sub infin \/ control design \n an approach is investig for the adapt neural net-bas h \/ sub infin \/ control design of a class of nonlinear uncertain system . in the propos framework , two multilay feedforward neural network are construct as an altern to approxim the nonlinear system . the neural network are piecewis interpol to gener a linear differenti inclus model by which a linear state feedback h \/ sub infin \/ control law can be appli . an adapt weight adjust mechan for the multilay feedforward neural network is develop to ensur h \/ sub infin \/ regul perform . it is shown that find the control gain matric can be transform into a standard linear matrix inequ problem and solv via a develop recurr neural network","ordered_present_kp":[12,164,223,344,373,415,632,685,736],"keyphrases":["adaptive neural net-based H\/sub infinity \/ control design","nonlinear uncertain systems","multilayer feedforward neural networks","piecewise interpolation","linear differential inclusion model","linear state feedback","control gain matrices","linear matrix inequality problem","recurrent neural network","LMI"],"prmu":["P","P","P","P","P","P","P","P","P","U"]} {"id":"1466","title":"Feldkamp-type image reconstruction from equiangular data","abstract":"The cone-beam approach for image reconstruction attracts increasing attention in various applications, especially medical imaging. Previously, the traditional practical cone-beam reconstruction method, the Feldkamp algorithm, was generalized into the case of spiral\/helical scanning loci with equispatial cone-beam projection data. In this paper, we formulated the generalized Feldkamp algorithm in the case of equiangular cone-beam projection data, and performed numerical simulation to evaluate the image quality. Because medical multi-slice\/cone-beam CT scanners typically use equiangular projection data, our new formula may be useful in this area as a framework for further refinement and a benchmark for comparison","tok_text":"feldkamp-typ imag reconstruct from equiangular data \n the cone-beam approach for imag reconstruct attract increas attent in variou applic , especi medic imag . previous , the tradit practic cone-beam reconstruct method , the feldkamp algorithm , wa gener into the case of spiral \/ helic scan loci with equispati cone-beam project data . in thi paper , we formul the gener feldkamp algorithm in the case of equiangular cone-beam project data , and perform numer simul to evalu the imag qualiti . 
Because medical multi-slice\/cone-beam CT scanners typically use equiangular projection data, our new formula may be useful in this area as a framework for further refinement and a benchmark for comparison","tok_text":"feldkamp-typ imag reconstruct from equiangular data \n the cone-beam approach for imag reconstruct attract increas attent in variou applic , especi medic imag . previous , the tradit practic cone-beam reconstruct method , the feldkamp algorithm , wa gener into the case of spiral \/ helic scan loci with equispati cone-beam project data . in thi paper , we formul the gener feldkamp algorithm in the case of equiangular cone-beam project data , and perform numer simul to evalu the imag qualiti .
becaus medic multi-slic \/ cone-beam ct scanner typic use equiangular project data , our new formula may be use in thi area as a framework for further refin and a benchmark for comparison","ordered_present_kp":[0,35,58,147,182,272,302,366,406,455,480,502],"keyphrases":["Feldkamp-type image reconstruction","equiangular data","cone-beam approach","medical imaging","practical cone-beam reconstruction method","spiral\/helical scanning loci","equispatial cone-beam projection data","generalized Feldkamp algorithm","equiangular cone-beam projection data","numerical simulation","image quality","medical multi-slice\/cone-beam CT scanners"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1895","title":"An algorithm combining neural networks with fundamental parameters","abstract":"An algorithm combining neural networks with the fundamental parameters equations (NNFP) is proposed for making corrections for non-linear matrix effects in x-ray fluorescence analysis. In the algorithm, neural networks were applied to relate the concentrations of components to both the measured intensities and the relative theoretical intensities calculated by the fundamental parameter equations. The NNFP algorithm is compared with the classical theoretical correction models, including the fundamental parameters approach, the Lachance-Traill model, a hyperbolic function model and the COLA algorithm. For an alloy system with 15 measured elements, in most cases, the prediction errors of the NNFP algorithm are lower than those of the fundamental parameters approach, the Lachance-Traill model, the hyperbolic function model and the COLA algorithm separately. If there are the serious matrix effects, such as matrix effects among Cr, Fe and Ni, the NNFP algorithm generally decreased predictive errors as compared with the classical models, except for the case of Cr by the fundamental parameters approach. The main reason why the NNFP algorithm has generally a better predictive ability than the classical theoretical correction models might be that neural networks can better calibrate the non-linear matrix effects in a complex multivariate system","tok_text":"an algorithm combin neural network with fundament paramet \n an algorithm combin neural network with the fundament paramet equat ( nnfp ) is propos for make correct for non-linear matrix effect in x-ray fluoresc analysi . in the algorithm , neural network were appli to relat the concentr of compon to both the measur intens and the rel theoret intens calcul by the fundament paramet equat . the nnfp algorithm is compar with the classic theoret correct model , includ the fundament paramet approach , the lachance-trail model , a hyperbol function model and the cola algorithm . for an alloy system with 15 measur element , in most case , the predict error of the nnfp algorithm are lower than those of the fundament paramet approach , the lachance-trail model , the hyperbol function model and the cola algorithm separ . if there are the seriou matrix effect , such as matrix effect among cr , fe and ni , the nnfp algorithm gener decreas predict error as compar with the classic model , except for the case of cr by the fundament paramet approach . 
the main reason whi the nnfp algorithm ha gener a better predict abil than the classic theoret correct model might be that neural network can better calibr the non-linear matrix effect in a complex multivari system","ordered_present_kp":[3,20,40,104,196,317,395,437,505,530,562,586,890,188,902,1241],"keyphrases":["algorithm","neural networks","fundamental parameters","fundamental parameters equations","Fe","x-ray fluorescence analysis","intensities","NNFP algorithm","theoretical correction models","Lachance-Traill model","hyperbolic function model","COLA algorithm","alloy system","Cr","Ni","complex multivariate system","nonlinear matrix effects"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","M"]} {"id":"1868","title":"Estimation of an N-L-N Hammerstein-Wiener model","abstract":"Estimation of a single-input single-output block-oriented model is studied. The model consists of a linear block embedded between two static nonlinear gains. Hence, it is called an N-L-N Hammerstein-Wiener model. First, the model structure is motivated and the disturbance model is discussed. The paper then concentrates on parameter estimation. A relaxation iteration scheme is proposed by making use of a model structure in which the error is bilinear-in-parameters. This leads to a simple algorithm which minimizes the original loss function. The convergence and consistency of the algorithm are studied. In order to reduce the variance error, the obtained linear model is further reduced using frequency weighted model reduction. A simulation study is used to illustrate the method","tok_text":"estim of an n-l-n hammerstein-wien model \n estim of a single-input single-output block-ori model is studi . the model consist of a linear block embed between two static nonlinear gain . henc , it is call an n-l-n hammerstein-wien model . first , the model structur is motiv and the disturb model is discuss . the paper then concentr on paramet estim . a relax iter scheme is propos by make use of a model structur in which the error is bilinear-in-paramet . thi lead to a simpl algorithm which minim the origin loss function . the converg and consist of the algorithm are studi . in order to reduc the varianc error , the obtain linear model is further reduc use frequenc weight model reduct . a simul studi is use to illustr the method","ordered_present_kp":[12,54,131,162,250,282,336,354,531,118,602,663],"keyphrases":["N-L-N Hammerstein-Wiener model","single-input single-output block-oriented model","consistency","linear block","static nonlinear gains","model structure","disturbance model","parameter estimation","relaxation iteration scheme","convergence","variance error","frequency weighted model reduction","bilinear-in-parameters error","nonlinear process"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R","M"]} {"id":"1706","title":"Quantitative analysis of reconstructed 3-D coronary arterial tree and intracoronary devices","abstract":"Traditional quantitative coronary angiography is performed on two-dimensional (2-D) projection views. These views are chosen by the angiographer to minimize vessel overlap and foreshortening. With 2-D projection views that are acquired in this nonstandardized fashion, however, there is no way to know or estimate how much error occurs in the QCA process. Furthermore, coronary arteries possess a curvilinear shape and undergo a cyclical deformation due to their attachment to the myocardium. 
Therefore, it is necessary to obtain three-dimensional (3-D) information to best describe and quantify the dynamic curvilinear nature of the human coronary artery. Using a patient-specific 3-D coronary reconstruction algorithm and routine angiographic images, a new technique is proposed to describe: (1) the curvilinear nature of 3-D coronary arteries and intracoronary devices; (2) the magnitude of the arterial deformation caused by intracoronary devices and due to heart motion; and (3) optimal view(s) with respect to the desired \"pathway\" for delivering intracoronary devices","tok_text":"quantit analysi of reconstruct 3-d coronari arteri tree and intracoronari devic \n tradit quantit coronari angiographi is perform on two-dimension ( 2-d ) project view . these view are chosen by the angiograph to minim vessel overlap and foreshorten . with 2-d project view that are acquir in thi nonstandard fashion , howev , there is no way to know or estim how much error occur in the qca process . furthermor , coronari arteri possess a curvilinear shape and undergo a cyclic deform due to their attach to the myocardium . therefor , it is necessari to obtain three-dimension ( 3-d ) inform to best describ and quantifi the dynam curvilinear natur of the human coronari arteri . use a patient-specif 3-d coronari reconstruct algorithm and routin angiograph imag , a new techniqu is propos to describ : ( 1 ) the curvilinear natur of 3-d coronari arteri and intracoronari devic ; ( 2 ) the magnitud of the arteri deform caus by intracoronari devic and due to heart motion ; and ( 3 ) optim view( ) with respect to the desir \" pathway \" for deliv intracoronari devic","ordered_present_kp":[472,513,688,742,658],"keyphrases":["cyclical deformation","myocardium","human coronary artery","patient-specific 3-D coronary reconstruction algorithm","routine angiographic images","medical diagnostic imaging","dynamic curvilinear nature quantification","arterial deformation magnitude","intracoronary devices delivery pathway"],"prmu":["P","P","P","P","P","M","M","R","M"]} {"id":"1743","title":"Adaptive stabilization of undamped flexible structures","abstract":"In the paper non-identifier-based adaptive stabilization of undamped flexible structures is considered in the case of collocated input and output operators. The systems have poles and zeros on the imaginary axis. In the case where velocity feedback is available, the adaptive stabilizer is constructed by an adaptive PD-controller (proportional plus derivative controller). In the case where only position feedback is available, the adaptive stabilizer is constructed by an adaptive P-controller for the augmented system which consists of the controlled system and a parallel compensator. Numerical examples are given to illustrate the effectiveness of the proposed controllers","tok_text":"adapt stabil of undamp flexibl structur \n in the paper non-identifier-bas adapt stabil of undamp flexibl structur is consid in the case of colloc input and output oper . the system have pole and zero on the imaginari axi . in the case where veloc feedback is avail , the adapt stabil is construct by an adapt pd-control ( proport plu deriv control ) . in the case where onli posit feedback is avail , the adapt stabil is construct by an adapt p-control for the augment system which consist of the control system and a parallel compens . 
numer exampl are given to illustr the effect of the propos control","ordered_present_kp":[0,16,186,207,241,303,322,375,437,461,518],"keyphrases":["adaptive stabilization","undamped flexible structures","poles and zeros","imaginary axis","velocity feedback","adaptive PD-controller","proportional plus derivative controller","position feedback","adaptive P-controller","augmented system","parallel compensator"],"prmu":["P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1855","title":"Distribution software: ROI is king","abstract":"Middle-market accounting software vendors are taking to the open road, by way of souped-up distribution suites that can track product as it wends its way from warehouse floor to customer site. Integration provides efficiencies, and cost savings","tok_text":"distribut softwar : roi is king \n middle-market account softwar vendor are take to the open road , by way of souped-up distribut suit that can track product as it wend it way from warehous floor to custom site . integr provid effici , and cost save","ordered_present_kp":[48,0],"keyphrases":["distribution","accounting software","warehouse management"],"prmu":["P","P","M"]} {"id":"1810","title":"Input-output based pole-placement controller for a class of time-delay systems","abstract":"A controller structure valid for SISO plants involving both internal and external point delays is presented. The control signal is based only on the input and output plant signals. The controller allows finite or infinite spectrum assignment. The most important feature of the proposed controller is that it only involves the use of a class of point-delayed signals. Thus the controller synthesis involves less computational cost than former methods. Since the plant control input is generated by filtering the input and output plant signals, this controller structure is potentially applicable to the adaptive case of unknown plant parameters","tok_text":"input-output base pole-plac control for a class of time-delay system \n a control structur valid for siso plant involv both intern and extern point delay is present . the control signal is base onli on the input and output plant signal . the control allow finit or infinit spectrum assign . the most import featur of the propos control is that it onli involv the use of a class of point-delay signal . thu the control synthesi involv less comput cost than former method . sinc the plant control input is gener by filter the input and output plant signal , thi control structur is potenti applic to the adapt case of unknown plant paramet","ordered_present_kp":[0,51,100,130,264,380,409,438,512],"keyphrases":["input-output based pole-placement controller","time-delay systems","SISO plants","and external point delays","infinite spectrum assignment","point-delayed signals","controller synthesis","computational cost","filtering","I\/O-based pole-placement controller","internal point delays","finite spectrum assignment"],"prmu":["P","P","P","P","P","P","P","P","P","M","R","R"]} {"id":"1482","title":"A parareal in time procedure for the control of partial differential equations","abstract":"We have proposed in a previous note a time discretization for partial differential evolution equation that allows for parallel implementations. This scheme is here reinterpreted as a preconditioning procedure on an algebraic setting of the time discretization. This allows for extending the parallel methodology to the problem of optimal control for partial differential equations. 
We report a first numerical implementation that reveals a large interest","tok_text":"a parar in time procedur for the control of partial differenti equat \n we have propos in a previou note a time discret for partial differenti evolut equat that allow for parallel implement . thi scheme is here reinterpret as a precondit procedur on an algebra set of the time discret . thi allow for extend the parallel methodolog to the problem of optim control for partial differenti equat . we report a first numer implement that reveal a larg interest","ordered_present_kp":[11,142,227,252,106,349],"keyphrases":["time procedure","time discretization","evolution equation","preconditioning procedure","algebraic setting","optimal control","partial differential equation control","Hilbert space"],"prmu":["P","P","P","P","P","P","R","U"]} {"id":"1783","title":"Becoming a chief librarian: an analysis of transition stages in academic library leadership","abstract":"The author explores how the four-part model of transition cycles identified by Nicholson and West (1988) applies to becoming a chief librarian of an academic library. The four stages: preparation, encounter, adjustment, and stabilization, are considered from the micro-, mezzo-, and macrolevels of the organization, as well as for their psychological and social impact on the new job incumbent. An instrument for assessment of transitional success which could be administered in the adjustment or stabilization stage is considered","tok_text":"becom a chief librarian : an analysi of transit stage in academ librari leadership \n the author explor how the four-part model of transit cycl identifi by nicholson and west ( 1988 ) appli to becom a chief librarian of an academ librari . the four stage : prepar , encount , adjust , and stabil , are consid from the micro- , mezzo- , and macrolevel of the organ , as well as for their psycholog and social impact on the new job incumb . an instrument for assess of transit success which could be administ in the adjust or stabil stage is consid","ordered_present_kp":[8,40,57,357,400,425],"keyphrases":["chief librarian","transition stages","academic library leadership","organization","social impact","job","psychological impact","transition cycles model"],"prmu":["P","P","P","P","P","P","R","R"]} {"id":"172","title":"A VMEbus interface for multi-detector trigger and control system","abstract":"MUSE (MUltiplicity SElector) is the trigger and control system of CHIMERA, a 4 pi charged particle detector. Initialization of MUSE can be performed via VMEbus. This paper describes the design of VMEbus interface and functional module in MUSE, and briefly discusses an application of MUSE","tok_text":"a vmebu interfac for multi-detector trigger and control system \n muse ( multipl selector ) is the trigger and control system of chimera , a 4 pi charg particl detector . initi of muse can be perform via vmebu . 
thi paper describ the design of vmebu interfac and function modul in muse , and briefli discuss an applic of muse","ordered_present_kp":[2,65,128,48],"keyphrases":["VMEbus interface","control system","MUSE","CHIMERA","trigger system"],"prmu":["P","P","P","P","R"]} {"id":"1562","title":"Solution of a class of two-dimensional integral equations","abstract":"The two-dimensional integral equation 1\/ pi integral integral \/sub D\/( phi (r, theta )\/R\/sup 2\/)dS=f(r\/sub 0\/, theta \/sub 0\/) defined on a circular disk D: r\/sub 0\/or=5, n is not a multiple of 3 and (h, n)=1, where h is the class number of the field Q( square root (-q)), then the diophantine equation x\/sup 2\/+q\/sup 2k+1\/=y\/sup n\/ has exactly two families of solutions (q, n, k, x, y)","tok_text":"on the diophantin equat x \/ sup 2\/+q \/ sup 2k+1\/=i \/ sup n\/ \n in thi paper it ha been prove that if q is an odd prime , q not=7 ( mod 8) , n is an odd integ > or=5 , n is not a multipl of 3 and ( h , n)=1 , where h is the class number of the file q ( squar root ( -q ) ) , then the diophantin equat x \/ sup 2\/+q \/ sup 2k+1\/=i \/ sup n\/ ha exactli two famili of solut ( q , n , k , x , y )","ordered_present_kp":[7,108,147],"keyphrases":["diophantine equation","odd prime","odd integer","Lucas sequence","primitive divisors"],"prmu":["P","P","P","U","U"]} {"id":"1713","title":"A uniform framework for regulating service access and information release on the Web","abstract":"The widespread use of Internet-based services is increasing the amount of information (such as user profiles) that clients are required to disclose. This information demand is necessary for regulating access to services, and functionally convenient (e.g., to support service customization), but it has raised privacy-related concerns which, if not addressed, may affect the users' disposition to use network services. At the same time, servers need to regulate service access without disclosing entirely the details of their access control policy. There is therefore a pressing need for privacy-aware techniques to regulate access to services open to the network. We propose an approach for regulating service access and information disclosure on the Web. The approach consists of a uniform formal framework to formulate - and reason about - both service access and information disclosure constraints. It also provides a means for parties to communicate their requirements while ensuring that no private information be disclosed and that the communicated requirements are correct with respect to the constraints","tok_text":"a uniform framework for regul servic access and inform releas on the web \n the widespread use of internet-bas servic is increas the amount of inform ( such as user profil ) that client are requir to disclos . thi inform demand is necessari for regul access to servic , and function conveni ( e.g. , to support servic custom ) , but it ha rais privacy-rel concern which , if not address , may affect the user disposit to use network servic . at the same time , server need to regul servic access without disclos entir the detail of their access control polici . there is therefor a press need for privacy-awar techniqu to regul access to servic open to the network . we propos an approach for regul servic access and inform disclosur on the web . the approach consist of a uniform formal framework to formul - and reason about - both servic access and inform disclosur constraint . 
it also provid a mean for parti to commun their requir while ensur that no privat inform be disclos and that the commun requir are correct with respect to the constraint","ordered_present_kp":[48,159,213,537,596,424,716,772,813],"keyphrases":["information release","user profiles","information demand","network services","access control policy","privacy-aware techniques","information disclosure","uniform formal framework","reasoning","service access regulation","WWW","Internet","client server systems"],"prmu":["P","P","P","P","P","P","P","P","P","R","U","U","M"]} {"id":"1925","title":"On the accuracy of polynomial interpolation in Hilbert space with disturbed nodal values of the operator","abstract":"The interpolation accuracy of polynomial operators in a Hilbert space with a measure is estimated when nodal values of these operators are given approximately","tok_text":"on the accuraci of polynomi interpol in hilbert space with disturb nodal valu of the oper \n the interpol accuraci of polynomi oper in a hilbert space with a measur is estim when nodal valu of these oper are given approxim","ordered_present_kp":[19,40,59,117],"keyphrases":["polynomial interpolation","Hilbert space","disturbed nodal values","polynomial operators"],"prmu":["P","P","P","P"]} {"id":"1754","title":"Coordination [crisis management]","abstract":"Communications during a crisis, both internal and external, set the tone during response and carry a message through recovery. The authors describe how to set up a system for information coordination to make sure the right people get the right message, and the organization stays in control","tok_text":"coordin [ crisi manag ] \n commun dure a crisi , both intern and extern , set the tone dure respons and carri a messag through recoveri . the author describ how to set up a system for inform coordin to make sure the right peopl get the right messag , and the organ stay in control","ordered_present_kp":[10,183],"keyphrases":["crisis management","information coordination","communications process"],"prmu":["P","P","M"]} {"id":"1711","title":"Developing a CD-ROM as a teaching and learning tool in food and beverage management: a case study in hospitality education","abstract":"Food and beverage management is the traditional core of hospitality education but, in its laboratory manifestation, has come under increasing pressure in recent years. It is an area that, arguably, presents the greatest challenges in adaptation to contemporary learning technologies but, at the same time, stands to benefit most from the potential of the Web. This paper addresses the design and development of a CD-ROM learning resource for food and beverage. It is a learning resource which is designed to integrate with rather than to replace existing conventional classroom and laboratory learning methods and, thus, compensate for the decline in the resource base faced in food and beverage education in recent years. The paper includes illustrative material drawn from the CD-ROM which demonstrates its use in teaching and learning","tok_text":"develop a cd-rom as a teach and learn tool in food and beverag manag : a case studi in hospit educ \n food and beverag manag is the tradit core of hospit educ but , in it laboratori manifest , ha come under increas pressur in recent year . it is an area that , arguabl , present the greatest challeng in adapt to contemporari learn technolog but , at the same time , stand to benefit most from the potenti of the web . 
thi paper address the design and develop of a cd-rom learn resourc for food and beverag . it is a learn resourc which is design to integr with rather than to replac exist convent classroom and laboratori learn method and , thu , compens for the declin in the resourc base face in food and beverag educ in recent year . the paper includ illustr materi drawn from the cd-rom which demonstr it use in teach and learn","ordered_present_kp":[46,87,10,32],"keyphrases":["CD-ROM","learning tool","food and beverage management","hospitality education","teaching tool"],"prmu":["P","P","P","P","R"]} {"id":"1882","title":"Bandwidth vs. gains design of H\/sub infinity \/ tracking controllers for current-fed induction motors","abstract":"Describes a systematic procedure for designing speed and rotor flux norm tracking H\/sub infinity \/. controllers with unknown load torque disturbances for current-fed induction motors. A new effective design tool is developed to allow selection of the control gains so as to adjust the disturbances' rejection capability of the controllers in the face of the bandwidth requirements of the closed-loop system. Application of the proposed design procedure is demonstrated in a case study, and the results of numerical simulations illustrate the satisfactory performance achievable even in presence of rotor resistance uncertainty","tok_text":"bandwidth vs. gain design of h \/ sub infin \/ track control for current-f induct motor \n describ a systemat procedur for design speed and rotor flux norm track h \/ sub infin \/. control with unknown load torqu disturb for current-f induct motor . a new effect design tool is develop to allow select of the control gain so as to adjust the disturb ' reject capabl of the control in the face of the bandwidth requir of the closed-loop system . applic of the propos design procedur is demonstr in a case studi , and the result of numer simul illustr the satisfactori perform achiev even in presenc of rotor resist uncertainti","ordered_present_kp":[29,63,189,258,395,419],"keyphrases":["H\/sub infinity \/ tracking controllers","current-fed induction motors","unknown load torque disturbances","design tool","bandwidth requirements","closed-loop system","speed controllers","rotor flux norm controllers","disturbances rejection capability","feedback linearization","observers"],"prmu":["P","P","P","P","P","P","R","R","R","U","U"]} {"id":"1548","title":"A second order characteristic finite element scheme for convection-diffusion problems","abstract":"A new characteristic finite element scheme is presented for convection-diffusion problems. It is of second order accuracy in time increment, symmetric, and unconditionally stable. Optimal error estimates are proved in the framework of L\/sup 2\/-theory. Numerical results are presented for two examples, which show the advantage of the scheme","tok_text":"a second order characterist finit element scheme for convection-diffus problem \n a new characterist finit element scheme is present for convection-diffus problem . it is of second order accuraci in time increment , symmetr , and uncondit stabl . optim error estim are prove in the framework of l \/ sup 2\/-theori . 
numer result are present for two exampl , which show the advantag of the scheme","ordered_present_kp":[2,53,173,246],"keyphrases":["second order characteristic finite element scheme","convection-diffusion problems","second order accuracy","optimal error estimates","L\/sup 2\/ -theory"],"prmu":["P","P","P","P","M"]} {"id":"158","title":"Neural and neuro-fuzzy integration in a knowledge-based system for air quality prediction","abstract":"We propose a unified approach for integrating implicit and explicit knowledge in neurosymbolic systems as a combination of neural and neuro-fuzzy modules. In the developed hybrid system, a training data set is used for building neuro-fuzzy modules, and represents implicit domain knowledge. The explicit domain knowledge on the other hand is represented by fuzzy rules, which are directly mapped into equivalent neural structures. The aim of this approach is to improve the abilities of modular neural structures, which are based on incomplete learning data sets, since the knowledge acquired from human experts is taken into account for adapting the general neural architecture. Three methods to combine the explicit and implicit knowledge modules are proposed. The techniques used to extract fuzzy rules from neural implicit knowledge modules are described. These techniques improve the structure and the behavior of the entire system. The proposed methodology has been applied in the field of air quality prediction with very encouraging results. These experiments show that the method is worth further investigation","tok_text":"neural and neuro-fuzzi integr in a knowledge-bas system for air qualiti predict \n we propos a unifi approach for integr implicit and explicit knowledg in neurosymbol system as a combin of neural and neuro-fuzzi modul . in the develop hybrid system , a train data set is use for build neuro-fuzzi modul , and repres implicit domain knowledg . the explicit domain knowledg on the other hand is repres by fuzzi rule , which are directli map into equival neural structur . the aim of thi approach is to improv the abil of modular neural structur , which are base on incomplet learn data set , sinc the knowledg acquir from human expert is taken into account for adapt the gener neural architectur . three method to combin the explicit and implicit knowledg modul are propos . the techniqu use to extract fuzzi rule from neural implicit knowledg modul are describ . these techniqu improv the structur and the behavior of the entir system . the propos methodolog ha been appli in the field of air qualiti predict with veri encourag result . 
these experi show that the method is worth further investig","ordered_present_kp":[11,35,60,154,234,252,402,562,674,1041],"keyphrases":["neuro-fuzzy integration","knowledge-based system","air quality prediction","neurosymbolic systems","hybrid system","training data set","fuzzy rules","incomplete learning","neural architecture","experiments","implicit domain knowledge representation","air pollution"],"prmu":["P","P","P","P","P","P","P","P","P","P","M","M"]} {"id":"1649","title":"Office essentials [stationery suppliers]","abstract":"Make purchasing stationery a relatively simple task through effective planning and management of stock, and identifying the right supplier","tok_text":"offic essenti [ stationeri supplier ] \n make purchas stationeri a rel simpl task through effect plan and manag of stock , and identifi the right supplier","ordered_present_kp":[16,45,96,105],"keyphrases":["stationery suppliers","purchasing","planning","management of stock"],"prmu":["P","P","P","P"]} {"id":"1927","title":"Optimal strategies for a semi-Markovian inventory system","abstract":"Control for a semi-Markovian inventory system is considered. Under general assumptions on system functioning, conditions for existence of an optimal nonrandomized Markovian strategy are found. It is shown that under some additional assumptions on storing conditions for the inventory, the optimal strategy has a threshold (s, S)-frame","tok_text":"optim strategi for a semi-markovian inventori system \n control for a semi-markovian inventori system is consid . under gener assumpt on system function , condit for exist of an optim nonrandom markovian strategi are found . it is shown that under some addit assumpt on store condit for the inventori , the optim strategi ha a threshold ( s , s)-frame","ordered_present_kp":[0,21,136,177,0],"keyphrases":["optimal strategies","optimal strategies","semi-Markovian inventory system","system functioning","optimal nonrandomized Markovian strategy","optimal strategy"],"prmu":["P","P","P","P","P","P"]} {"id":"1631","title":"Recovering lost efficiency of exponentiation algorithms on smart cards","abstract":"At the RSA cryptosystem implementation stage, a major security concern is resistance against so-called side-channel attacks. Solutions are known but they increase the overall complexity by a non-negligible factor (typically, a protected RSA exponentiation is 133% slower). For the first time, protected solutions are proposed that do not penalise the running time of an exponentiation","tok_text":"recov lost effici of exponenti algorithm on smart card \n at the rsa cryptosystem implement stage , a major secur concern is resist against so-cal side-channel attack . solut are known but they increas the overal complex by a non-neglig factor ( typic , a protect rsa exponenti is 133 % slower ) . for the first time , protect solut are propos that do not penalis the run time of an exponenti","ordered_present_kp":[44,21,64,107],"keyphrases":["exponentiation algorithms","smart cards","RSA cryptosystem implementation stage","security","side-channel attack resistance","public-key encryption"],"prmu":["P","P","P","P","R","U"]} {"id":"1674","title":"A column generation approach to delivery planning over time with inhomogeneous service providers and service interval constraints","abstract":"We consider a problem of delivery planning over multiple time periods. Deliveries must be made to customers having nominated demand in each time period. 
Demand must be met in each time period by use of some combination of inhomogeneous service providers. Each service provider has a different delivery capacity, different cost of delivery to each customer, a different utilisation requirement, and different rules governing the spread of deliveries in time. The problem is to plan deliveries so as to minimise overall costs, subject to demand being met and service rules obeyed. A natural integer programming model was found to be intractable, except on problems with loose demand constraints, with gaps between best lower bound and best feasible solution of up to 35.1%, with an average of 15.4% over the test data set. In all but the problem with loosest demand constraints, Cplex 6.5 applied to this formulation failed to find the optimal solution before running out of memory. However a column generation approach improved the lower bound by between 0.6% and 21.9%, with an average of 9.9%, and in all cases found the optimal solution at the root node, without requiring branching","tok_text":"a column gener approach to deliveri plan over time with inhomogen servic provid and servic interv constraint \n we consid a problem of deliveri plan over multipl time period . deliveri must be made to custom have nomin demand in each time period . demand must be met in each time period by use of some combin of inhomogen servic provid . each servic provid ha a differ deliveri capac , differ cost of deliveri to each custom , a differ utilis requir , and differ rule govern the spread of deliveri in time . the problem is to plan deliveri so as to minimis overal cost , subject to demand be met and servic rule obey . a natur integ program model wa found to be intract , except on problem with loos demand constraint , with gap between best lower bound and best feasibl solut of up to 35.1 % , with an averag of 15.4 % over the test data set . in all but the problem with loosest demand constraint , cplex 6.5 appli to thi formul fail to find the optim solut befor run out of memori . howev a column gener approach improv the lower bound by between 0.6 % and 21.9 % , with an averag of 9.9 % , and in all case found the optim solut at the root node , without requir branch","ordered_present_kp":[2,27,56,84,368,741],"keyphrases":["column generation approach","delivery planning over time","inhomogeneous service providers","service interval constraints","delivery capacity","lower bound","transportation"],"prmu":["P","P","P","P","P","P","U"]} {"id":"1588","title":"Contentment management","abstract":"Andersen's William Yarker and Richard Young outline the route to a successful content management strategy","tok_text":"content manag \n andersen 's william yarker and richard young outlin the rout to a success content manag strategi","ordered_present_kp":[90],"keyphrases":["content management strategy","Andersen Consulting"],"prmu":["P","M"]} {"id":"1530","title":"Uniform supersaturated design and its construction","abstract":"Supersaturated designs are factorial designs in which the number of main effects is greater than the number of experimental runs. In this paper, a discrete discrepancy is proposed as a measure of uniformity for supersaturated designs, and a lower bound of this discrepancy is obtained as a benchmark of design uniformity. A construction method for uniform supersaturated designs via resolvable balanced incomplete block designs is also presented along with the investigation of properties of the resulting designs. 
The construction method shows a strong link between these two different kinds of designs","tok_text":"uniform supersatur design and it construct \n supersatur design are factori design in which the number of main effect is greater than the number of experiment run . in thi paper , a discret discrep is propos as a measur of uniform for supersatur design , and a lower bound of thi discrep is obtain as , a benchmark of design uniform . a construct method for uniform supersatur design via resolv balanc incomplet block design is also present along with the investig of properti of the result design . the construct method show a strong link between these two differ kind of design","ordered_present_kp":[0,67,147,181,387],"keyphrases":["uniform supersaturated design","factorial designs","experimental runs","discrete discrepancy","resolvable balanced incomplete block designs"],"prmu":["P","P","P","P","P"]} {"id":"165","title":"Monitoring the news online","abstract":"The author looks at how we can focus on what we want, finding small stories in vast oceans of news. There is no one tool that will scan every news resource available and give alerts on new available materials. Every one has a slightly different focus. Some are paid sources, while many are free. If used wisely, an excellent news monitoring system for a large number of topics can be set up for surprisingly little cost","tok_text":"monitor the news onlin \n the author look at how we can focu on what we want , find small stori in vast ocean of news . there is no one tool that will scan everi news resourc avail and give alert on new avail materi . everi one ha a slightli differ focu . some are paid sourc , while mani are free . if use wise , an excel news monitor system for a larg number of topic can be set up for surprisingli littl cost","ordered_present_kp":[322],"keyphrases":["news monitoring","online news","Internet"],"prmu":["P","R","U"]} {"id":"1468","title":"Developing Web-enhanced learning for information fluency-a liberal arts college's perspective","abstract":"Learning is likely to take a new form in the twenty-first century, and a transformation is already in process. Under the framework of information fluency, efforts are being made at Rollins College to develop a Web-enhanced course that encompasses information literacy, basic computer literacy, and critical thinking skills. Computer-based education can be successful when librarians use technology effectively to enhance their integrated library teaching. In an online learning environment, students choose a time for learning that best suits their needs and motivational levels. They can learn at their own pace, take a nonlinear approach to the subject, and maintain constant communication with instructors and other students. The quality of a technology-facilitated course can be upheld if the educational objectives and methods for achieving those objectives are carefully planned and explored","tok_text":"develop web-enhanc learn for inform fluency-a liber art colleg 's perspect \n learn is like to take a new form in the twenty-first centuri , and a transform is alreadi in process . under the framework of inform fluenci , effort are be made at rollin colleg to develop a web-enhanc cours that encompass inform literaci , basic comput literaci , and critic think skill . computer-bas educ can be success when librarian use technolog effect to enhanc their integr librari teach . in an onlin learn environ , student choos a time for learn that best suit their need and motiv level . 
they can learn at their own pace , take a nonlinear approach to the subject , and maintain constant commun with instructor and other student . the qualiti of a technology-facilit cours can be upheld if the educ object and method for achiev those object are care plan and explor","ordered_present_kp":[8,203,46,301,325,347,368,406,453,482],"keyphrases":["Web-enhanced learning","liberal arts college","information fluency","information literacy","computer literacy","critical thinking skills","computer-based education","librarians","integrated library teaching","online learning"],"prmu":["P","P","P","P","P","P","P","P","P","P"]} {"id":"1794","title":"Well-posed anisotropic diffusion for image denoising","abstract":"A nonlinear iterative smoothing filter based on a second-order partial differential equation is introduced. It smooths out the image according to an anisotropic diffusion process. The approach is based on a smooth approximation of the total variation (TV) functional which overcomes the non-differentiability of the TV functional at the origin. In particular, the authors perform linear smoothing over smooth areas but selective smoothing over candidate edges. By relating the smoothing parameter to the time step, they arrive at a CFL condition which guarantees the causality of the discrete scheme. This allows the adoption of higher time discretisation steps, while ensuring the absence of artefacts deriving from the non-smooth behaviour of the TV functional at the origin. In particular, it is shown that the proposed approach avoids the typical staircase effects in smooth areas which occur in the standard time-marching TV scheme","tok_text":"well-pos anisotrop diffus for imag denois \n a nonlinear iter smooth filter base on a second-ord partial differenti equat is introduc . it smooth out the imag accord to an anisotrop diffus process . the approach is base on a smooth approxim of the total variat ( tv ) function which overcom the non-differenti of the tv function at the origin . in particular , the author perform linear smooth over smooth area but select smooth over candid edg . by relat the smooth paramet to the time step , they arriv at a cfl condit which guarante the causal of the discret scheme . thi allow the adopt of higher time discretis step , while ensur the absenc of artefact deriv from the non-smooth behaviour of the tv function at the origin . in particular , it is shown that the propos approach avoid the typic staircas effect in smooth area which occur in the standard time-march tv scheme","ordered_present_kp":[30,0,46,85,379,414,509,553,539,593],"keyphrases":["well-posed anisotropic diffusion","image denoising","nonlinear iterative smoothing filter","second-order partial differential equation","linear smoothing","selective smoothing","CFL condition","causality","discrete scheme","higher time discretisation steps","total variation functional","image restoration problem","random Gaussian noise"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","M","U"]} {"id":"1769","title":"Transformation rules and strategies for functional-logic programs","abstract":"This paper abstracts the contents of a PhD dissertation entitled 'Transformation Rules and Strategies for Functional-Logic Programs' which has been recently defended. These techniques are based on fold\/unfold transformations and they can be used to optimize integrated (functional-logic) programs for a wide class of applications. 
Experimental results show that typical examples in the field of artificial intelligence are successfully enhanced by our transformation system SYNTH. The thesis presents the first approach of these methods for declarative languages that integrate the best features from functional and logic programming","tok_text":"transform rule and strategi for functional-log program \n thi paper abstract the content of a phd dissert entitl ' transform rule and strategi for functional-log program ' which ha been recent defend . these techniqu are base on fold \/ unfold transform and they can be use to optim integr ( functional-log ) program for a wide class of applic . experiment result show that typic exampl in the field of artifici intellig are success enhanc by our transform system synth . the thesi present the first approach of these method for declar languag that integr the best featur from function and logic program","ordered_present_kp":[32,588,344,401,462,527],"keyphrases":["functional-logic programs","experimental results","artificial intelligence","SYNTH","declarative languages","logic programming","program transformation rules","functional programming","fold-unfold transformations"],"prmu":["P","P","P","P","P","P","R","R","M"]} {"id":"1807","title":"Regional flux target with minimum energy","abstract":"An extension of a gradient controllability problem to the case where the target subregion is a part of the boundary of a parabolic system domain is discussed. A definition and some properties adapted to this case are presented. The focus is on the characterisation of the control achieving a regional boundary gradient target with minimum energy. An approach is developed that leads to a numerical algorithm for the computation of optimal control. Numerical illustrations show the efficiency of the approach and lead to conjectures","tok_text":"region flux target with minimum energi \n an extens of a gradient control problem to the case where the target subregion is a part of the boundari of a parabol system domain is discuss . a definit and some properti adapt to thi case are present . the focu is on the characteris of the control achiev a region boundari gradient target with minimum energi . an approach is develop that lead to a numer algorithm for the comput of optim control . numer illustr show the effici of the approach and lead to conjectur","ordered_present_kp":[0,24,56,103,301,393,427],"keyphrases":["regional flux target","minimum energy","gradient controllability problem","target subregion","regional boundary gradient target","numerical algorithm","optimal control","parabolic system domain boundary"],"prmu":["P","P","P","P","P","P","P","R"]} {"id":"1495","title":"Laptops zip to 2 GHz-plus","abstract":"Intel's Pentium 4-M processor has reached the coveted 2-GHz mark, and speed-hungry mobile users will be tempted to buy a laptop with the chip. However, while our exclusive tests found 2-GHz P4-M notebooks among the fastest units we've tested, the new models failed to make dramatic gains compared with those based on Intel's 1.8-GHz mobile chip. Since 2-GHz notebooks carry a hefty price premium, buyers seeking both good performance and a good price might prefer a 1.8-GHz unit instead","tok_text":"laptop zip to 2 ghz-plu \n intel 's pentium 4-m processor ha reach the covet 2-ghz mark , and speed-hungri mobil user will be tempt to buy a laptop with the chip . 
howev , while our exclus test found 2-ghz p4-m notebook among the fastest unit we 've test , the new model fail to make dramat gain compar with those base on intel 's 1.8-ghz mobil chip . sinc 2-ghz notebook carri a hefti price premium , buyer seek both good perform and a good price might prefer a 1.8-ghz unit instead","ordered_present_kp":[106,0,210],"keyphrases":["laptop","mobile","notebooks","Intel Pentium 4-M processor","2 GHz"],"prmu":["P","P","P","R","M"]} {"id":"1842","title":"The role of B2B engines in B2B integration architectures","abstract":"Semantic B2B integration architectures must enable enterprises to communicate standards-based B2B events like purchase orders with any potential trading partner. This requires not only back end application integration capabilities to integrate with e.g. enterprise resource planning (ERP) systems as the company-internal source and destination of B2B events, but also a capability to implement every necessary B2B protocol like electronic data interchange (EDI), RosettaNet as well as more generic capabilities like Web services (WS). This paper shows the placement and functionality of B2B engines in semantic B2B integration architectures that implement a generic framework for modeling and executing any B2B protocol. A detailed discussion shows how a B2B engine can provide the necessary abstractions to implement any standard-based B2B protocol or any trading partner specific specialization","tok_text":"the role of b2b engin in b2b integr architectur \n semant b2b integr architectur must enabl enterpris to commun standards-bas b2b event like purchas order with ani potenti trade partner . thi requir not onli back end applic integr capabl to integr with e.g. enterpris resourc plan ( erp ) system as the company-intern sourc and destin of b2b event , but also a capabl to implement everi necessari b2b protocol like electron data interchang ( edi ) , rosettanet as well as more gener capabl like web servic ( ws ) . thi paper show the placement and function of b2b engin in semant b2b integr architectur that implement a gener framework for model and execut ani b2b protocol . a detail discuss show how a b2b engin can provid the necessari abstract to implement ani standard-bas b2b protocol or ani trade partner specif special","ordered_present_kp":[12,50,140,171,441,449,494,639],"keyphrases":["B2B engines","semantic B2B integration architectures","purchase orders","trading partner","EDI","RosettaNet","Web services","modeling","standards-based B2B event communication","ERP systems"],"prmu":["P","P","P","P","P","P","P","P","R","R"]} {"id":"1514","title":"Universal parametrization in constructing smoothly-connected B-spline surfaces","abstract":"In this paper, we explore the feasibility of universal parametrization in generating B-spline surfaces, which was proposed recently in the literature (Lim, 1999). We present an interesting property of the new parametrization that it guarantees Go continuity on B-spline surfaces when several independently constructed patches are put together without imposing any constraints. Also, a simple blending method of patchwork is proposed to construct C\/sup n-1\/ surfaces, where overlapping control nets are utilized. It takes into account the semi-localness property of universal parametrization. It effectively helps us construct very natural looking B-spline surfaces while keeping the deviation from given data points very low. 
Experimental results are shown with several sets of surface data points","tok_text":"univers parametr in construct smoothly-connect b-spline surfac \n in thi paper , we explor the feasibl of univers parametr in gener b-spline surfac , which wa propos recent in the literatur ( lim , 1999 ) . we present an interest properti of the new parametr that it guarante go continu on b-spline surfac when sever independ construct patch are put togeth without impos ani constraint . also , a simpl blend method of patchwork is propos to construct c \/ sup n-1\/ surfac , where overlap control net are util . it take into account the semi-loc properti of univers parametr . it effect help us construct veri natur look b-spline surfac while keep the deviat from given data point veri low . experiment result are shown with sever set of surfac data point","ordered_present_kp":[0,335,451,479,535,736],"keyphrases":["universal parametrization","patches","C\/sup n-1\/ surfaces","overlapping control nets","semi-localness property","surface data points","smoothly-connected B-spline surface generation","G\/sup 0\/ continuity","patchwork blending method"],"prmu":["P","P","P","P","P","P","R","M","R"]} {"id":"1551","title":"The numerical solution of an evolution problem of second order in time on a closed smooth boundary","abstract":"We consider an initial value problem for the second-order differential equation with a Dirichlet-to-Neumann operator coefficient. For the numerical solution we carry out semi-discretization by the Laguerre transformation with respect to the time variable. Then an infinite system of the stationary operator equations is obtained. By potential theory, the operator equations are reduced to boundary integral equations of the second kind with logarithmic or hypersingular kernels. The full discretization is realized by Nystrom's method which is based on the trigonometric quadrature rules. Numerical tests confirm the ability of the method to solve these types of nonstationary problems","tok_text":"the numer solut of an evolut problem of second order in time on a close smooth boundari \n we consid an initi valu problem for the second-ord differenti equat with a dirichlet-to-neumann oper coeffici . for the numer solut we carri out semi-discret by the laguerr transform with respect to the time variabl . then an infinit system of the stationari oper equat is obtain . by potenti theori , the oper equat are reduc to boundari integr equat of the second kind with logarithm or hypersingular kernel . the full discret is realiz by nystrom 's method which is base on the trigonometr quadratur rule . numer test confirm the abil of the method to solv these type of nonstationari problem","ordered_present_kp":[103,130,22,66,255,479,420,338],"keyphrases":["evolution problem","closed smooth boundary","initial value problem","second-order differential equation","Laguerre transformation","stationary operator equations","boundary integral equations","hypersingular kernels"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"1615","title":"Laguerre approximation of fractional systems","abstract":"Systems characterised by fractional power poles can be called fractional systems. Here, Laguerre orthogonal polynomials are employed to approximate fractional systems by minimum phase, reduced order, rational transfer functions. 
Both the time and the frequency-domain analysis exhibit the accuracy of the approximation","tok_text":"laguerr approxim of fraction system \n system characteris by fraction power pole can be call fraction system . here , laguerr orthogon polynomi are employ to approxim fraction system by minimum phase , reduc order , ration transfer function . both the time and the frequency-domain analysi exhibit the accuraci of the approxim","ordered_present_kp":[0,20,60,125,185,201,215,264],"keyphrases":["Laguerre approximation","fractional systems","fractional power poles","orthogonal polynomials","minimum phase","reduced order","rational transfer functions","frequency-domain analysis","robust controllers","closed-loop system","time-domain analysis"],"prmu":["P","P","P","P","P","P","P","P","U","M","M"]} {"id":"1650","title":"Low to mid-speed copiers [buyer's guide]","abstract":"The low to mid-speed copier market is being transformed by the almost universal adoption of digital solutions. The days of the analogue copier are numbered as the remaining vendors plan to withdraw from this sector by 2005. Reflecting the growing market for digital, vendors are reducing prices, making a digital solution much more affordable. The battle for the copier market is intense, and the popularity of the multifunctional device is going to transform the office equipment market. As total cost of ownership becomes increasingly important and as budgets are squeezed, the most cost-effective solutions are those that will survive this shake-down","tok_text":"low to mid-spe copier [ buyer 's guid ] \n the low to mid-spe copier market is be transform by the almost univers adopt of digit solut . the day of the analogu copier are number as the remain vendor plan to withdraw from thi sector by 2005 . reflect the grow market for digit , vendor are reduc price , make a digit solut much more afford . the battl for the copier market is intens , and the popular of the multifunct devic is go to transform the offic equip market . as total cost of ownership becom increasingli import and as budget are squeez , the most cost-effect solut are those that will surviv thi shake-down","ordered_present_kp":[46,471],"keyphrases":["low to mid-speed copier market","total cost of ownership"],"prmu":["P","P"]} {"id":"1823","title":"Single-phase shunt active power filter with harmonic detection","abstract":"An advanced active power filter for the compensation of instantaneous harmonic current components in nonlinear current loads is presented. A signal processing technique using an adaptive neural network algorithm is applied for the detection of harmonic components generated by nonlinear current loads and it can efficiently determine the instantaneous harmonic components in real time. The validity of this active filtering processing system to compensate current harmonics is substantiated by simulation results","tok_text":"single-phas shunt activ power filter with harmon detect \n an advanc activ power filter for the compens of instantan harmon current compon in nonlinear current load is present . a signal process techniqu use an adapt neural network algorithm is appli for the detect of harmon compon gener by nonlinear current load and it can effici determin the instantan harmon compon in real time . 
the valid of thi activ filter process system to compens current harmon is substanti by simul result","ordered_present_kp":[0,42,141,179,210,345,471],"keyphrases":["single-phase shunt active power filter","harmonic detection","nonlinear current loads","signal processing technique","adaptive neural network algorithm","instantaneous harmonic components","simulation","instantaneous harmonic current components compensation"],"prmu":["P","P","P","P","P","P","P","R"]} {"id":"1866","title":"Tracking with sensor failures","abstract":"Studies the reliability with sensor failures of the asymptotic tracking problem for linear time invariant systems using the factorization approach. The plant is two-output and the compensator is two-degree-of-freedom. Necessary and sufficient conditions are presented for the general problem and a simple solution is given for problems with stable plants","tok_text":"track with sensor failur \n studi the reliabl with sensor failur of the asymptot track problem for linear time invari system use the factor approach . the plant is two-output and the compens is two-degree-of-freedom . necessari and suffici condit are present for the gener problem and a simpl solut is given for problem with stabl plant","ordered_present_kp":[11,37,71,98,132,217],"keyphrases":["sensor failures","reliability","asymptotic tracking problem","linear time invariant systems","factorization approach","necessary and sufficient conditions","two-output plant","two-degree-of-freedom compensator"],"prmu":["P","P","P","P","P","P","R","R"]} {"id":"1708","title":"A study of hospitality and tourism information technology education and industrial applications","abstract":"The purpose of this study was to examine the subject relevance of information technology (IT) in hospitality and tourism management programs with skills deployed in the workplace. This study aimed at investigating graduates' transition from education to employment, and to determine how well they appear to be equipped to meet the needs of the hospitality and tourism industry. One hundred and seventeen graduates responded to a mail survey. These graduates rated the importance of IT skills in the workplace, the level of IT teaching in hotel and tourism management programs, and the self-competence level in IT. This study concluded that a gap exists between the IT skills required at work and those acquired at university","tok_text":"a studi of hospit and tourism inform technolog educ and industri applic \n the purpos of thi studi wa to examin the subject relev of inform technolog ( it ) in hospit and tourism manag program with skill deploy in the workplac . thi studi aim at investig graduat ' transit from educ to employ , and to determin how well they appear to be equip to meet the need of the hospit and tourism industri . one hundr and seventeen graduat respond to a mail survey . these graduat rate the import of it skill in the workplac , the level of it teach in hotel and tourism manag program , and the self-compet level in it . 
thi studi conclud that a gap exist between the it skill requir at work and those acquir at univers","ordered_present_kp":[159,47,285,378,442,254,489,700,529],"keyphrases":["education","hospitality and tourism management programs","graduates","employment","tourism industry","mail survey","IT skills","IT teaching","university","hospitality industry"],"prmu":["P","P","P","P","P","P","P","P","P","R"]} {"id":"1471","title":"E-commerce-resources for doing business on the Internet","abstract":"There are many different types of e-commerce depending upon who or what is selling and who or what is buying. In addition, e-commerce is more than an exchange of funds and goods or services, it encompasses an entire infrastructure of services, computer hardware and software products, technologies, and communications formats. The paper discusses e-commerce terminology, types and information resources, including books and Web sites","tok_text":"e-commerce-resourc for do busi on the internet \n there are mani differ type of e-commerc depend upon who or what is sell and who or what is buy . in addit , e-commerc is more than an exchang of fund and good or servic , it encompass an entir infrastructur of servic , comput hardwar and softwar product , technolog , and commun format . the paper discuss e-commerc terminolog , type and inform resourc , includ book and web site","ordered_present_kp":[26,38,0,365,387,411,420],"keyphrases":["e-commerce","business","Internet","terminology","information resources","books","Web sites"],"prmu":["P","P","P","P","P","P","P"]} {"id":"1770","title":"New developments in inductive learning","abstract":"Any intelligent system, whether natural or artificial, must have three characteristics: knowledge, reasoning, and learning. Artificial intelligence (AI) studies these three aspects in artificial systems. Briefly, we could say that knowledge refers to the system's world model, and reasoning to the manipulation of this knowledge. Learning is slightly more complex; the system interacts with the world and as a consequence it builds onto and modifies its knowledge. This process of self-building and self-modifying is known as learning. This thesis is set within the field of artificial intelligence and focuses on learning. More specifically, it deals with the inductive learning of decision trees","tok_text":"new develop in induct learn \n ani intellig system , whether natur or artifici , must have three characterist : knowledg , reason , and learn . artifici intellig ( ai ) studi these three aspect in artifici system . briefli , we could say that knowledg refer to the system 's world model , and reason to the manipul of thi knowledg . learn is slightli more complex ; the system interact with the world and as a consequ it build onto and modifi it knowledg . thi process of self-build and self-modifi is known as learn . thi thesi is set within the field of artifici intellig and focus on learn . more specif , it deal with the induct learn of decis tree","ordered_present_kp":[15,0,34,111,122,143,641],"keyphrases":["new developments","inductive learning","intelligent system","knowledge","reasoning","artificial intelligence","decision trees"],"prmu":["P","P","P","P","P","P","P"]} {"id":"1735","title":"Mid-market accounting systems","abstract":"Welcome to our fourth annual survey of accounting systems and enterprise resource planning (ERP) systems. Last September, we concentrated on financial and distribution systems for medium-sized businesses (mid market) and included 22 products in our charts. 
This year, we extended the products to include manufacturing and added 34 products to the list","tok_text":"mid-market account system \n welcom to our fourth annual survey of account system and enterpris resourc plan ( erp ) system . last septemb , we concentr on financi and distribut system for medium-s busi ( mid market ) and includ 22 product in our chart . thi year , we extend the product to includ manufactur and ad 34 product to the list","ordered_present_kp":[0,56,85,297],"keyphrases":["mid-market accounting systems","survey","enterprise resource planning","manufacturing"],"prmu":["P","P","P","P"]} {"id":"181","title":"Electromagnetics computations using the MPI parallel implementation of the steepest descent fast multipole method (SDFMM)","abstract":"The computational solution of large-scale linear systems of equations necessitates the use of fast algorithms but is also greatly enhanced by employing parallelization techniques. The objective of this work is to demonstrate the speedup achieved by the MPI (message passing interface) parallel implementation of the steepest descent fast multipole method (SDFMM). Although this algorithm has already been optimized to take advantage of the structure of the physics of scattering problems, there is still the opportunity to speed up the calculation by dividing tasks into components using multiple processors and solve them in parallel. The SDFMM has three bottlenecks ordered as (1) filling the sparse impedance matrix associated with the near-field method of moments interactions (MoM), (2) the matrix vector multiplications associated with this sparse matrix and (3) the far field interactions associated with the fast multipole method. The parallel implementation task is accomplished using a thirty-one node Intel Pentium Beowulf cluster and is also validated on a 4-processor Alpha workstation. The Beowulf cluster consists of thirty-one nodes of 350 MHz Intel Pentium IIs with 256 MB of RAM and one node of a 4*450 MHz Intel Pentium II Xeon shared memory processor with 2 GB of RAM with all nodes connected to a 100 BaseTX Ethernet network. The Alpha workstation has a maximum of four 667 MHz processors. Our numerical results show significant linear speedup in filling the sparse impedance matrix. Using the 32-processors on the Beowulf cluster lead to a 7.2 overall speedup while a 2.5 overall speedup is gained using the 4-processors on the Alpha workstation","tok_text":"electromagnet comput use the mpi parallel implement of the steepest descent fast multipol method ( sdfmm ) \n the comput solut of large-scal linear system of equat necessit the use of fast algorithm but is also greatli enhanc by employ parallel techniqu . the object of thi work is to demonstr the speedup achiev by the mpi ( messag pass interfac ) parallel implement of the steepest descent fast multipol method ( sdfmm ) . although thi algorithm ha alreadi been optim to take advantag of the structur of the physic of scatter problem , there is still the opportun to speed up the calcul by divid task into compon use multipl processor and solv them in parallel . the sdfmm ha three bottleneck order as ( 1 ) fill the spars imped matrix associ with the near-field method of moment interact ( mom ) , ( 2 ) the matrix vector multipl associ with thi spars matrix and ( 3 ) the far field interact associ with the fast multipol method . the parallel implement task is accomplish use a thirty-on node intel pentium beowulf cluster and is also valid on a 4-processor alpha workstat . 
the beowulf cluster consist of thirty-on node of 350 mhz intel pentium ii with 256 mb of ram and one node of a 4 * 450 mhz intel pentium ii xeon share memori processor with 2 gb of ram with all node connect to a 100 basetx ethernet network . the alpha workstat ha a maximum of four 667 mhz processor . our numer result show signific linear speedup in fill the spars imped matrix . use the 32-processor on the beowulf cluster lead to a 7.2 overal speedup while a 2.5 overal speedup is gain use the 4-processor on the alpha workstat","ordered_present_kp":[0,29,59,129,183,325,509,618,718,764,519,810,996,1049,1135,1167,1218,1290,1127,1193,1360],"keyphrases":["electromagnetics computations","MPI parallel implementation","steepest descent fast multipole method","large-scale linear systems","fast algorithms","message passing interface","physics","scattering problems","multiple processors","sparse impedance matrix","method of moments","matrix vector multiplications","Intel Pentium Beowulf cluster","4-processor Alpha workstation","350 MHz","Intel Pentium II","RAM","450 MHz","Xeon shared memory processor","100 BaseTX Ethernet network","667 MHz","near-field MoM","scattered electric field","scattered magnetic field","256 MByte","2 GByte"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","M","M","M","M"]} {"id":"1903","title":"The BLISS programming language: a history","abstract":"The BLISS programming language was invented by William A. Wulf and others at Carnegie-Mellon University in 1969, originally for the DEC PDP-10. BLISS-10 caught the interest of Ronald F. Brender of DEC (Digital Equipment Corporation). After several years of collaboration, including the creation of BLISS-11 for the PDP-11, BLISS was adopted as DEC's implementation language for use on its new line of VAX computers in 1975. DEC developed a completely new generation of BLISSs for the VAX, PDP-10 and PDP-11, which became widely used at DEC during the 1970s and 1980s. With the creation of the Alpha architecture in the early 1990s, BLISS was extended again, in both 32- and 64-bit flavors. BLISS support for the Intel IA-32 architecture was introduced in 1995 and IA-64 support is now in progress. BLISS has a number of unusual characteristics: it is typeless, requires use of an explicit contents of operator (written as a period or 'dot'), takes an algorithmic approach to data structure definition, has no goto, is an expression language, and has an unusually rich compile-time language. This paper reviews the evolution and use of BLISS over its three decade lifetime. Emphasis is on how the language evolved to facilitate portable programming while retaining its initial highly machine-specific character. Finally, the success of its characteristics are assessed","tok_text":"the bliss program languag : a histori \n the bliss program languag wa invent by william a. wulf and other at carnegie-mellon univers in 1969 , origin for the dec pdp-10 . bliss-10 caught the interest of ronald f. brender of dec ( digit equip corpor ) . after sever year of collabor , includ the creation of bliss-11 for the pdp-11 , bliss wa adopt as dec 's implement languag for use on it new line of vax comput in 1975 . dec develop a complet new gener of blisss for the vax , pdp-10 and pdp-11 , which becam wide use at dec dure the 1970 and 1980 . with the creation of the alpha architectur in the earli 1990 , bliss wa extend again , in both 32- and 64-bit flavor . 
bliss support for the intel ia-32 architectur wa introduc in 1995 and ia-64 support is now in progress . bliss ha a number of unusu characterist : it is typeless , requir use of an explicit content of oper ( written as a period or ' dot ' ) , take an algorithm approach to data structur definit , ha no goto , is an express languag , and ha an unusu rich compile-tim languag . thi paper review the evolut and use of bliss over it three decad lifetim . emphasi is on how the languag evolv to facilit portabl program while retain it initi highli machine-specif charact . final , the success of it characterist are assess","ordered_present_kp":[4,1169,943,1025],"keyphrases":["BLISS programming language","data structure definition","compile-time language","portable programming","machine-oriented language","system implementation language"],"prmu":["P","P","P","P","M","M"]} {"id":"1591","title":"Quadratic interpolation on spheres","abstract":"Riemannian quadratics are C\/sup 1\/ curves on Riemannian manifolds, obtained by performing the quadratic recursive deCastlejeau algorithm in a Riemannian setting. They are of interest for interpolation problems in Riemannian manifolds, such as trajectory-planning for rigid body motion. Some interpolation properties of Riemannian quadratics are analysed when the ambient manifold is a sphere or projective space, with the usual Riemannian metrics","tok_text":"quadrat interpol on sphere \n riemannian quadrat are c \/ sup 1\/ curv on riemannian manifold , obtain by perform the quadrat recurs decastlejeau algorithm in a riemannian set . they are of interest for interpol problem in riemannian manifold , such as trajectory-plan for rigid bodi motion . some interpol properti of riemannian quadrat are analys when the ambient manifold is a sphere or project space , with the usual riemannian metric","ordered_present_kp":[0,71,250,270,355],"keyphrases":["quadratic interpolation","Riemannian manifolds","trajectory-planning","rigid body motion","ambient manifold","corner-cutting","parallel translation","approximation theory"],"prmu":["P","P","P","P","P","U","U","U"]} {"id":"1628","title":"Quasi-Newton algorithm for adaptive minor component extraction","abstract":"An adaptive quasi-Newton algorithm is first developed to extract a single minor component corresponding to the smallest eigenvalue of a stationary sample covariance matrix. A deflation technique instead of the commonly used inflation method is then applied to extract the higher-order minor components. The algorithm enjoys the advantage of having a simpler computational complexity and a highly modular and parallel structure for efficient implementation. Simulation results are given to demonstrate the effectiveness of the proposed algorithm for extracting multiple minor components adaptively","tok_text":"quasi-newton algorithm for adapt minor compon extract \n an adapt quasi-newton algorithm is first develop to extract a singl minor compon correspond to the smallest eigenvalu of a stationari sampl covari matrix . a deflat techniqu instead of the commonli use inflat method is then appli to extract the higher-ord minor compon . the algorithm enjoy the advantag of have a simpler comput complex and a highli modular and parallel structur for effici implement . 
simul result are given to demonstr the effect of the propos algorithm for extract multipl minor compon adapt","ordered_present_kp":[0,27,164,179,214,301,378,418,459],"keyphrases":["quasi-Newton algorithm","adaptive minor component extraction","eigenvalue","stationary sample covariance matrix","deflation technique","higher-order minor components","computational complexity","parallel structure","simulation results","modular structure","adaptive estimation","DOA estimation","ROOT-MUSIC estimator"],"prmu":["P","P","P","P","P","P","P","P","P","R","M","U","U"]} {"id":"1529","title":"Quantized-State Systems: A DEVS-approach for continuous system simulation","abstract":"A new class of dynamical systems, Quantized State Systems or QSS, is introduced in this paper. QSS are continuous time systems where the input trajectories are piecewise constant functions and the state variable trajectories - being themselves piecewise linear functions - are converted into piecewise constant functions via a quantization function equipped with hysteresis. It is shown that QSS can be exactly represented and simulated by a discrete event model, within the framework of the DEVS-approach. Further, it is shown that QSS can be used to approximate continuous systems, thus allowing their discrete-event simulation in opposition to the classical discrete-time simulation. It is also shown that in an approximating QSS, some stability properties of the original system are conserved and the solutions of the QSS go to the solutions of the original system when the quantization goes to zero","tok_text":"quantized-st system : a devs-approach for continu system simul \n a new class of dynam system , quantiz state system or qss , is introduc in thi paper . qss are continu time system where the input trajectori are piecewis constant function and the state variabl trajectori - be themselv piecewis linear function - are convert into piecewis constant function via a quantiz function equip with hysteresi . it is shown that qss can be exactli repres and simul by a discret event model , within the framework of the devs-approach . further , it is shown that qss can be use to approxim continu system , thu allow their discrete-ev simul in opposit to the classic discrete-tim simul . it is also shown that in an approxim qss , some stabil properti of the origin system are conserv and the solut of the qss go to the solut of the origin system when the quantiz goe to zero","ordered_present_kp":[80,95,160,211,460,613],"keyphrases":["dynamical systems","Quantized State Systems","continuous time systems","piecewise constant functions","discrete event model","discrete-event simulation"],"prmu":["P","P","P","P","P","P"]} {"id":"1827","title":"Gossip is synteny: Incomplete gossip and the syntenic distance between genomes","abstract":"The syntenic distance between two genomes is given by the minimum number of fusions, fissions, and translocations required to transform one into the other, ignoring the order of genes within chromosomes. Computing this distance is NP-hard. In the present work, we give a tight connection between syntenic distance and the incomplete gossip problem, a novel generalization of the classical gossip problem. In this problem, there are n gossipers, each with a unique piece of initial information; they communicate by phone calls in which the two participants exchange all their information. The goal is to minimize the total number of phone calls necessary to inform each gossiper of his set of relevant gossip which he desires to learn. 
As an application of the connection between syntenic distance and incomplete gossip, we derive an O(2\/sup O(n log n)\/) algorithm to exactly compute the syntenic distance between two genomes with at most n chromosomes each. Our algorithm requires O(n\/sup 2\/+2\/sup O(d log d)\/) time when this distance is d, improving the O(n\/sup 2\/+2(O(d\/\/sup 2\/))) running time of the best previous exact algorithm","tok_text":"gossip is synteni : incomplet gossip and the synten distanc between genom \n the synten distanc between two genom is given by the minimum number of fusion , fission , and transloc requir to transform one into the other , ignor the order of gene within chromosom . comput thi distanc is np-hard . in the present work , we give a tight connect between synten distanc and the incomplet gossip problem , a novel gener of the classic gossip problem . in thi problem , there are n gossip , each with a uniqu piec of initi inform ; they commun by phone call in which the two particip exchang all their inform . the goal is to minim the total number of phone call necessari to inform each gossip of hi set of relev gossip which he desir to learn . as an applic of the connect between synten distanc and incomplet gossip , we deriv an o(2 \/ sup o(n log n)\/ ) algorithm to exactli comput the synten distanc between two genom with at most n chromosom each . our algorithm requir o(n \/ sup 2\/+2 \/ sup o(d log d)\/ ) time when thi distanc is d , improv the o(n \/ sup 2\/+2(o(d\/\/sup 2\/ ) ) ) run time of the best previou exact algorithm","ordered_present_kp":[45,68,285,372,1075,251],"keyphrases":["syntenic distance","genomes","chromosomes","NP-hard","incomplete gossip problem","running time","comparative genomics"],"prmu":["P","P","P","P","P","P","M"]} {"id":"1862","title":"Global comparison of stages of growth based on critical success factors","abstract":"With increasing globalization of business, the management of IT in international organizations is faced with the complex task of dealing with the difference between local and international IT needs. This study evaluates, and compares, the level of IT maturity and the critical success factors (CSFs) in selected geographic regions, namely, Norway, Australia\/New Zealand, North America, Europe, Asia\/Pacific, and India. The results show that significant differences in the IT management needs in these geographic regions exist, and that the IT management operating in these regions must balance the multiple critical success factors for achieving an optimal local-global mix for business success","tok_text":"global comparison of stage of growth base on critic success factor \n with increas global of busi , the manag of it in intern organ is face with the complex task of deal with the differ between local and intern it need . thi studi evalu , and compar , the level of it matur and the critic success factor ( csf ) in select geograph region , name , norway , australia \/ new zealand , north america , europ , asia \/ pacif , and india . 
the result show that signific differ in the it manag need in these geograph region exist , and that the it manag oper in these region must balanc the multipl critic success factor for achiev an optim local-glob mix for busi success","ordered_present_kp":[476,203,264,45,346,355,367,381,397,405,424,626,651],"keyphrases":["critical success factors","international IT needs","IT maturity","Norway","Australia","New Zealand","North America","Europe","Asia\/Pacific","India","IT management","optimal local-global mix","business success","business globalization","local IT needs"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","R","R"]} {"id":"1749","title":"Advanced aerostatic stability analysis of cable-stayed bridges using finite-element method","abstract":"Based on the concept of limit point instability, an advanced nonlinear finite-element method that can be used to analyze the aerostatic stability of cable-stayed bridges is proposed. Both geometric nonlinearity and three components of wind loads are considered in this method. The example bridge is the second Santou Bay cable-stayed bridge with a main span length of 518 m built in China. Aerostatic stability of the example bridge is investigated using linear and proposed methods. The effect of pitch moment coefficient on the aerostatic stability of the bridge has been studied. The results show that the aerostatic instability analyses of cable-stayed bridges based on the linear method considerably overestimate the wind-resisting capacity of cable-stayed bridges. The proposed method is highly accurate and efficient. Pitch moment coefficient has a major effect on the aerostatic stability of cable-stayed bridges. Finally, the aerostatic failure mechanism of cable-stayed bridges is explained by tracing the aerostatic instability path","tok_text":"advanc aerostat stabil analysi of cable-stay bridg use finite-el method \n base on the concept of limit point instabl , an advanc nonlinear finite-el method that can be use to analyz the aerostat stabil of cable-stay bridg is propos . both geometr nonlinear and three compon of wind load are consid in thi method . the exampl bridg is the second santou bay cable-stay bridg with a main span length of 518 m built in china . aerostat stabil of the exampl bridg is investig use linear and propos method . the effect of pitch moment coeffici on the aerostat stabil of the bridg ha been studi . the result show that the aerostat instabl analys of cable-stay bridg base on the linear method consider overestim the wind-resist capac of cable-stay bridg . the propos method is highli accur and effici . pitch moment coeffici ha a major effect on the aerostat stabil of cable-stay bridg . final , the aerostat failur mechan of cable-stay bridg is explain by trace the aerostat instabl path","ordered_present_kp":[97,0,34,239,277,345,415,516,892],"keyphrases":["advanced aerostatic stability analysis","cable-stayed bridges","limit point instability","geometric nonlinearity","wind loads","Santou Bay cable-stayed bridge","China","pitch moment coefficient","aerostatic failure mechanism","advanced nonlinear finite element method"],"prmu":["P","P","P","P","P","P","P","P","P","M"]} {"id":"1611","title":"Data mining business intelligence for competitive advantage","abstract":"Organizations have lately realized that just processing transactions and\/or information faster and more efficiently no longer provides them with a competitive advantage vis-a-vis their competitors for achieving business excellence. 
Information technology (IT) tools that are oriented towards knowledge processing can provide the edge that organizations need to survive and thrive in the current era of fierce competition. Enterprises are no longer satisfied with business information system(s); they require business intelligence system(s). The increasing competitive pressures and the desire to leverage information technology techniques have led many organizations to explore the benefits of new emerging technology, data warehousing and data mining. The paper discusses data warehouses and data mining tools and applications","tok_text":"data mine busi intellig for competit advantag \n organ have late realiz that just process transact and\/or inform faster and more effici no longer provid them with a competit advantag vis-a-vi their competitor for achiev busi excel . inform technolog ( it ) tool that are orient toward knowledg process can provid the edg that organ need to surviv and thrive in the current era of fierc competit . enterpris are no longer satisfi with busi inform system( ) ; they requir busi intellig system( ) . the increas competit pressur and the desir to leverag inform technolog techniqu have led mani organ to explor the benefit of new emerg technolog , data wareh and data mine . the paper discuss data warehous and data mine tool and applic","ordered_present_kp":[10,28,48,232,284,687,0],"keyphrases":["data mining","business intelligence","competitive advantage","organizations","information technology","knowledge processing","data warehouses","business information system"],"prmu":["P","P","P","P","P","P","P","M"]} {"id":"1654","title":"Numerical validation of solutions of complementarity problems: the nonlinear case","abstract":"This paper proposes a validation method for solutions of nonlinear complementarity problems. The validation procedure performs a computational test. If the result of the test is positive, then it is guaranteed that a given multi-dimensional interval either includes a solution or excludes all solutions of the nonlinear complementarity problem","tok_text":"numer valid of solut of complementar problem : the nonlinear case \n thi paper propos a valid method for solut of nonlinear complementar problem . the valid procedur perform a comput test . if the result of the test is posit , then it is guarante that a given multi-dimension interv either includ a solut or exclud all solut of the nonlinear complementar problem","ordered_present_kp":[0,175,113],"keyphrases":["numerical validation","nonlinear complementarity problem","computational test","optimization"],"prmu":["P","P","P","U"]} {"id":"17","title":"Fault diagnosis and fault tolerant control of linear stochastic systems with unknown inputs","abstract":"This paper presents an integrated robust fault detection and isolation (FDI) and fault tolerant control (FTC) scheme for a fault in actuators or sensors of linear stochastic systems subjected to unknown inputs (disturbances). As usual in this kind of works, it is assumed that single fault occurs at a time and the fault treated is of random bias type. The FDI module is constructed using banks of robust two-stage Kalman filters, which simultaneously estimate the state and the fault bias, and generate residual sets decoupled from unknown disturbances. All elements of residual sets are evaluated by using a hypothesis statistical test, and the fault is declared according to the prepared decision logic. 
The FTC module is activated based on the fault indicator, and additive compensation signal is computed using the fault bias estimate and combined to the nominal control law for compensating the fault's effect on the system. Simulation results for the simplified longitudinal flight control system with parameter variations, process and measurement noises demonstrate the effectiveness of the approach proposed","tok_text":"fault diagnosi and fault toler control of linear stochast system with unknown input \n thi paper present an integr robust fault detect and isol ( fdi ) and fault toler control ( ftc ) scheme for a fault in actuat or sensor of linear stochast system subject to unknown input ( disturb ) . as usual in thi kind of work , it is assum that singl fault occur at a time and the fault treat is of random bia type . the fdi modul is construct use bank of robust two-stag kalman filter , which simultan estim the state and the fault bia , and gener residu set decoupl from unknown disturb . all element of residu set are evalu by use a hypothesi statist test , and the fault is declar accord to the prepar decis logic . the ftc modul is activ base on the fault indic , and addit compens signal is comput use the fault bia estim and combin to the nomin control law for compens the fault 's effect on the system . simul result for the simplifi longitudin flight control system with paramet variat , process and measur nois demonstr the effect of the approach propos","ordered_present_kp":[121,19,49,453,932],"keyphrases":["fault tolerant control","stochastic systems","fault detection","two-stage Kalman filters","longitudinal flight control system","fault isolation","linear systems","state estimation","robust control","discrete-time system"],"prmu":["P","P","P","P","P","R","R","R","R","M"]} {"id":"1510","title":"Estimation of the gradient of the solution of an adjoint diffusion equation by the Monte Carlo method","abstract":"For the case of isotropic diffusion we consider the representation of the weighted concentration of trajectories and its space derivatives in the form of integrals (with some weights) of the solution to the corresponding boundary value problem and its directional derivative of a convective velocity. If the convective velocity at the domain boundary is degenerate and some other additional conditions are imposed this representation allows us to construct an efficient 'random walk by spheres and balls' algorithm. When these conditions are violated, transition to modelling the diffusion trajectories by the Euler scheme is realized, and the directional derivative of velocity is estimated by the dependent testing method, using the parallel modelling of two closely-spaced diffusion trajectories. We succeeded in justifying this method by statistically equivalent transition to modelling a single trajectory after the first step in the Euler scheme, using a suitable weight. This weight also admits direct differentiation with respect to the initial coordinate along a given direction. 
The resulting weight algorithm for calculating concentration derivatives is especially efficient if the initial point is in the subdomain in which the coefficients of the diffusion equation are constant","tok_text":"estim of the gradient of the solut of an adjoint diffus equat by the mont carlo method \n for the case of isotrop diffus we consid the represent of the weight concentr of trajectori and it space deriv in the form of integr ( with some weight ) of the solut to the correspond boundari valu problem and it direct deriv of a convect veloc . if the convect veloc at the domain boundari is degener and some other addit condit are impos thi represent allow us to construct an effici ' random walk by sphere and ball ' algorithm . when these condit are violat , transit to model the diffus trajectori by the euler scheme is realiz , and the direct deriv of veloc is estim by the depend test method , use the parallel model of two closely-spac diffus trajectori . we succeed in justifi thi method by statist equival transit to model a singl trajectori after the first step in the euler scheme , use a suitabl weight . thi weight also admit direct differenti with respect to the initi coordin along a given direct . the result weight algorithm for calcul concentr deriv is especi effici if the initi point is in the subdomain in which the coeffici of the diffus equat are constant","ordered_present_kp":[105,188,215,274,303,321,365,41,69,575,600,671,700,722,791,151,931,969,1045],"keyphrases":["adjoint diffusion equation","Monte Carlo method","isotropic diffusion","weight","space derivatives","integrals","boundary value problem","directional derivative","convective velocity","domain boundary","diffusion trajectories","Euler scheme","dependent testing method","parallel modelling","closely-spaced diffusion trajectories","statistically equivalent transition","direct differentiation","initial coordinate","concentration derivatives","weighted trajectory concentration","gradient estimation","random walk by spheres and balls algorithm"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R"]} {"id":"1555","title":"A note on multi-index polynomials of Dickson type and their applications in quantum optics","abstract":"We discuss the properties of a new family of multi-index Lucas type polynomials, which are often encountered in problems of intracavity photon statistics. We develop an approach based on the integral representation method and show that this class of polynomials can be derived from recently introduced multi-index Hermite like polynomials","tok_text":"a note on multi-index polynomi of dickson type and their applic in quantum optic \n we discuss the properti of a new famili of multi-index luca type polynomi , which are often encount in problem of intracav photon statist . 
we develop an approach base on the integr represent method and show that thi class of polynomi can be deriv from recent introduc multi-index hermit like polynomi","ordered_present_kp":[138,10,67,197,258],"keyphrases":["multi-index polynomials","quantum optics","Lucas type polynomials","intracavity photon statistics","integral representation","generating functions"],"prmu":["P","P","P","P","P","U"]} {"id":"1694","title":"Product development: using a 3D computer model to optimize the stability of the Rocket TM powered wheelchair","abstract":"A three-dimensional (3D) lumped-parameter model of a powered wheelchair was created to aid the development of the Rocket prototype wheelchair and to help explore the effect of innovative design features on its stability. The model was developed using simulation software, specifically Working Model 3D. The accuracy of the model was determined by comparing both its static stability angles and dynamic behavior as it passed down a 4.8-cm (1.9\") road curb at a heading of 45 degrees with the performance of the actual wheelchair. The model's predictions of the static stability angles in the forward, rearward, and lateral directions were within 9.3, 7.1, and 3.8% of the measured values, respectively. The average absolute error in the predicted position of the wheelchair as it moved down the curb was 2.2 cm\/m (0.9\" per 3'3\") traveled. The accuracy was limited by the inability to model soft bodies, the inherent difficulties in modeling a statically indeterminate system, and the computing time. Nevertheless, it was found to be useful in investigating the effect of eight design alterations on the lateral stability of the wheelchair. Stability was quantified by determining the static lateral stability angles and the maximum height of a road curb over which the wheelchair could successfully drive on a diagonal heading. The model predicted that the stability was more dependent on the configuration of the suspension system than on the dimensions and weight distribution of the wheelchair. Furthermore, for the situations and design alterations studied, predicted improvements in static stability were not correlated with improvements in dynamic stability","tok_text":"product develop : use a 3d comput model to optim the stabil of the rocket tm power wheelchair \n a three-dimension ( 3d ) lumped-paramet model of a power wheelchair wa creat to aid the develop of the rocket prototyp wheelchair and to help explor the effect of innov design featur on it stabil . the model wa develop use simul softwar , specif work model 3d. the accuraci of the model wa determin by compar both it static stabil angl and dynam behavior as it pass down a 4.8-cm ( 1.9 \" ) road curb at a head of 45 degre with the perform of the actual wheelchair . the model 's predict of the static stabil angl in the forward , rearward , and later direct were within 9.3 , 7.1 , and 3.8 % of the measur valu , respect . the averag absolut error in the predict posit of the wheelchair as it move down the curb wa 2.2 cm \/ m ( 0.9 \" per 3'3 \" ) travel . the accuraci wa limit by the inabl to model soft bodi , the inher difficulti in model a static indetermin system , and the comput time . nevertheless , it wa found to be use in investig the effect of eight design alter on the later stabil of the wheelchair . stabil wa quantifi by determin the static later stabil angl and the maximum height of a road curb over which the wheelchair could success drive on a diagon head . 
the model predict that the stabil wa more depend on the configur of the suspens system than on the dimens and weight distribut of the wheelchair . furthermor , for the situat and design alter studi , predict improv in static stabil were not correl with improv in dynam stabil","ordered_present_kp":[24,0,259,723,751,939,974,1259,1383,67],"keyphrases":["product development","3D computer model","Rocket TM powered wheelchair","innovative design features","average absolute error","predicted position","statically indeterminate system","computing time","diagonal heading","weight distribution","suspension system configuration","dynamic stability improvements","soft bodies modeling","design alterations effect","4.8 cm"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","R","R","R","M"]} {"id":"1568","title":"Natural language from artificial life","abstract":"This article aims to show that linguistics, in particular the study of the lexico-syntactic aspects of language, provides fertile ground for artificial life modeling. A survey of the models that have been developed over the last decade and a half is presented to demonstrate that ALife techniques have a lot to offer an explanatory theory of language. It is argued that this is because much of the structure of language is determined by the interaction of three complex adaptive systems: learning, culture, and biological evolution. Computational simulation, informed by theoretical linguistics, is an appropriate response to the challenge of explaining real linguistic data in terms of the processes that underpin human language","tok_text":"natur languag from artifici life \n thi articl aim to show that linguist , in particular the studi of the lexico-syntact aspect of languag , provid fertil ground for artifici life model . a survey of the model that have been develop over the last decad and a half is present to demonstr that alif techniqu have a lot to offer an explanatori theori of languag . it is argu that thi is becaus much of the structur of languag is determin by the interact of three complex adapt system : learn , cultur , and biolog evolut . comput simul , inform by theoret linguist , is an appropri respons to the challeng of explain real linguist data in term of the process that underpin human languag","ordered_present_kp":[0,63,105,291,467,482,490,503,519,19],"keyphrases":["natural language","artificial life","linguistics","lexico-syntactic aspects","ALife","adaptive systems","learning","culture","biological evolution","computational simulation"],"prmu":["P","P","P","P","P","P","P","P","P","P"]} {"id":"178","title":"A parallelized indexing method for large-scale case-based reasoning","abstract":"Case-based reasoning (CBR) is a problem solving methodology commonly seen in artificial intelligence. It can correctly take advantage of the situations and methods in former cases to find out suitable solutions for new problems. CBR must accurately retrieve similar prior cases for getting a good performance. In the past, many researchers proposed useful technologies to handle this problem. However, the performance of retrieving similar cases may be greatly influenced by the number of cases. In this paper, the performance issue of large-scale CBR is discussed and a parallelized indexing architecture is then proposed for efficiently retrieving similar cases in large-scale CBR. Several algorithms for implementing the proposed architecture are also described. 
Some experiments are made and the results show the efficiency of the proposed method","tok_text":"a parallel index method for large-scal case-bas reason \n case-bas reason ( cbr ) is a problem solv methodolog commonli seen in artifici intellig . it can correctli take advantag of the situat and method in former case to find out suitabl solut for new problem . cbr must accur retriev similar prior case for get a good perform . in the past , mani research propos use technolog to handl thi problem . howev , the perform of retriev similar case may be greatli influenc by the number of case . in thi paper , the perform issu of large-scal cbr is discuss and a parallel index architectur is then propos for effici retriev similar case in large-scal cbr . sever algorithm for implement the propos architectur are also describ . some experi are made and the result show the effici of the propos method","ordered_present_kp":[2,28,86,127,319,731],"keyphrases":["parallelized indexing method","large-scale case-based reasoning","problem solving methodology","artificial intelligence","performance","experiments","bitwise indexing","similar prior case retrieval"],"prmu":["P","P","P","P","P","P","M","R"]} {"id":"185","title":"Property testers for dense Constraint Satisfaction programs on finite domains","abstract":"Many NP-hard languages can be \"decided\" in subexponential time if the definition of \"decide\" is relaxed only slightly. Rubinfeld and Sudan introduced the notion of property testers, probabilistic algorithms that can decide, with high probability, if a function has a certain property or if it is far from any function having this property. Goldreich, Goldwasser, and Ron constructed property testers with constant query complexity for dense instances of a large class of graph problems. Since many graph problems can be viewed as special cases of the Constraint Satisfaction Problem on Boolean domains, it is natural to try to construct property testers for more general cases of the Constraint Satisfaction Problem. In this paper, we give explicit constructions of property testers using a constant number of queries for dense instances of Constraint Satisfaction Problems where the constraints have constant arity and the variables assume values in some domain of finite size","tok_text":"properti tester for dens constraint satisfact program on finit domain \n mani np-hard languag can be \" decid \" in subexponenti time if the definit of \" decid \" is relax onli slightli . rubinfeld and sudan introduc the notion of properti tester , probabilist algorithm that can decid , with high probabl , if a function ha a certain properti or if it is far from ani function have thi properti . goldreich , goldwass , and ron construct properti tester with constant queri complex for dens instanc of a larg class of graph problem . sinc mani graph problem can be view as special case of the constraint satisfact problem on boolean domain , it is natur to tri to construct properti tester for more gener case of the constraint satisfact problem . 
in thi paper , we give explicit construct of properti tester use a constant number of queri for dens instanc of constraint satisfact problem where the constraint have constant ariti and the variabl assum valu in some domain of finit size","ordered_present_kp":[77,0,245,456,25,483,113,515,590],"keyphrases":["property testers","constraint satisfaction","NP-hard languages","subexponential time","probabilistic algorithms","constant query complexity","dense instances","graph problems","Constraint Satisfaction Problem","randomized sampling"],"prmu":["P","P","P","P","P","P","P","P","P","U"]} {"id":"1907","title":"Multiple comparison methods for means","abstract":"Multiple comparison methods (MCMs) are used to investigate differences between pairs of population means or, more generally, between subsets of population means using sample data. Although several such methods are commonly available in statistical software packages, users may be poorly informed about the appropriate method(s) to use and\/or the correct way to interpret the results. This paper classifies the MCMs and presents the important methods for each class. Both simulated and real data are used to compare the methods, and emphasis is placed on a correct application and interpretation. We include suggestions for choosing the best method. Mathematica programs developed by the authors are used to compare MCMs. By taking the advantage of Mathematica's notebook structure, all interested student can use these programs to explore the subject more deeply","tok_text":"multipl comparison method for mean \n multipl comparison method ( mcm ) are use to investig differ between pair of popul mean or , more gener , between subset of popul mean use sampl data . although sever such method are commonli avail in statist softwar packag , user may be poorli inform about the appropri method( ) to use and\/or the correct way to interpret the result . thi paper classifi the mcm and present the import method for each class . both simul and real data are use to compar the method , and emphasi is place on a correct applic and interpret . we includ suggest for choos the best method . mathematica program develop by the author are use to compar mcm . by take the advantag of mathematica 's notebook structur , all interest student can use these program to explor the subject more deepli","ordered_present_kp":[114],"keyphrases":["population means","multiple comparison procedures","error rate","single-step procedures","step-down procedures","sales management","pack-age design"],"prmu":["P","M","U","U","U","U","U"]} {"id":"1595","title":"Convergence of finite element approximations and multilevel linearization for Ginzburg-Landau model of d-wave superconductors","abstract":"In this paper, we consider the finite element approximations of a recently proposed Ginzburg-Landau-type model for d-wave superconductors. In contrast to the conventional Ginzburg-Landau model the scalar complex valued order-parameter is replaced by a multicomponent complex order-parameter and the free energy is modified according to the d-wave paring symmetry. Convergence and optimal error estimates and some super-convergent estimates for the derivatives are derived. Furthermore, we propose a multilevel linearization procedure to solve the nonlinear systems. 
It is proved that the optimal error estimates and super-convergence for the derivatives are preserved by the multi-level linearization algorithm","tok_text":"converg of finit element approxim and multilevel linear for ginzburg-landau model of d-wave superconductor \n in thi paper , we consid the finit element approxim of a recent propos ginzburg-landau-typ model for d-wave superconductor . in contrast to the convent ginzburg-landau model the scalar complex valu order-paramet is replac by a multicompon complex order-paramet and the free energi is modifi accord to the d-wave pare symmetri . converg and optim error estim and some super-converg estim for the deriv are deriv . furthermor , we propos a multilevel linear procedur to solv the nonlinear system . it is prove that the optim error estim and super-converg for the deriv are preserv by the multi-level linear algorithm","ordered_present_kp":[60,85,586,455,378,38],"keyphrases":["multilevel linearization","Ginzburg-Landau model","d-wave","free energy","error estimation","nonlinear systems","superconductivity","finite element method","two-grid method"],"prmu":["P","P","P","P","P","P","U","M","U"]} {"id":"1669","title":"Supply chain optimisation in the paper industry","abstract":"We describe the formulation and development of a supply-chain optimisation model for Fletcher Challenge Paper Australasia (FCPA). This model, known as the paper industry value optimisation tool (PIVOT), is a large mixed integer program that finds an optimal allocation of supplier to mill, product to paper machine, and paper machine to customer, while at the same time modelling many of the supply chain details and nuances which are peculiar to FCPA. PIVOT has assisted FCPA in solving a number of strategic and tactical decision problems, and provided significant economic benefits for the company","tok_text":"suppli chain optimis in the paper industri \n we describ the formul and develop of a supply-chain optimis model for fletcher challeng paper australasia ( fcpa ) . thi model , known as the paper industri valu optimis tool ( pivot ) , is a larg mix integ program that find an optim alloc of supplier to mill , product to paper machin , and paper machin to custom , while at the same time model mani of the suppli chain detail and nuanc which are peculiar to fcpa . pivot ha assist fcpa in solv a number of strateg and tactic decis problem , and provid signific econom benefit for the compani","ordered_present_kp":[0,115,187,222,237,273,515,558],"keyphrases":["supply chain optimisation","Fletcher Challenge Paper Australasia","paper industry value optimisation tool","PIVOT","large mixed integer program","optimal allocation","tactical decision problems","economic benefits","strategic decision problems"],"prmu":["P","P","P","P","P","P","P","P","R"]} {"id":"1488","title":"Social presence in telemedicine","abstract":"We studied consultations between a doctor, emergency nurse practitioners (ENPs) and their patients in a minor accident and treatment service (MATS). In the conventional consultations, all three people were located at the main hospital. In the teleconsultations, the doctor was located in a hospital 6 km away from the MATS and used a videoconferencing link connected at 384 kbit\/s. There were 30 patients in the conventional group and 30 in the telemedical group. The presenting problems were similar in the two groups. The mean duration of teleconsultations was 951 s and the mean duration of face-to-face consultations was 247 s. 
In doctor-nurse communication there was a higher rate of turn taking in teleconsultations than in face-to-face consultations; there were also more interruptions, more words and more `backchannels' (e.g. `mhm', `uh-huh') per teleconsultation. In doctor-patient communication there was a higher rate of turn taking, more words, more interruptions and more backchannels per teleconsultation. In patient-nurse communication there was. relatively little difference between the two modes of consulting the doctor. Telemedicine appeared to empower the patient to ask more questions of the doctor. It also seemed that the doctor took greater care in a teleconsultation to achieve coordination of beliefs with the patient than in a face-to-face consultation","tok_text":"social presenc in telemedicin \n we studi consult between a doctor , emerg nurs practition ( enp ) and their patient in a minor accid and treatment servic ( mat ) . in the convent consult , all three peopl were locat at the main hospit . in the teleconsult , the doctor wa locat in a hospit 6 km away from the mat and use a videoconferenc link connect at 384 kbit \/ s. there were 30 patient in the convent group and 30 in the telemed group . the present problem were similar in the two group . the mean durat of teleconsult wa 951 s and the mean durat of face-to-fac consult wa 247 s. in doctor-nurs commun there wa a higher rate of turn take in teleconsult than in face-to-fac consult ; there were also more interrupt , more word and more ` backchannel ' ( e.g. ` mhm ' , ` uh-huh ' ) per teleconsult . in doctor-pati commun there wa a higher rate of turn take , more word , more interrupt and more backchannel per teleconsult . in patient-nurs commun there wa . rel littl differ between the two mode of consult the doctor . telemedicin appear to empow the patient to ask more question of the doctor . it also seem that the doctor took greater care in a teleconsult to achiev coordin of belief with the patient than in a face-to-fac consult","ordered_present_kp":[0,18,59,68,108,121,244,323,554,587,708,741,725,632,932,526],"keyphrases":["social presence","telemedicine","doctor","emergency nurse practitioners","patients","minor accident and treatment service","teleconsultations","videoconferencing link","951 s","face-to-face consultations","doctor-nurse communication","turn taking","interruptions","words","backchannels","patient-nurse communication","belief coordination","384 kbit\/s","247 s"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R"]} {"id":"1774","title":"A work journal [librarianship]","abstract":"Keeping a work journal can be useful in exploring one's thoughts and feelings about work challenges and work decisions. It can help bring about greater fulfillment in one's work life by facilitating self-renewal, change, the search for new meaning, and job satisfaction. One example of a work journal which I kept in 1998 is considered. It touches on several issues of potential interest to midlife career librarians including the challenge of technology, returning to work at midlife after raising a family, further education, professional writing, and job exchange","tok_text":"a work journal [ librarianship ] \n keep a work journal can be use in explor one 's thought and feel about work challeng and work decis . it can help bring about greater fulfil in one 's work life by facilit self-renew , chang , the search for new mean , and job satisfact . one exampl of a work journal which i kept in 1998 is consid . 
it touch on sever issu of potenti interest to midlif career librarian includ the challeng of technolog , return to work at midlif after rais a famili , further educ , profession write , and job exchang","ordered_present_kp":[124,106,258,207,2,220,382,429,488,503,526],"keyphrases":["work journal","work challenges","work decisions","self-renewal","change","job satisfaction","midlife career librarians","technology","further education","professional writing","job exchange"],"prmu":["P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1731","title":"Hit the road, Jack","abstract":"Going freelance offers the potential of higher earnings, variety and independence - but also removes the benefits of permanent employment and can mean long distance travel and periods out of work. The author looks at the benefits and drawbacks - and how to get started as an IT contractor","tok_text":"hit the road , jack \n go freelanc offer the potenti of higher earn , varieti and independ - but also remov the benefit of perman employ and can mean long distanc travel and period out of work . the author look at the benefit and drawback - and how to get start as an it contractor","ordered_present_kp":[267],"keyphrases":["IT contractor","freelance working"],"prmu":["P","R"]} {"id":"1789","title":"Dousing terrorist funding: mission possible? [banks]","abstract":"The government is tightening its grip on terrorist money flows. But as the banking industry continues to expand its Patriot Act compliance activities, it is with the realization that a great deal of work remains to be done before the American financial system can become truly airtight. Identification instruments, especially drivers licenses, represent a significant weak spot","tok_text":"dous terrorist fund : mission possibl ? [ bank ] \n the govern is tighten it grip on terrorist money flow . but as the bank industri continu to expand it patriot act complianc activ , it is with the realiz that a great deal of work remain to be done befor the american financi system can becom truli airtight . identif instrument , especi driver licens , repres a signific weak spot","ordered_present_kp":[42,153,5,310],"keyphrases":["terrorist funding","banking","Patriot Act","identification"],"prmu":["P","P","P","P"]} {"id":"1475","title":"Relation between glare and driving performance","abstract":"The present study investigated the effects of discomfort glare on driving behavior. Participants (old and young; US and Europeans) were exposed to a simulated low- beam light source mounted on the hood of an instrumented vehicle. Participants drove at night in actual traffic along a track consisting of urban, rural, and highway stretches. The results show that the relatively low glare source caused a significant drop in detecting simulated pedestrians along the roadside and made participants drive significantly slower on dark and winding roads. Older participants showed the largest drop in pedestrian detection performance and reduced their driving speed the most. The results indicate that the de Boer rating scale, the most commonly used rating scale for discomfort glare, is practically useless as a predictor of driving performance. Furthermore, the maximum US headlamp intensity (1380 cd per headlamp) appears to be an acceptable upper limit","tok_text":"relat between glare and drive perform \n the present studi investig the effect of discomfort glare on drive behavior . 
particip ( old and young ; us and european ) were expos to a simul low- beam light sourc mount on the hood of an instrument vehicl . particip drove at night in actual traffic along a track consist of urban , rural , and highway stretch . the result show that the rel low glare sourc caus a signific drop in detect simul pedestrian along the roadsid and made particip drive significantli slower on dark and wind road . older particip show the largest drop in pedestrian detect perform and reduc their drive speed the most . the result indic that the de boer rate scale , the most commonli use rate scale for discomfort glare , is practic useless as a predictor of drive perform . furthermor , the maximum us headlamp intens ( 1380 cd per headlamp ) appear to be an accept upper limit","ordered_present_kp":[14,24,81,338],"keyphrases":["glare","driving performance","discomfort glare","highway","simulated low-beam light source","road traffic","urban road","rural road","deBoer rating scale"],"prmu":["P","P","P","P","M","R","R","R","M"]} {"id":"1608","title":"A geometric process equivalent model for a multistate degenerative system","abstract":"In this paper, a monotone process model for a one-component degenerative system with k+1 states (k failure states and one working state) is studied. We show that this model is equivalent to a geometric process (GP) model for a two-state one component system such that both systems have the same long-run average cost per unit time and the same optimal policy. Furthermore, an explicit expression for the determination of an optimal policy is derived","tok_text":"a geometr process equival model for a multist degen system \n in thi paper , a monoton process model for a one-compon degen system with k+1 state ( k failur state and one work state ) is studi . we show that thi model is equival to a geometr process ( gp ) model for a two-stat one compon system such that both system have the same long-run averag cost per unit time and the same optim polici . furthermor , an explicit express for the determin of an optim polici is deriv","ordered_present_kp":[38,2,78,106,149,170,268,331,379],"keyphrases":["geometric process equivalent model","multistate degenerative system","monotone process model","one-component degenerative system","failure states","working state","two-state one component system","long-run average cost","optimal policy","replacement policy","renewal reward process"],"prmu":["P","P","P","P","P","P","P","P","P","M","M"]} {"id":"1923","title":"Predictive control of a high temperature-short time pasteurisation process","abstract":"Modifications on the dynamic matrix control (DMC) algorithm are presented to deal with transfer functions with varying parameters in order to control a high temperature-short time pasteurisation process. To control processes with first order with pure time delay models whose parameters present an exogenous variable dependence, a new method of free response calculation, using multiple model information, is developed. Two methods, to cope with those nonlinear models that allow a generalised Hammerstein model description, are proposed. 
The proposed methods have been tested, both in simulation and in real cases, in comparison with PID and DMC classic controllers, showing important improvements on reference tracking and disturbance rejection","tok_text":"predict control of a high temperature-short time pasteuris process \n modif on the dynam matrix control ( dmc ) algorithm are present to deal with transfer function with vari paramet in order to control a high temperature-short time pasteuris process . to control process with first order with pure time delay model whose paramet present an exogen variabl depend , a new method of free respons calcul , use multipl model inform , is develop . two method , to cope with those nonlinear model that allow a generalis hammerstein model descript , are propos . the propos method have been test , both in simul and in real case , in comparison with pid and dmc classic control , show import improv on refer track and disturb reject","ordered_present_kp":[21,0,146,298,340,380,406,474,503,694,710],"keyphrases":["predictive control","high temperature-short time pasteurisation process","transfer functions","time delay models","exogenous variable dependence","free response calculation","multiple model information","nonlinear models","generalised Hammerstein model description","reference tracking","disturbance rejection","dynamic matrix control algorithm","first order processes"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R","R"]} {"id":"1509","title":"Mathematical modelling of the work of the system of wells in a layer with the exponential law of permeability variation and the mobile liquid interface","abstract":"We construct and study a two-dimensional model of the work of the system of wells in a layer with the mobile boundary between liquids of various viscosity. We use a 'plunger' displacement model of liquids. The boundaries of the filtration region of these liquids are modelled by curves of the Lyapunov class. Unlike familiar work, we solve two-dimensonal problems in an inhomogeneous layer when the mobile boundary and the boundaries of the filtration region are modelled by curves of the Lyapunov class. We show the practical convergence of the numerical solution of the problems studied","tok_text":"mathemat model of the work of the system of well in a layer with the exponenti law of permeabl variat and the mobil liquid interfac \n we construct and studi a two-dimension model of the work of the system of well in a layer with the mobil boundari between liquid of variou viscos . we use a ' plunger ' displac model of liquid . the boundari of the filtrat region of these liquid are model by curv of the lyapunov class . unlik familiar work , we solv two-dimenson problem in an inhomogen layer when the mobil boundari and the boundari of the filtrat region are model by curv of the lyapunov class . 
we show the practic converg of the numer solut of the problem studi","ordered_present_kp":[22,0,69,86,110,233,273,479,620,635],"keyphrases":["mathematical modelling","work","exponential law","permeability variation","mobile liquid interface","mobile boundary","viscosity","inhomogeneous layer","convergence","numerical solution","2D model","well system","plunger displacement model","filtration region boundaries","Lyapunov class curves"],"prmu":["P","P","P","P","P","P","P","P","P","P","M","R","R","R","R"]} {"id":"1886","title":"Non-asymptotic confidence ellipsoids for the least-squares estimate","abstract":"We consider the finite sample properties of least-squares system identification, and derive non-asymptotic confidence ellipsoids for the estimate. The shape of the confidence ellipsoids is similar to the shape of the ellipsoids derived using asymptotic theory, but unlike asymptotic theory, they are valid for a finite number of data points. The probability that the estimate belongs to a certain ellipsoid has a natural dependence on the volume of the ellipsoid, the data generating mechanism, the model order and the number of data points available","tok_text":"non-asymptot confid ellipsoid for the least-squar estim \n we consid the finit sampl properti of least-squar system identif , and deriv non-asymptot confid ellipsoid for the estim . the shape of the confid ellipsoid is similar to the shape of the ellipsoid deriv use asymptot theori , but unlik asymptot theori , they are valid for a finit number of data point . the probabl that the estim belong to a certain ellipsoid ha a natur depend on the volum of the ellipsoid , the data gener mechan , the model order and the number of data point avail","ordered_present_kp":[38,72,96,366,473,497,349],"keyphrases":["least-squares estimate","finite sample properties","least-squares system identification","data points","probability","data generating mechanism","model order","nonasymptotic confidence ellipsoids"],"prmu":["P","P","P","P","P","P","P","M"]} {"id":"1750","title":"A dynamic method for weighted linear least squares problems","abstract":"A new method for solving the weighted linear least squares problems with full rank is proposed. Based on the theory of Liapunov's stability, the method associates a dynamic system with a weighted linear least squares problem, whose solution we are interested in and integrates the former numerically by an A-stable numerical method. The numerical tests suggest that the new method is more than comparative with current conventional techniques based on the normal equations","tok_text":"a dynam method for weight linear least squar problem \n a new method for solv the weight linear least squar problem with full rank is propos . base on the theori of liapunov 's stabil , the method associ a dynam system with a weight linear least squar problem , whose solut we are interest in and integr the former numer by an a-stabl numer method . 
the numer test suggest that the new method is more than compar with current convent techniqu base on the normal equat","ordered_present_kp":[2,19,326],"keyphrases":["dynamic method","weighted linear least squares problems","A-stable numerical method","Lyapunov stability"],"prmu":["P","P","P","M"]} {"id":"1715","title":"Information-processing and computing systems at thermal power stations in China","abstract":"The development and commissioning of information-processing and computing systems (IPCSs) at four power units, each of 500 MW capacity at the thermal power stations Tszisyan' and Imin' in China, are considered. The functional structure and the characteristics of the functions of the IPCSs are presented as is information on the technology of development and experience in adjustments. Ways of using the experience gained in creating a comprehensive functional firmware system are shown","tok_text":"information-process and comput system at thermal power station in china \n the develop and commiss of information-process and comput system ( ipcss ) at four power unit , each of 500 mw capac at the thermal power station tszisyan ' and imin ' in china , are consid . the function structur and the characterist of the function of the ipcss are present as is inform on the technolog of develop and experi in adjust . way of use the experi gain in creat a comprehens function firmwar system are shown","ordered_present_kp":[66,41,24,90,78,270,472,178],"keyphrases":["computing systems","thermal power stations","China","development","commissioning","500 MW","functional structure","firmware system","information-processing systems","functions characteristics"],"prmu":["P","P","P","P","P","P","P","P","R","R"]} {"id":"1728","title":"A characterization of generalized Pareto distributions by progressive censoring schemes and goodness-of-fit tests","abstract":"In this paper we generalize a characterization property of generalized Pareto distributions, which is known for ordinary order statistics, to arbitrary schemes of progressive type-II censored order statistics. Various goodness-of-fit tests for generalized Pareto distributions based on progressively censored data statistics are discussed","tok_text":"a character of gener pareto distribut by progress censor scheme and goodness-of-fit test \n in thi paper we gener a character properti of gener pareto distribut , which is known for ordinari order statist , to arbitrari scheme of progress type-ii censor order statist . variou goodness-of-fit test for gener pareto distribut base on progress censor data statist are discuss","ordered_present_kp":[15,41,68,229,181],"keyphrases":["generalized Pareto distributions","progressive censoring schemes","goodness-of-fit tests","ordinary order statistics","progressive type-II censored order statistics"],"prmu":["P","P","P","P","P"]} {"id":"1803","title":"Linear complexity of polyphase power residue sequences","abstract":"The well known family of binary Legendre or quadratic residue sequences can be generalised to the multiple-valued case by employing a polyphase representation. These p-phase sequences, with p prime, also have prime length L, and can be constructed from the index sequence of length L or, equivalently, from the cosets of pth power residues and non-residues modulo-L. The linear complexity of these polyphase sequences is derived and shown to fall into four classes depending on the value assigned to b\/sub 0\/, the initial digit of the sequence, and on whether p belongs to the set of pth power residues or not. 
The characteristic polynomials of the linear feedback shift registers that generate these sequences are also derived","tok_text":"linear complex of polyphas power residu sequenc \n the well known famili of binari legendr or quadrat residu sequenc can be generalis to the multiple-valu case by employ a polyphas represent . these p-phase sequenc , with p prime , also have prime length l , and can be construct from the index sequenc of length l or , equival , from the coset of pth power residu and non-residu modulo-l. the linear complex of these polyphas sequenc is deriv and shown to fall into four class depend on the valu assign to b \/ sub 0\/ , the initi digit of the sequenc , and on whether p belong to the set of pth power residu or not . the characterist polynomi of the linear feedback shift regist that gener these sequenc are also deriv","ordered_present_kp":[0,18,93,140,198,633,649],"keyphrases":["linear complexity","polyphase power residue sequences","quadratic residue sequences","multiple-valued case","p-phase sequences","polynomials","linear feedback shift registers","binary Legendre sequences","cryptographic applications","key stream ciphers","binary sequences"],"prmu":["P","P","P","P","P","P","P","R","U","U","R"]} {"id":"1491","title":"Evaluation of videoconferenced grand rounds","abstract":"We evaluated various aspects of grand rounds videoconferenced from a tertiary care hospital to a regional hospital in Nova Scotia. During a five-month study period, 29 rounds were broadcast (19 in medicine and 10 in cardiology). The total recorded attendance at the remote site was 103, comprising 70 specialists, nine family physicians and 24 other health-care professionals. We received 55 evaluations, a response rate of 53%. On a five-point Likert scale (on which higher scores indicated better quality), mean ratings by remote-site participants of the technical quality of the videoconference were 3.0-3.5, with the lowest ratings being for ability to hear the discussion (3.0) and to see visual aids (3.1). Mean ratings for content, presentation, discussion and educational value were 3.8 or higher. Of the 49 physicians who presented the rounds, we received evaluations from 41, a response rate of 84%. The presenters rated all aspects of the videoconference and interaction with remote sites at 3.8 or lower. The lowest ratings were for ability to see the remote sites (3.0) and the usefulness of the discussion (3.4). We received 278 evaluations from participants at the presenting site, an estimated response rate of about 55%. The results indicated no adverse opinions of the effect of videoconferencing (mean scores 3.1-3.3). The estimated costs of videoconferencing one grand round to one site and four sites were C$723 and C$1515, respectively. The study confirmed that videoconferenced rounds can provide satisfactory continuing medical education to community specialists, which is an especially important consideration as maintenance of certification becomes mandatory","tok_text":"evalu of videoconferenc grand round \n we evalu variou aspect of grand round videoconferenc from a tertiari care hospit to a region hospit in nova scotia . dure a five-month studi period , 29 round were broadcast ( 19 in medicin and 10 in cardiolog ) . the total record attend at the remot site wa 103 , compris 70 specialist , nine famili physician and 24 other health-car profession . we receiv 55 evalu , a respons rate of 53 % . 
on a five-point likert scale ( on which higher score indic better qualiti ) , mean rate by remote-sit particip of the technic qualiti of the videoconfer were 3.0 - 3.5 , with the lowest rate be for abil to hear the discuss ( 3.0 ) and to see visual aid ( 3.1 ) . mean rate for content , present , discuss and educ valu were 3.8 or higher . of the 49 physician who present the round , we receiv evalu from 41 , a respons rate of 84 % . the present rate all aspect of the videoconfer and interact with remot site at 3.8 or lower . the lowest rate were for abil to see the remot site ( 3.0 ) and the use of the discuss ( 3.4 ) . we receiv 278 evalu from particip at the present site , an estim respons rate of about 55 % . the result indic no advers opinion of the effect of videoconferenc ( mean score 3.1 - 3.3 ) . the estim cost of videoconferenc one grand round to one site and four site were c$ 723 and c$ 1515 , respect . the studi confirm that videoconferenc round can provid satisfactori continu medic educ to commun specialist , which is an especi import consider as mainten of certif becom mandatori","ordered_present_kp":[9,98,124,238,362,437,283,1425,1516],"keyphrases":["videoconferenced grand rounds","tertiary care hospital","regional hospital","cardiology","remote sites","health-care professionals","five-point Likert scale","continuing medical education","certification","telemedicine"],"prmu":["P","P","P","P","P","P","P","P","P","U"]} {"id":"1846","title":"Semantic B2B integration: issues in ontology-based approaches","abstract":"Solving queries to support e-commerce transactions can involve retrieving and integrating information from multiple information resources. Often, users don't care which resources are used to answer their query. In such situations, the ideal solution would be to hide from the user the details of the resources involved in solving a particular query. An example would be providing seamless access to a set of heterogeneous electronic product catalogues. There are many problems that must be addressed before such a solution can be provided. In this paper, we discuss a number of these problems, indicate how we have addressed these and go on to describe the proof-of-concept demonstration system we have developed","tok_text":"semant b2b integr : issu in ontology-bas approach \n solv queri to support e-commerc transact can involv retriev and integr inform from multipl inform resourc . often , user do n't care which resourc are use to answer their queri . in such situat , the ideal solut would be to hide from the user the detail of the resourc involv in solv a particular queri . an exampl would be provid seamless access to a set of heterogen electron product catalogu . there are mani problem that must be address befor such a solut can be provid . in thi paper , we discuss a number of these problem , indic how we have address these and go on to describ the proof-of-concept demonstr system we have develop","ordered_present_kp":[74,57,135,411,28,0],"keyphrases":["semantic B2B integration","ontology-based approaches","queries","e-commerce transactions","multiple information resources","heterogeneous electronic product catalogues","information integration","information retrieval"],"prmu":["P","P","P","P","P","P","R","R"]} {"id":"1790","title":"Copyright of electronic publishing","abstract":"With the spreading of the Internet and the wide use of computers, electronic publishing is becoming an indispensable measure to gain knowledge and skills. 
Meanwhile, copyright is facing much more infringement than ever in this electronic environment. So, it is a key factor to effectively protect copyright of electronic publishing to foster the new publication fields. The paper analyzes the importance of copyright, the main causes for copyright infringement in electronic publishing, and presents viewpoints on the definition and application of fair use of a copyrighted work and thinking of some means to combat breach of copyright","tok_text":"copyright of electron publish \n with the spread of the internet and the wide use of comput , electron publish is becom an indispens measur to gain knowledg and skill . meanwhil , copyright is face much more infring than ever in thi electron environ . so , it is a key factor to effect protect copyright of electron publish to foster the new public field . the paper analyz the import of copyright , the main caus for copyright infring in electron publish , and present viewpoint on the definit and applic of fair use of a copyright work and think of some mean to combat breach of copyright","ordered_present_kp":[55,417,232,508,522],"keyphrases":["Internet","electronic environment","copyright infringement","fair use","copyrighted work","electronic publishing copyright","copyright protection"],"prmu":["P","P","P","P","P","R","R"]} {"id":"1534","title":"Generic simulation approach for multi-axis machining. Part 1: modeling methodology","abstract":"This paper presents a new methodology for analytically simulating multi-axis machining of complex sculptured surfaces. A generalized approach is developed for representing an arbitrary cutting edge design, and the local surface topology of a complex sculptured surface. A NURBS curve is used to represent the cutting edge profile. This approach offers the advantages of representing any arbitrary cutting edge design in a generic way, as well as providing standardized techniques for manipulating the location and orientation of the cutting edge. The local surface topology of the part is defined as those surfaces generated by previous tool paths in the vicinity of the current tool position. The local surface topology of the part is represented without using a computationally expensive CAD system. A systematic prediction technique is then developed to determine the instantaneous tool\/part interaction during machining. The methodology employed here determines the cutting edge in-cut segments by determining the intersection between the NURBS curve representation of the cutting edge and the defined local surface topology. These in-cut segments are then utilized for predicting instantaneous chip load, static and dynamic cutting forces, and tool deflection. Part 1 of this paper details the modeling methodology and demonstrates the capabilities of the simulation for machining a complex surface","tok_text":"gener simul approach for multi-axi machin . part 1 : model methodolog \n thi paper present a new methodolog for analyt simul multi-axi machin of complex sculptur surfac . a gener approach is develop for repres an arbitrari cut edg design , and the local surfac topolog of a complex sculptur surfac . a nurb curv is use to repres the cut edg profil . thi approach offer the advantag of repres ani arbitrari cut edg design in a gener way , as well as provid standard techniqu for manipul the locat and orient of the cut edg . the local surfac topolog of the part is defin as those surfac gener by previou tool path in the vicin of the current tool posit . 
the local surfac topolog of the part is repres without use a comput expens cad system . a systemat predict techniqu is then develop to determin the instantan tool \/ part interact dure machin . the methodolog employ here determin the cut edg in-cut segment by determin the intersect between the nurb curv represent of the cut edg and the defin local surfac topolog . these in-cut segment are then util for predict instantan chip load , static and dynam cut forc , and tool deflect . part 1 of thi paper detail the model methodolog and demonstr the capabl of the simul for machin a complex surfac","ordered_present_kp":[144,743,332,253,301],"keyphrases":["complex sculptured surfaces","surface topology","NURBS curve","cutting edge profile","systematic prediction","multiple axis machining","generic modeling","tool path specification","complex surface machining"],"prmu":["P","P","P","P","P","M","R","M","R"]} {"id":"1571","title":"The simulated emergence of distributed environmental control in evolving microcosms","abstract":"This work continues investigation into Gaia theory (Lovelock, The ages of Gaia, Oxford University Press, 1995) from an artificial life perspective (Downing, Proceedings of the 7th International Conference on Artificial Life, p. 90-99, MIT Press, 2000), with the aim of assessing the general compatibility of emergent distributed environmental control with conventional natural selection. Our earlier system, GUILD (Downing and Zvirinsky, Artificial Life, 5, p.291-318, 1999), displayed emergent regulation of the chemical environment by a population of metabolizing agents, but the chemical model underlying those results was trivial, essentially admitting all possible reactions at a single energy cost. The new model, METAMIC, utilizes abstract chemistries that are both (a) constrained to a small set of legal reactions, and (b) grounded in basic fundamental relationships between energy, entropy, and biomass synthesis\/breakdown. To explore the general phenomena of emergent homeostasis, we generate 100 different chemistries and use each as the basis for several METAMIC runs, as part of a Gaia hunt. This search discovers 20 chemistries that support microbial populations capable of regulating a physical environmental factor within their growth-optimal range, despite the extra metabolic cost. Case studies from the Gaia hunt illustrate a few simple mechanisms by which real biota might exploit the underlying chemistry to achieve some control over their physical environment. Although these results shed little light on the question of Gaia on Earth, they support the possibility of emergent environmental control at the microcosmic level","tok_text":"the simul emerg of distribut environment control in evolv microcosm \n thi work continu investig into gaia theori ( lovelock , the age of gaia , oxford univers press , 1995 ) from an artifici life perspect ( down , proceed of the 7th intern confer on artifici life , p. 90 - 99 , mit press , 2000 ) , with the aim of assess the gener compat of emerg distribut environment control with convent natur select . our earlier system , guild ( down and zvirinski , artifici life , 5 , p.291 - 318 , 1999 ) , display emerg regul of the chemic environ by a popul of metabol agent , but the chemic model underli those result wa trivial , essenti admit all possibl reaction at a singl energi cost . 
the new model , metam , util abstract chemistri that are both ( a ) constrain to a small set of legal reaction , and ( b ) ground in basic fundament relationship between energi , entropi , and biomass synthesi \/ breakdown . to explor the gener phenomena of emerg homeostasi , we gener 100 differ chemistri and use each as the basi for sever metam run , as part of a gaia hunt . thi search discov 20 chemistri that support microbi popul capabl of regul a physic environment factor within their growth-optim rang , despit the extra metabol cost . case studi from the gaia hunt illustr a few simpl mechan by which real biota might exploit the underli chemistri to achiev some control over their physic environ . although these result shed littl light on the question of gaia on earth , they support the possibl of emerg environment control at the microcosm level","ordered_present_kp":[4,52,392,556,580,944,1053,101,182,343],"keyphrases":["simulated emergence","evolving microcosms","Gaia theory","artificial life","emergent distributed environmental control","natural selection","metabolizing agents","chemical model","emergent homeostasis","Gaia hunt","GUILD system","METAMIC model","genetic algorithms","artificial chemistry","artificial metabolisms"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","R","U","R","R"]} {"id":"161","title":"Electronic books: reports of their death have been exaggerated","abstract":"E-books will survive, but not in the consumer market - at least not until reading devices become much cheaper and much better in quality (which is not likely to happen soon). Library Journal's review of major events of the year 2001 noted that two requirements for the success of E-books were development of a sustainable business model and development of better reading devices. The E-book revolution has therefore become more of an evolution. We can look forward to further developments and advances in the future","tok_text":"electron book : report of their death have been exagger \n e-book will surviv , but not in the consum market - at least not until read devic becom much cheaper and much better in qualiti ( which is not like to happen soon ) . librari journal 's review of major event of the year 2001 note that two requir for the success of e-book were develop of a sustain busi model and develop of better read devic . the e-book revolut ha therefor becom more of an evolut . we can look forward to further develop and advanc in the futur","ordered_present_kp":[0,58,225],"keyphrases":["electronic books","E-books","Library Journal"],"prmu":["P","P","P"]} {"id":"1635","title":"Simple...But complex","abstract":"FlexPro 5.0, from Weisang and Co., is one of those products which aim to serve an often ignored range of data users: those who, in FlexPro's words, are interested in documenting, analysing and archiving data in the simplest way possible. The online help system is clearly designed to promote the product in this market segment, with a very clear introduction from first principles and a hands-on tutorial, and the live project to which it was applied was selected with this in mind","tok_text":"simpl ... but complex \n flexpro 5.0 , from weisang and co. , is one of those product which aim to serv an often ignor rang of data user : those who , in flexpro 's word , are interest in document , analys and archiv data in the simplest way possibl . 
the onlin help system is clearli design to promot the product in thi market segment , with a veri clear introduct from first principl and a hands-on tutori , and the live project to which it wa appli wa select with thi in mind","ordered_present_kp":[24,255,391],"keyphrases":["FlexPro 5.0","online help system","hands-on tutorial","data archiving","data analysis","data documentation"],"prmu":["P","P","P","R","M","R"]} {"id":"1670","title":"An integrated optimization model for train crew management","abstract":"Train crew management involves the development of a duty timetable for each of the drivers (crew) to cover a given train timetable in a rail transport organization. This duty timetable is spread over a certain period, known as the roster planning horizon. Train crew management may arise either from the planning stage, when the total number of crew and crew distributions are to be determined, or from the operating stage when the number of crew at each depot is known as input data. In this paper, we are interested in train crew management in the planning stage. In the literature, train crew management is decomposed into two stages: crew scheduling and crew rostering which are solved sequentially. We propose an integrated optimization model to solve both crew scheduling and crew rostering. The model enables us to generate either cyclic rosters or non-cyclic rosters. Numerical experiments are carried out over data sets arising from a practical application","tok_text":"an integr optim model for train crew manag \n train crew manag involv the develop of a duti timet for each of the driver ( crew ) to cover a given train timet in a rail transport organ . thi duti timet is spread over a certain period , known as the roster plan horizon . train crew manag may aris either from the plan stage , when the total number of crew and crew distribut are to be determin , or from the oper stage when the number of crew at each depot is known as input data . in thi paper , we are interest in train crew manag in the plan stage . in the literatur , train crew manag is decompos into two stage : crew schedul and crew roster which are solv sequenti . we propos an integr optim model to solv both crew schedul and crew roster . the model enabl us to gener either cyclic roster or non-cycl roster . numer experi are carri out over data set aris from a practic applic","ordered_present_kp":[3,26,86,163,248,617,634,783],"keyphrases":["integrated optimization model","train crew management","duty timetable","rail transport organization","roster planning horizon","crew scheduling","crew rostering","cyclic rosters","noncyclic rosters","integer programming"],"prmu":["P","P","P","P","P","P","P","P","M","U"]} {"id":"1792","title":"Database technology in digital libraries","abstract":"Database technology advancements have provided many opportunities for libraries. These advancements can bring the world closer together through information accessibility. Digital library projects have been established worldwide to, ultimately, fulfil the needs of end users through more efficiency and convenience. Resource sharing will continue to be the trend for libraries. Changes often create issues which need to be addressed. Issues relating to database technology and digital libraries are reviewed. Some of the major challenges in digital libraries and managerial issues are identified as well","tok_text":"databas technolog in digit librari \n databas technolog advanc have provid mani opportun for librari . 
these advanc can bring the world closer togeth through inform access . digit librari project have been establish worldwid to , ultim , fulfil the need of end user through more effici and conveni . resourc share will continu to be the trend for librari . chang often creat issu which need to be address . issu relat to databas technolog and digit librari are review . some of the major challeng in digit librari and manageri issu are identifi as well","ordered_present_kp":[0,21,157,173,256,299,517],"keyphrases":["database technology","digital libraries","information accessibility","digital library projects","end users","resource sharing","managerial issues","data quality","interoperability","metadata","user interface","query processing"],"prmu":["P","P","P","P","P","P","P","U","U","U","M","U"]} {"id":"1801","title":"Least load dispatching algorithm for parallel Web server nodes","abstract":"A least load dispatching algorithm for distributing requests to parallel Web server nodes is described. In this algorithm, the load offered to a node by a request is estimated based on the expected transfer time of the corresponding reply through the Internet. This loading information is then used by the algorithm to identify the least load node of the Web site. By using this algorithm, each request will always be sent for service at the earliest possible time. Performance comparison using NASA and ClarkNet access logs between the proposed algorithm and commonly used dispatching algorithms is performed. The results show that the proposed algorithm gives 10% higher throughput than that of the commonly used random and round-robin dispatching algorithms","tok_text":"least load dispatch algorithm for parallel web server node \n a least load dispatch algorithm for distribut request to parallel web server node is describ . in thi algorithm , the load offer to a node by a request is estim base on the expect transfer time of the correspond repli through the internet . thi load inform is then use by the algorithm to identifi the least load node of the web site . by use thi algorithm , each request will alway be sent for servic at the earliest possibl time . perform comparison use nasa and clarknet access log between the propos algorithm and commonli use dispatch algorithm is perform . the result show that the propos algorithm give 10 % higher throughput than that of the commonli use random and round-robin dispatch algorithm","ordered_present_kp":[0,34,291,241,526,683,735],"keyphrases":["least load dispatching algorithm","parallel Web server nodes","transfer time","Internet","ClarkNet access logs","throughput","round-robin dispatching algorithms","NASA access logs","random dispatching algorithms","World Wide Web server"],"prmu":["P","P","P","P","P","P","P","R","R","M"]} {"id":"1493","title":"Research into telehealth applications in speech-language pathology","abstract":"A literature review was conducted to investigate the extent to which telehealth has been researched within the domain of speech-language pathology and the outcomes of this research. A total of 13 studies were identified. Three early studies demonstrated that telehealth was feasible, although there was no discussion of the cost-effectiveness of this process in terms of patient outcomes. The majority of the subsequent studies indicated positive or encouraging outcomes resulting from telehealth. 
However, there were a number of shortcomings in the research, including a lack of cost-benefit information, failure to evaluate the technology itself, an absence of studies of the educational and informational aspects of telehealth in relation to speech-language pathology, and the use of telehealth in a limited range of communication disorders. Future research into the application of telehealth to speech-language pathology services must adopt a scientific approach, and have a well defined development and evaluation framework that addresses the effectiveness of the technique, patient outcomes and satisfaction, and the cost-benefit relationship","tok_text":"research into telehealth applic in speech-languag patholog \n a literatur review wa conduct to investig the extent to which telehealth ha been research within the domain of speech-languag patholog and the outcom of thi research . a total of 13 studi were identifi . three earli studi demonstr that telehealth wa feasibl , although there wa no discuss of the cost-effect of thi process in term of patient outcom . the major of the subsequ studi indic posit or encourag outcom result from telehealth . howev , there were a number of shortcom in the research , includ a lack of cost-benefit inform , failur to evalu the technolog itself , an absenc of studi of the educ and inform aspect of telehealth in relat to speech-languag patholog , and the use of telehealth in a limit rang of commun disord . futur research into the applic of telehealth to speech-languag patholog servic must adopt a scientif approach , and have a well defin develop and evalu framework that address the effect of the techniqu , patient outcom and satisfact , and the cost-benefit relationship","ordered_present_kp":[14,35,63,357,395,781],"keyphrases":["telehealth applications","speech-language pathology","literature review","cost-effectiveness","patient outcomes","communication disorders","telemedicine","cost-benefit analysis","patient satisfaction"],"prmu":["P","P","P","P","P","P","U","M","R"]} {"id":"1844","title":"A multi-agent system infrastructure for software component marketplace: an ontological perspective","abstract":"In this paper, we introduce a multi-agent system architecture and an implemented prototype for a software component marketplace. We emphasize the ontological perspective by discussing ontology modeling for the component marketplace, UML extensions for ontology modeling, and the idea of ontology transfer which makes the multi-agent system adapt itself to dynamically changing ontologies","tok_text":"a multi-ag system infrastructur for softwar compon marketplac : an ontolog perspect \n in thi paper , we introduc a multi-ag system architectur and an implement prototyp for a softwar compon marketplac . we emphas the ontolog perspect by discuss ontolog model for the compon marketplac , uml extens for ontolog model , and the idea of ontolog transfer which make the multi-ag system adapt itself to dynam chang ontolog","ordered_present_kp":[115,36,245,287,334,398,382],"keyphrases":["software component marketplace","multi-agent system architecture","ontology modeling","UML extensions","ontology transfer","adaptation","dynamically changing ontologies"],"prmu":["P","P","P","P","P","P","P"]} {"id":"1637","title":"What's best practice for open access?","abstract":"The business of publishing journals is in transition. Nobody knows exactly how it will work in the future, but everybody knows that the electronic publishing revolution will ensure it won't work as it does now. 
This knowledge has provoked a growing sense of nervous anticipation among those concerned, some edgy and threatened by potential changes to their business, others excited by the prospect of change and opportunity. The paper discusses the open publishing model for dissemination of research","tok_text":"what 's best practic for open access ? \n the busi of publish journal is in transit . nobodi know exactli how it will work in the futur , but everybodi know that the electron publish revolut will ensur it wo n't work as it doe now . thi knowledg ha provok a grow sens of nervou anticip among those concern , some edgi and threaten by potenti chang to their busi , other excit by the prospect of chang and opportun . the paper discuss the open publish model for dissemin of research","ordered_present_kp":[25,165,45,437],"keyphrases":["open access","business","electronic publishing","open publishing model","journal publishing","research dissemination"],"prmu":["P","P","P","P","R","R"]} {"id":"1672","title":"Two issues in setting call centre staffing levels","abstract":"Motivated by a problem facing the Police Communication Centre in Auckland, New Zealand, we consider the setting of staffing levels in a call centre with priority customers. The choice of staffing level over any particular time period (e.g., Monday from 8 am-9 am) relies on accurate arrival rate information. The usual method for identifying the arrival rate based on historical data can, in some cases, lead to considerable errors in performance estimates for a given staffing level. We explain why, identify three potential causes of the difficulty, and describe a method for detecting and addressing such a problem","tok_text":"two issu in set call centr staf level \n motiv by a problem face the polic commun centr in auckland , new zealand , we consid the set of staf level in a call centr with prioriti custom . the choic of staf level over ani particular time period ( e.g. , monday from 8 am-9 am ) reli on accur arriv rate inform . the usual method for identifi the arriv rate base on histor data can , in some case , lead to consider error in perform estim for a given staf level . we explain whi , identifi three potenti caus of the difficulti , and describ a method for detect and address such a problem","ordered_present_kp":[16,68,90,101,168,289,421],"keyphrases":["call centre staffing levels","police communication centre","Auckland","New Zealand","priority customers","arrival rate information","performance estimates","forecast error","nonstationarity","conditional Poisson process"],"prmu":["P","P","P","P","P","P","P","M","U","U"]} {"id":"1536","title":"Connection management for QoS service on the Web","abstract":"The current Web service model treats all requests equivalently, both while being processed by servers and while being transmitted over the network. For some uses, such as multiple priority schemes, different levels of service are desirable. We propose application-level TCP connection management mechanisms for Web servers to provide two different levels of Web service, high and low service, by setting different time-outs for inactive TCP connections. We evaluated the performance of the mechanism under heavy and light loading conditions on the Web server. Our experiments show that, though heavy traffic saturates the network, high level class performance is improved by as much as 25-28%. 
Therefore, this mechanism can effectively provide QoS guaranteed services even in the absence of operating system and network supports","tok_text":"connect manag for qo servic on the web \n the current web servic model treat all request equival , both while be process by server and while be transmit over the network . for some use , such as multipl prioriti scheme , differ level of servic are desir . we propos application-level tcp connect manag mechan for web server to provid two differ level of web servic , high and low servic , by set differ time-out for inact tcp connect . we evalu the perform of the mechan under heavi and light load condit on the web server . our experi show that , though heavi traffic satur the network , high level class perform is improv by as much as 25 - 28 % . therefor , thi mechan can effect provid qo guarante servic even in the absenc of oper system and network support","ordered_present_kp":[0,53,283,402],"keyphrases":["connection management","Web service model","TCP connections","time-outs","Internet","quality of service","telecommunication traffic","client server system","Web transaction"],"prmu":["P","P","P","P","U","M","M","M","M"]} {"id":"163","title":"Boolean operators and the naive end-user: moving to AND","abstract":"Since so few end-users make use of Boolean searching, it is obvious that any effective solution needs to take this reality into account. The most important aspect of a technical solution should be that it does not require any effort on the part of users. What is clearly needed is for search engine designers and programmers to take account of the information-seeking behavior of Internet users. Users must be able to enter a series of words at random and have those words automatically treated as a carefully constructed Boolean AND search statement","tok_text":"boolean oper and the naiv end-us : move to and \n sinc so few end-us make use of boolean search , it is obviou that ani effect solut need to take thi realiti into account . the most import aspect of a technic solut should be that it doe not requir ani effort on the part of user . what is clearli need is for search engin design and programm to take account of the information-seek behavior of internet user . user must be abl to enter a seri of word at random and have those word automat treat as a care construct boolean and search statement","ordered_present_kp":[0,80,308,364,393],"keyphrases":["Boolean operators","Boolean searching","search engine design","information-seeking behavior","Internet","AND operator"],"prmu":["P","P","P","P","P","R"]} {"id":"1921","title":"An ACL for a dynamic system of agents","abstract":"In this article we present the design of an ACL for a dynamic system of agents. The ACL includes a set of conversation performatives extended with operations to register, create, and terminate agents. The main design goal at the agent-level is to provide only knowledge-level primitives that are well integrated with the dynamic nature of the system. This goal has been achieved by defining an anonymous interaction protocol which enables agents to request and supply knowledge without considering symbol-level issues concerning management of agent names, routing, and agent reachability. This anonymous interaction protocol exploits a distributed facilitator schema which is hidden at the agent-level and provides mechanisms for registering capabilities of agents and delivering requests according to the competence of agents. 
We present a formal specification of the ACL and of the underlying architecture, exploiting an algebra of actors, and illustrate it with the help of a graphical notation. This approach provides the basis for discussing dynamic primitives in ACL and for studying properties of dynamic multi agent systems, for example concerning the behavior of agents and the correctness of their conversation policies","tok_text":"an acl for a dynam system of agent \n in thi articl we present the design of an acl for a dynam system of agent . the acl includ a set of convers perform extend with oper to regist , creat , and termin agent . the main design goal at the agent-level is to provid onli knowledge-level primit that are well integr with the dynam natur of the system . thi goal ha been achiev by defin an anonym interact protocol which enabl agent to request and suppli knowledg without consid symbol-level issu concern manag of agent name , rout , and agent reachabl . thi anonym interact protocol exploit a distribut facilit schema which is hidden at the agent-level and provid mechan for regist capabl of agent and deliv request accord to the compet of agent . we present a formal specif of the acl and of the underli architectur , exploit an algebra of actor , and illustr it with the help of a graphic notat . thi approach provid the basi for discuss dynam primit in acl and for studi properti of dynam multi agent system , for exampl concern the behavior of agent and the correct of their convers polici","ordered_present_kp":[3,13,19,29,13,588,836,384],"keyphrases":["ACL","dynamic system of agents","dynamic system","system of agents","agents","anonymous interaction protocol","distributed facilitator","actors","Agent Communication Languages"],"prmu":["P","P","P","P","P","P","P","P","M"]} {"id":"1879","title":"On the distribution of Lachlan nonsplitting bases","abstract":"We say that a computably enumerable (c.e.) degree b is a Lachlan nonsplitting base (LNB), if there is a computably enumerable degree a such that a>b, and for any c.e. degrees w, v b , and for ani c.e . degre w , v < or = a , if a < or = wvvv b then either a < or = wv b or a < or = vv b. in thi paper we investig the relationship between bound and nonbound of lachlan nonsplit base and the high \/ low hierarchi . we prove that there is a non-low \/ sub 2\/ c.e . degre which bound no lachlan nonsplit base","ordered_present_kp":[141],"keyphrases":["computably enumerable degree","Lachlan nonsplitting bases distribution","Turing degrees"],"prmu":["P","R","M"]} {"id":"1752","title":"Non-nested multi-level solvers for finite element discretisations of mixed problems","abstract":"We consider a general framework for analysing the convergence of multi-grid solvers applied to finite element discretisations of mixed problems, both of conforming and nonconforming type. As a basic new feature. our approach allows to use different finite element discretisations on each level of the multi-grid hierarchy. Thus, in our multi-level approach, accurate higher order finite element discretisations can be combined with fast multi-level solvers based on lower order (nonconforming) finite element discretisations. This leads to the design of efficient multi-level solvers for higher order finite element discretisations","tok_text":"non-nest multi-level solver for finit element discretis of mix problem \n we consid a gener framework for analys the converg of multi-grid solver appli to finit element discretis of mix problem , both of conform and nonconform type . 
as a basic new featur . our approach allow to use differ finit element discretis on each level of the multi-grid hierarchi . thu , in our multi-level approach , accur higher order finit element discretis can be combin with fast multi-level solver base on lower order ( nonconform ) finit element discretis . thi lead to the design of effici multi-level solver for higher order finit element discretis","ordered_present_kp":[0,32,59,127,400,9],"keyphrases":["non-nested multi-level solvers","multi-level solvers","finite element discretisations","mixed problems","multi-grid solvers","higher order finite element discretisations"],"prmu":["P","P","P","P","P","P"]} {"id":"1717","title":"Responding to market trends with predictive segmentation [health care]","abstract":"Technology and technological advances have always been a part of healthcare, but often it's advances in treatment machinery and materials that get the attention. However, technology gains also occur behind the scenes in operations. One of the less glamorous but powerful technological advances available today is predictive segmentation, a phrase that means \"a new way to assess and view individuals in the market based on their health status and health needs.\" Sophisticated databases, data mining, neural networks and statistical capabilities have enabled the development of predictive segmentation techniques. These predictive models for healthcare can identify who is likely to need certain services and who is likely to become ill. They are a significant departure from various geographical and attitudinal segmentation methods that healthcare strategists have used in the past to gain a better understanding of their customers","tok_text":"respond to market trend with predict segment [ health care ] \n technolog and technolog advanc have alway been a part of healthcar , but often it 's advanc in treatment machineri and materi that get the attent . howev , technolog gain also occur behind the scene in oper . one of the less glamor but power technolog advanc avail today is predict segment , a phrase that mean \" a new way to assess and view individu in the market base on their health statu and health need . \" sophist databas , data mine , neural network and statist capabl have enabl the develop of predict segment techniqu . these predict model for healthcar can identifi who is like to need certain servic and who is like to becom ill . they are a signific departur from variou geograph and attitudin segment method that healthcar strategist have use in the past to gain a better understand of their custom","ordered_present_kp":[11,120,29,493,505],"keyphrases":["market trends","predictive segmentation","healthcare","data mining","neural networks"],"prmu":["P","P","P","P","P"]} {"id":"1884","title":"Observer-based strict positive real (SPR) feedback control system design","abstract":"Presents theory for stability analysis and design for a class of observer-based feedback control systems. Relaxation of the controllability and observability conditions imposed in the Yakubovich-Kalman-Popov lemma can be made for a class of nonlinear systems described by a linear time-invariant system with a feedback-connected cone-bounded nonlinear element. It is shown how a circle-criterion approach can be used to design an observer-based state feedback control which yields a closed-loop system with specified robustness characteristics. 
The approach is relevant for design with preservation of stability when a cone-bounded nonlinearity is introduced in the feedback loop. Important applications are to be found in nonlinear control with high robustness requirements","tok_text":"observer-bas strict posit real ( spr ) feedback control system design \n present theori for stabil analysi and design for a class of observer-bas feedback control system . relax of the control and observ condit impos in the yakubovich-kalman-popov lemma can be made for a class of nonlinear system describ by a linear time-invari system with a feedback-connect cone-bound nonlinear element . it is shown how a circle-criterion approach can be use to design an observer-bas state feedback control which yield a closed-loop system with specifi robust characterist . the approach is relev for design with preserv of stabil when a cone-bound nonlinear is introduc in the feedback loop . import applic are to be found in nonlinear control with high robust requir","ordered_present_kp":[48,91,223,280,310,343,409,472,509,541,360],"keyphrases":["control system design","stability analysis","Yakubovich-Kalman-Popov lemma","nonlinear systems","linear time-invariant system","feedback-connected cone-bounded nonlinear element","cone-bounded nonlinearity","circle-criterion approach","state feedback control","closed-loop system","robustness characteristics","observer-based strict positive real feedback control system"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"187","title":"Control of a thrust-vectored flying wing: a receding horizon - LPV approach","abstract":"This paper deals with the application of receding horizon methods to hover and forward flight models of an experimental tethered flying wing developed at Caltech. The dynamics of the system are representative of a vertical landing and take off aircraft, such as a Harrier around hover, or a thrust-vectored aircraft such as F18-HARV or X-31 in forward flight. The adopted control methodology is a hybrid of receding horizon techniques and control Lyapunov function (CLF)-based ideas. First, a CLF is generated using quasi-LPV methods and then, by using the CLF as the terminal cost in the receding horizon optimization, stability is guaranteed. The main advantage of this approach is that stability can be guaranteed without imposing constraints in the on-line optimization, allowing the problem to be solved in a more efficient manner. Models of the experimental set-up are obtained for the hover and forward flight modes. Numerical simulations for different time horizons are presented to illustrate the effectiveness of the discussed methods. Specifically, it is shown that a mere upper bound on the cost-to-go is not an appropriate choice for a terminal cost, when the horizon length is short. Simulation results are presented using experimentally verified model parameters","tok_text":"control of a thrust-vector fli wing : a reced horizon - lpv approach \n thi paper deal with the applic of reced horizon method to hover and forward flight model of an experiment tether fli wing develop at caltech . the dynam of the system are repres of a vertic land and take off aircraft , such as a harrier around hover , or a thrust-vector aircraft such as f18-harv or x-31 in forward flight . the adopt control methodolog is a hybrid of reced horizon techniqu and control lyapunov function ( clf)-base idea . 
first , a clf is gener use quasi-lpv method and then , by use the clf as the termin cost in the reced horizon optim , stabil is guarante . the main advantag of thi approach is that stabil can be guarante without impos constraint in the on-lin optim , allow the problem to be solv in a more effici manner . model of the experiment set-up are obtain for the hover and forward flight mode . numer simul for differ time horizon are present to illustr the effect of the discuss method . specif , it is shown that a mere upper bound on the cost-to-go is not an appropri choic for a termin cost , when the horizon length is short . simul result are present use experiment verifi model paramet","ordered_present_kp":[139,177,204,300,328,359,371,440,539,608,900,589],"keyphrases":["forward flight models","tethered flying wing","Caltech","Harrier around hover","thrust-vectored aircraft","F18-HARV","X-31","receding horizon techniques","quasi-LPV methods","terminal cost","receding horizon optimization","numerical simulations","thrust-vectored flying wing control","receding horizon-LPV approach","hover flight models","vertical landing take off aircraft","control Lyapunov function-based ideas","stability guarantee","nonlinear system"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","R","M","R","R","M","R","M"]} {"id":"1905","title":"Tool and process improvements from MFC control system technology","abstract":"A new approach to MFC calibration links the physical parameters of nitrogen to the physical characteristics of various process gases. This precludes the conventional need for surrogate gases. What results is a physics-based tuning algorithm and enhanced digital control system that enables rearranging and gas change of digital MFCs. The end result should be better process control through more accurate gas flow. The new method also decreases the number of MFC spare parts required to back up a fab","tok_text":"tool and process improv from mfc control system technolog \n a new approach to mfc calibr link the physic paramet of nitrogen to the physic characterist of variou process gase . thi preclud the convent need for surrog gase . what result is a physics-bas tune algorithm and enhanc digit control system that enabl rearrang and ga chang of digit mfc . the end result should be better process control through more accur ga flow . the new method also decreas the number of mfc spare part requir to back up a fab","ordered_present_kp":[29,82,253,279,380,415],"keyphrases":["MFC control system technology","calibration","tuning algorithm","digital control","process control","gas flow","process gas","surrogate gas","semiconductor fab","tool technology","mass flow controller","N\/sub 2\/"],"prmu":["P","P","P","P","P","P","R","R","M","R","M","U"]} {"id":"1597","title":"Application of heuristic methods for conformance test selection","abstract":"In this paper we focus on the test selection problem. It is modeled after a real-life problem that arises in telecommunication when one has to check the reliability of an application. We apply different metaheuristics, namely Reactive Tabu Search (RTS), Genetic Algorithms (GA) and Simulated Annealing (SA) to solve the problem. We propose some modifications to the conventional schemes including an adaptive neighbourhood sampling in RTS, an adaptive variable mutation rate in GA and an adaptive variable neighbourhood structure in SA. The performance of the algorithms is evaluated in different models for existing protocols. 
Computational results show that GA and SA can provide high-quality solutions in acceptable time compared to the results of a commercial software, which makes them applicable in practical test selection","tok_text":"applic of heurist method for conform test select \n in thi paper we focu on the test select problem . it is model after a real-lif problem that aris in telecommun when one ha to check the reliabl of an applic . we appli differ metaheurist , name reactiv tabu search ( rt ) , genet algorithm ( ga ) and simul anneal ( sa ) to solv the problem . we propos some modif to the convent scheme includ an adapt neighbourhood sampl in rt , an adapt variabl mutat rate in ga and an adapt variabl neighbourhood structur in sa . the perform of the algorithm is evalu in differ model for exist protocol . comput result show that ga and sa can provid high-qual solut in accept time compar to the result of a commerci softwar , which make them applic in practic test select","ordered_present_kp":[10,79,187,226,245,274,301,396,433,471],"keyphrases":["heuristic methods","test selection problem","reliability","metaheuristics","reactive Tabu search","genetic algorithms","simulated annealing","adaptive neighbourhood sampling","adaptive variable mutation rate","adaptive variable neighbourhood structure","telecommunication conformance test selection","ISDN protocol","GSM protocol"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","M","M"]} {"id":"1696","title":"Comments on \"Frequency decomposition and computing of ultrasound medical images with wavelet packets\"","abstract":"In this paper, errors and discrepancies in the subject paper [Cincotti et al. (2002)] are highlighted. A comment, concerning the axial resolution associated to the adopted processing procedure is also reported","tok_text":"comment on \" frequenc decomposit and comput of ultrasound medic imag with wavelet packet \" \n in thi paper , error and discrep in the subject paper [ cincotti et al . ( 2002 ) ] are highlight . a comment , concern the axial resolut associ to the adopt process procedur is also report","ordered_present_kp":[47,74,13,217],"keyphrases":["frequency decomposition","ultrasound medical images","wavelet packets","axial resolution","medical diagnostic imaging"],"prmu":["P","P","P","P","M"]} {"id":"1477","title":"Ecological interface design: progress and challenges","abstract":"Ecological interface design (EID) is a theoretical framework for designing human-computer interfaces for complex socio-technical systems. Its primary aim is to support knowledge workers in adapting to change and novelty. This literature review shows that in situations requiring problem solving, EID improves performance when compared with current design approaches in industry. EID has been applied to industry-scale problems in a broad variety of application domains (e.g., process control, aviation, computer network management, software engineering, medicine, command and control, and information retrieval) and has consistently led to the identification of new information requirements. An experimental evaluation of EID using a full-fidelity simulator with professional workers has yet to be conducted, although some are planned. Several significant challenges remain as obstacles to the confident use of EID in industry. Promising paths for addressing these outstanding issues are identified. 
Actual or potential applications of this research include improving the safety and productivity of complex socio-technical systems","tok_text":"ecolog interfac design : progress and challeng \n ecolog interfac design ( eid ) is a theoret framework for design human-comput interfac for complex socio-techn system . it primari aim is to support knowledg worker in adapt to chang and novelti . thi literatur review show that in situat requir problem solv , eid improv perform when compar with current design approach in industri . eid ha been appli to industry-scal problem in a broad varieti of applic domain ( e.g. , process control , aviat , comput network manag , softwar engin , medicin , command and control , and inform retriev ) and ha consist led to the identif of new inform requir . an experiment evalu of eid use a full-fidel simul with profession worker ha yet to be conduct , although some are plan . sever signific challeng remain as obstacl to the confid use of eid in industri . promis path for address these outstand issu are identifi . actual or potenti applic of thi research includ improv the safeti and product of complex socio-techn system","ordered_present_kp":[0,114,372,977],"keyphrases":["ecological interface design","human-computer interfaces","industry","productivity","complex social technical systems","user interface","human factors"],"prmu":["P","P","P","P","M","M","U"]} {"id":"1776","title":"Job rotation in an academic library: damned if you do and damned if you don't!","abstract":"This article considers job rotation-the systematic movement of employees from one job to another-as one of the many tools within the organizational development tool kit. There is a brief consideration of useful print and Internet literature on the subject as well as a discussion of the pros and cons of job rotation. The application of job rotation methods in Ryerson University Library, a small academic library, concludes the article in order to illustrate process and insights through example","tok_text":"job rotat in an academ librari : damn if you do and damn if you do n't ! \n thi articl consid job rotation-th systemat movement of employe from one job to another-a one of the mani tool within the organiz develop tool kit . there is a brief consider of use print and internet literatur on the subject as well as a discuss of the pro and con of job rotat . the applic of job rotat method in ryerson univers librari , a small academ librari , conclud the articl in order to illustr process and insight through exampl","ordered_present_kp":[0,16,196,389],"keyphrases":["job rotation","academic library","organizational development","Ryerson University Library","systematic employee movement"],"prmu":["P","P","P","P","R"]} {"id":"1733","title":"Computing grid unlocks research","abstract":"Under the UK government's spending review in 2000 the Office of Science and Technology was allocated Pounds 98m to establish a three year e-science research and development programme. The programme has a bold vision: to change the dynamic of the way science is undertaken. The term 'e-science' was introduced by John Taylor, director general of research councils in the Office of Science and Technology. He saw many areas of science becoming increasingly reliant on new ways of collaborative, multidisciplinary, interorganisation working. E-science is intended to capture these new modes of working. There are two major components to the programme: the science, and the infrastructure to support that science. 
The infrastructure is generally referred to as the Grid. The choice of name resonates with the idea of a future in which computing resources and storage, as well as expensive scientific facilities and software, can be accessed on demand, like electricity. Open source prototypes of the middleware are available and under development as part of the e-science programme and other international efforts","tok_text":"comput grid unlock research \n under the uk govern 's spend review in 2000 the offic of scienc and technolog wa alloc pound 98 m to establish a three year e-scienc research and develop programm . the programm ha a bold vision : to chang the dynam of the way scienc is undertaken . the term ' e-scienc ' wa introduc by john taylor , director gener of research council in the offic of scienc and technolog . he saw mani area of scienc becom increasingli reliant on new way of collabor , multidisciplinari , interorganis work . e-scienc is intend to captur these new mode of work . there are two major compon to the programm : the scienc , and the infrastructur to support that scienc . the infrastructur is gener refer to as the grid . the choic of name reson with the idea of a futur in which comput resourc and storag , as well as expens scientif facil and softwar , can be access on demand , like electr . open sourc prototyp of the middlewar are avail and under develop as part of the e-scienc programm and other intern effort","ordered_present_kp":[473,791,154,933,856,906],"keyphrases":["e-science","collaboration","computing resources","software","open source prototypes","middleware","UK programme","grid computing","scientific research"],"prmu":["P","P","P","P","P","P","R","R","R"]} {"id":"1818","title":"Sliding-mode control scheme for a class of continuous chemical reactors","abstract":"The synthesis of a robust control law for regulation control of a class of relative-degree-one nonlinear systems is presented. The control design is based on a sliding-mode uncertainty estimator, developed under a framework of algebraic-differential concepts. The closed-loop stability for the underlying closed-loop system is achieved via averaging techniques. Robustness of the proposed control scheme is proved in the face of noise measurements, model uncertainties and sustained disturbances. The performance of the proposed control law is illustrated with numerical simulations, comparing the proposed controller with a well tuned PI controller","tok_text":"sliding-mod control scheme for a class of continu chemic reactor \n the synthesi of a robust control law for regul control of a class of relative-degree-on nonlinear system is present . the control design is base on a sliding-mod uncertainti estim , develop under a framework of algebraic-differenti concept . the closed-loop stabil for the underli closed-loop system is achiev via averag techniqu . robust of the propos control scheme is prove in the face of nois measur , model uncertainti and sustain disturb . 
the perform of the propos control law is illustr with numer simul , compar the propos control with a well tune pi control","ordered_present_kp":[0,42,136,217,278,313,348,381,459,473,495],"keyphrases":["sliding-mode control scheme","continuous chemical reactors","relative-degree-one nonlinear systems","sliding-mode uncertainty estimator","algebraic-differential concepts","closed-loop stability","closed-loop system","averaging techniques","noise measurements","model uncertainties","sustained disturbances","robust control law synthesis"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"1825","title":"Fuzzy logic controlled shunt active power filter for power quality improvement","abstract":"The simulation and experimental study of a fuzzy logic controlled, three-phase shunt active power filter to improve power quality by compensating harmonics and reactive power required by a nonlinear load is presented. The advantage of fuzzy control is that it is based on a linguistic description and does not require a mathematical model of the system. The fuzzy control scheme is realised on an inexpensive dedicated micro-controller (INTEL 8031) based system. The compensation process is based on sensing line currents only, an approach different from conventional methods, which require harmonics or reactive volt-ampere requirement of the load. The performance of the fuzzy logic controller is compared with a conventional PI controller. The dynamic behavior of the fuzzy controller is found to be better than the conventional PI controller. PWM pattern generation is based on carrierless hysteresis based current control to obtain the switching signals. Various simulation and experimental results are presented under steady state and transient conditions","tok_text":"fuzzi logic control shunt activ power filter for power qualiti improv \n the simul and experiment studi of a fuzzi logic control , three-phas shunt activ power filter to improv power qualiti by compens harmon and reactiv power requir by a nonlinear load is present . the advantag of fuzzi control is that it is base on a linguist descript and doe not requir a mathemat model of the system . the fuzzi control scheme is realis on an inexpens dedic micro-control ( intel 8031 ) base system . the compens process is base on sens line current onli , an approach differ from convent method , which requir harmon or reactiv volt-amper requir of the load . the perform of the fuzzi logic control is compar with a convent pi control . the dynam behavior of the fuzzi control is found to be better than the convent pi control . pwm pattern gener is base on carrierless hysteresi base current control to obtain the switch signal . variou simul and experiment result are present under steadi state and transient condit","ordered_present_kp":[130,238,446,818,847,904,49],"keyphrases":["power quality improvement","three-phase shunt active power filter","nonlinear load","micro-controller","PWM pattern generation","carrierless hysteresis based current control","switching signals","harmonics compensation","reactive power compensation","fuzzy logic control simulation","control performance"],"prmu":["P","P","P","P","P","P","P","R","R","R","R"]} {"id":"1860","title":"The art of the cross-sell [accounting software]","abstract":"With the market for accounting software nearing saturation, vendors are training resellers in the subtleties of the cross-sell. The rewards can be great. 
The key is knowing when to focus, and when to partner","tok_text":"the art of the cross-sel [ account softwar ] \n with the market for account softwar near satur , vendor are train resel in the subtleti of the cross-sel . the reward can be great . the key is know when to focu , and when to partner","ordered_present_kp":[27,113,15],"keyphrases":["cross-selling","accounting software","resellers"],"prmu":["P","P","P"]} {"id":"1512","title":"Mathematical aspects of computer-aided share trading","abstract":"We consider problems of statistical analysis of share prices and propose probabilistic characteristics to describe the price series. We discuss three methods of mathematical modelling of price series with given probabilistic characteristics","tok_text":"mathemat aspect of computer-aid share trade \n we consid problem of statist analysi of share price and propos probabilist characterist to describ the price seri . we discuss three method of mathemat model of price seri with given probabilist characterist","ordered_present_kp":[19,67,86,109,149,189],"keyphrases":["computer-aided share trading","statistical analysis","share price","probabilistic characteristics","price series","mathematical modelling"],"prmu":["P","P","P","P","P","P"]} {"id":"1557","title":"L\/sub p\/ boundedness of (C, 1) means of orthonormal expansions for general exponential weights","abstract":"Let I be a finite or infinite interval, and let W:I to (0, infinity ). Assume that W\/sup 2\/ is a weight, so that we may define orthonormal polynomials corresponding to W\/sup 2\/. For f :R to R, let s\/sub m\/ [f] denote the mth partial sum of the orthonormal expansion of f with respect to these polynomials. We investigate boundedness in weighted L\/sub p\/ spaces of the (C, 1) means 1\/n \/sub m=1\/ Sigma \/sup n\/s\/sub m\/[f]. The class of weights W\/sup 2\/ considered includes even and noneven exponential weights","tok_text":"l \/ sub p\/ bounded of ( c , 1 ) mean of orthonorm expans for gener exponenti weight \n let i be a finit or infinit interv , and let w : i to ( 0 , infin ) . assum that w \/ sup 2\/ is a weight , so that we may defin orthonorm polynomi correspond to w \/ sup 2\/. for f : r to r , let s \/ sub m\/ [ f ] denot the mth partial sum of the orthonorm expans of f with respect to these polynomi . we investig bounded in weight l \/ sub p\/ space of the ( c , 1 ) mean 1 \/ n \/sub m=1\/ sigma \/sup n \/ s \/ sub m\/[f ] . the class of weight w \/ sup 2\/ consid includ even and noneven exponenti weight","ordered_present_kp":[11,40,61,106,213,306],"keyphrases":["boundedness","orthonormal expansions","general exponential weights","infinite interval","orthonormal polynomials","mth partial sum","finite interval"],"prmu":["P","P","P","P","P","P","R"]} {"id":"147","title":"Embedded Linux and the law","abstract":"The rising popularity of Linux, combined with perceived cost savings, has spurred many embedded developers to consider a real-time Linux variant as an alternative to a traditional RTOS. The paper presents the legal implications for the proprietary parts of firmware","tok_text":"embed linux and the law \n the rise popular of linux , combin with perceiv cost save , ha spur mani embed develop to consid a real-tim linux variant as an altern to a tradit rto . 
the paper present the legal implic for the proprietari part of firmwar","ordered_present_kp":[0,125,201],"keyphrases":["embedded Linux","real-time Linux","legal implications","proprietary firmware"],"prmu":["P","P","P","R"]} {"id":"15","title":"Optimal and safe ship control as a multi-step matrix game","abstract":"The paper describes the process of the safe ship control in a collision situation using a differential game model with j participants. As an approximated model of the manoeuvring process, a model of a multi-step matrix game is adopted here. RISKTRAJ computer program is designed in the Matlab language in order to determine the ship's trajectory as a certain sequence of manoeuvres executed by altering the course and speed, in the online navigator decision support system. These considerations are illustrated with examples of a computer simulation of the safe ship's trajectories in real situation at sea when passing twelve of the encountered objects","tok_text":"optim and safe ship control as a multi-step matrix game \n the paper describ the process of the safe ship control in a collis situat use a differenti game model with j particip . as an approxim model of the manoeuvr process , a model of a multi-step matrix game is adopt here . risktraj comput program is design in the matlab languag in order to determin the ship 's trajectori as a certain sequenc of manoeuvr execut by alter the cours and speed , in the onlin navig decis support system . these consider are illustr with exampl of a comput simul of the safe ship 's trajectori in real situat at sea when pass twelv of the encount object","ordered_present_kp":[15,277,138,467,455],"keyphrases":["ship control","differential game","RISKTRAJ computer program","online navigation","decision support system","collision avoidance","multistep matrix game","trajectory tracking","optimal control"],"prmu":["P","P","P","P","P","M","M","M","R"]} {"id":"1613","title":"Current waveform control of a high-power-factor rectifier circuit for harmonic suppression of voltage and current in a distribution system","abstract":"This paper presents the input current waveform control of the rectifier circuit which realizes simultaneously the high input power factor and the harmonics suppression of the receiving-end voltage and the source current under the distorted receiving-end voltage. The proposed input current waveform includes the harmonic components which are in phase with the receiving-end voltage harmonics. The control parameter in the proposed waveform is designed by examining the characteristics of both the harmonic suppression effect in the distribution system and the input power factor of the rectifier circuit. The effectiveness of the proposed current waveform has been confirmed experimentally","tok_text":"current waveform control of a high-power-factor rectifi circuit for harmon suppress of voltag and current in a distribut system \n thi paper present the input current waveform control of the rectifi circuit which realiz simultan the high input power factor and the harmon suppress of the receiving-end voltag and the sourc current under the distort receiving-end voltag . the propos input current waveform includ the harmon compon which are in phase with the receiving-end voltag harmon . the control paramet in the propos waveform is design by examin the characterist of both the harmon suppress effect in the distribut system and the input power factor of the rectifi circuit . 
the effect of the propos current waveform ha been confirm experiment","ordered_present_kp":[152,232,287,316,340,152,458,30,111],"keyphrases":["high-power-factor rectifier circuit","distribution system","input current waveform control","input current waveform","high input power factor","receiving-end voltage","source current","distorted receiving-end voltage","receiving-end voltage harmonics","harmonic voltage suppression","harmonic current suppression","200 V","60 Hz","8 kVA","2 kW"],"prmu":["P","P","P","P","P","P","P","P","P","R","R","U","U","U","U"]} {"id":"1656","title":"Impact of user satisfaction and trust on virtual team members","abstract":"Pressured by the growing need for fast response times, mass customization, and globalization, many organizations are turning to flexible organizational forms, such as virtual teams. Virtual teams consist of cooperative relationships supported by information technology to overcome limitations of time and\/or location. Virtual teams require their members to rely heavily on the use of information technology and trust in coworkers. This study investigates the impacts that the reliance on information technology (operationalized in our study via the user satisfaction construct) and trust have on the job satisfaction of virtual team members. The study findings reveal that both user satisfaction and trust are positively related to job satisfaction in virtual teams, while system use was not found to play a significant role. These findings emphasize that organizations seeking the benefits of flexible, IT-enabled virtual teams must consider both the level of trust among colleagues, and the users' satisfaction with the information technology on which virtual teams rely","tok_text":"impact of user satisfact and trust on virtual team member \n pressur by the grow need for fast respons time , mass custom , and global , mani organ are turn to flexibl organiz form , such as virtual team . virtual team consist of cooper relationship support by inform technolog to overcom limit of time and\/or locat . virtual team requir their member to reli heavili on the use of inform technolog and trust in cowork . thi studi investig the impact that the relianc on inform technolog ( operation in our studi via the user satisfact construct ) and trust have on the job satisfact of virtual team member . the studi find reveal that both user satisfact and trust are posit relat to job satisfact in virtual team , while system use wa not found to play a signific role . these find emphas that organ seek the benefit of flexibl , it-en virtual team must consid both the level of trust among colleagu , and the user ' satisfact with the inform technolog on which virtual team reli","ordered_present_kp":[260,29,10,568,38],"keyphrases":["user satisfaction","trust","virtual team members","information technology","job satisfaction","IT"],"prmu":["P","P","P","P","P","U"]} {"id":"1469","title":"The necessity of real-time-fact and fiction in digital reference systems","abstract":"Current discussions and trends in digital reference have emphasized the use of real-time digital reference services. Recent articles have questioned both the utility and use of asynchronous services such as e-mail. 
This article uses data from the AskERIC digital reference service to demonstrate that asynchronous services are not only useful and used, but may have greater utility than real-time systems","tok_text":"the necess of real-time-fact and fiction in digit refer system \n current discuss and trend in digit refer have emphas the use of real-tim digit refer servic . recent articl have question both the util and use of asynchron servic such as e-mail . thi articl use data from the asker digit refer servic to demonstr that asynchron servic are not onli use and use , but may have greater util than real-tim system","ordered_present_kp":[129,212,237,275],"keyphrases":["real-time digital reference services","asynchronous services","e-mail","AskERIC","personalized Internet-based service","digital library"],"prmu":["P","P","P","P","M","M"]} {"id":"1795","title":"New voice over Internet protocol technique with hierarchical data security protection","abstract":"The authors propose a voice over Internet protocol (VoIP) technique with a new hierarchical data security protection (HDSP) scheme. The proposed HDSP scheme can maintain the voice quality degraded from packet loss and preserve high data security. It performs both the data inter-leaving on the inter-frame of voice for achieving better error recovery of voices suffering from continuous packet loss, and the data encryption on the intra-frame of voice for achieving high data security, which are controlled by a random bit-string sequence generated from a chaotic system. To demonstrate the performance of the proposed HDSP scheme, we have successfully verified and analysed the proposed approach through software simulation and statistical measures on several test voices","tok_text":"new voic over internet protocol techniqu with hierarch data secur protect \n the author propos a voic over internet protocol ( voip ) techniqu with a new hierarch data secur protect ( hdsp ) scheme . the propos hdsp scheme can maintain the voic qualiti degrad from packet loss and preserv high data secur . it perform both the data inter-leav on the inter-fram of voic for achiev better error recoveri of voic suffer from continu packet loss , and the data encrypt on the intra-fram of voic for achiev high data secur , which are control by a random bit-str sequenc gener from a chaotic system . to demonstr the perform of the propos hdsp scheme , we have success verifi and analys the propos approach through softwar simul and statist measur on sever test voic","ordered_present_kp":[4,46,126,210,264,288,451,542,578,709,727],"keyphrases":["voice over Internet protocol","hierarchical data security protection","VoIP","HDSP scheme","packet loss","high data security","data encryption","random bit-string sequence","chaotic system","software simulation","statistical measures","data interleaving","packet voice communications"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","M","M"]} {"id":"1768","title":"Isogenous of the elliptic curves over the rationals","abstract":"An elliptic curve is a pair (E, O), where E is a smooth projective curve of genus 1 and O is a point of E, called the point at infinity. Every elliptic curve can be given by a Weierstrass equation E : y\/sup 2\/ + a\/sub 1\/xy + a\/sub 3\/y = x\/sup 3\/ + a\/sub 2\/x\/sup 2\/ + a\/sub 4\/x + a\/sub 6\/. Let Q be the set of rationals. E is said to be defined over Q if the coefficients a\/sub i\/, i = 1, 2, 3, 4, 6 are rationals and O is defined over Q. 
Let E\/Q be an elliptic curve and let E(Q)\/sub tors\/ be the torsion group of points of E defined over Q. The theorem of Mazur asserts that E(Q)\/sub tors\/ is one of the following 15 groups E(Q)\/sub tors\/ {Z\/mZ, Z\/mZ * Z\/2mZ, m, = 1, 2, ..., 10, 12, m = 1, 2, 3, 4. We say that an elliptic curve E'\/Q is isogenous to the elliptic curve E if there is an isogeny, i.e. a morphism phi : E to E' such that phi (O) = O, where O is the point at infinity. We give an explicit model of all elliptic curves for which E(Q)\/sub tors\/ is in the form Z\/mZ where m = 9,10,12 or Z\/2Z * Z\/2mZ where m = 4, according to Mazur's theorem. Moreover, for every family of such elliptic curves, we give an explicit model of all their isogenous curves with cyclic kernels consisting of rational points","tok_text":"isogen of the ellipt curv over the ration \n an ellipt curv is a pair ( e , o ) , where e is a smooth project curv of genu 1 and o is a point of e , call the point at infin . everi ellipt curv can be given by a weierstrass equat e : y \/ sup 2\/ + a \/ sub 1 \/ xy + a \/ sub 3 \/ y = x \/ sup 3\/ + a \/ sub 2 \/ x \/ sup 2\/ + a \/ sub 4 \/ x + a \/ sub 6\/. let q be the set of ration . e is said to be defin over q if the coeffici a \/ sub i\/ , i = 1 , 2 , 3 , 4 , 6 are ration and o is defin over q. let e \/ q be an ellipt curv and let e(q)\/sub tors\/ be the torsion group of point of e defin over q. the theorem of mazur assert that e(q)\/sub tors\/ is one of the follow 15 group e(q)\/sub tors\/ { z \/ mz , z \/ mz * z\/2mz , m , = 1 , 2 , ... , 10 , 12 , m = 1 , 2 , 3 , 4 . we say that an ellipt curv e'\/q is isogen to the ellipt curv e if there is an isogeni , i.e. a morphism phi : e to e ' such that phi ( o ) = o , where o is the point at infin . we give an explicit model of all ellipt curv for which e(q)\/sub tors\/ is in the form z \/ mz where m = 9,10,12 or z\/2z * z\/2mz where m = 4 , accord to mazur 's theorem . moreov , for everi famili of such ellipt curv , we give an explicit model of all their isogen curv with cyclic kernel consist of ration point","ordered_present_kp":[35,94,210,946,1085,1208],"keyphrases":["rationals","smooth projective curve","Weierstrass equation","explicit model","Mazur's theorem","cyclic kernels","elliptic curves isogenous"],"prmu":["P","P","P","P","P","P","R"]} {"id":"1843","title":"Supply chain infrastructures: system integration and information sharing","abstract":"The need for supply chain integration (SCI) methodologies has been increasing as a consequence of the globalization of production and sales, and the advancement of enabling information technologies. In this paper, we describe our experience with implementing and modeling SCIs. We present the integration architecture and the software components of our prototype implementation. We then discuss a variety of information sharing methodologies. Then, within the framework of a multi-echelon supply chain process model spanning multiple organizations, we summarize research on the benefits of intraorganizational knowledge sharing, and we discuss performance scalability","tok_text":"suppli chain infrastructur : system integr and inform share \n the need for suppli chain integr ( sci ) methodolog ha been increas as a consequ of the global of product and sale , and the advanc of enabl inform technolog . in thi paper , we describ our experi with implement and model sci . we present the integr architectur and the softwar compon of our prototyp implement . we then discuss a varieti of inform share methodolog . 
then , within the framework of a multi-echelon suppli chain process model span multipl organ , we summar research on the benefit of intraorganiz knowledg share , and we discuss perform scalabl","ordered_present_kp":[75,0,29,47,150,160,172,332,463,509,562,607],"keyphrases":["supply chain infrastructures","system integration","information sharing","supply chain integration","globalization","production","sales","software components","multi-echelon supply chain process model","multiple organizations","intraorganizational knowledge sharing","performance scalability"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1806","title":"Fast and efficient algorithm for the multiplierless realisation of linear DSP transforms","abstract":"A fast algorithm having a pseudopolynomial run-time and memory requirement in the worst case is developed to generate multiplierless architectures at all wordlengths for constant multiplications in linear DSP transforms. It is also re-emphasised that indefinitely reducing operators for multiplierless architectures is not sufficient to reduce the final chip area. For a major reduction, techniques like resource folding must be used. Simple techniques for improving the results are also presented","tok_text":"fast and effici algorithm for the multiplierless realis of linear dsp transform \n a fast algorithm have a pseudopolynomi run-tim and memori requir in the worst case is develop to gener multiplierless architectur at all wordlength for constant multipl in linear dsp transform . it is also re-emphasis that indefinit reduc oper for multiplierless architectur is not suffici to reduc the final chip area . for a major reduct , techniqu like resourc fold must be use . simpl techniqu for improv the result are also present","ordered_present_kp":[34,59,106,133,219,234,385,438],"keyphrases":["multiplierless realisation","linear DSP transforms","pseudopolynomial run-time","memory requirement","wordlengths","constant multiplications","final chip area","resource folding"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"1494","title":"Where have all the PC makers gone?","abstract":"PC makers are dwindling. If you are planning to make a PC purchase soon, here are a few things to look out for before you buy","tok_text":"where have all the pc maker gone ? \n pc maker are dwindl . if you are plan to make a pc purchas soon , here are a few thing to look out for befor you buy","ordered_present_kp":[85,19],"keyphrases":["PC makers","PC purchase"],"prmu":["P","P"]} {"id":"1675","title":"An operations research approach to the problem of the sugar cane selection","abstract":"Selection for superior clones is the most important aspect of sugar cane improvement programs, and is a long and expensive process. While studies have investigated different components of selection independently, there has not been a whole system approach to improve the process. This study observes the problem as an integrated system, where if one parameter changes the state of the whole system changes. A computer based stochastic simulation model that accurately represents the selection was developed. The paper describes the simulation model, showing its accuracy as well as how a combination of dynamic programming and branch and bound can be applied to the model to optimise the selection system, giving a new application of these techniques. 
The model can be directly applied to any region targeted by sugar cane breeding programs or to other clonally propagated crops","tok_text":"an oper research approach to the problem of the sugar cane select \n select for superior clone is the most import aspect of sugar cane improv program , and is a long and expens process . while studi have investig differ compon of select independ , there ha not been a whole system approach to improv the process . thi studi observ the problem as an integr system , where if one paramet chang the state of the whole system chang . a comput base stochast simul model that accur repres the select wa develop . the paper describ the simul model , show it accuraci as well as how a combin of dynam program and branch and bound can be appli to the model to optimis the select system , give a new applic of these techniqu . the model can be directli appli to ani region target by sugar cane breed program or to other clonal propag crop","ordered_present_kp":[3,48,79,134,431,586,604,783,809],"keyphrases":["operations research approach","sugar cane selection","superior clones","improvement programs","computer based stochastic simulation model","dynamic programming","branch and bound","breeding programs","clonally propagated crops","agriculture"],"prmu":["P","P","P","P","P","P","P","P","P","U"]} {"id":"1630","title":"Digital-domain self-calibration technique for video-rate pipeline A\/D converters using Gaussian white noise","abstract":"A digital-domain self-calibration technique for video-rate pipeline A\/D converters based on a Gaussian white noise input signal is presented. The proposed algorithm is simple and efficient. A design example is shown to illustrate that the overall linearity of a pipeline ADC can be highly improved using this technique","tok_text":"digital-domain self-calibr techniqu for video-r pipelin a \/ d convert use gaussian white nois \n a digital-domain self-calibr techniqu for video-r pipelin a \/ d convert base on a gaussian white nois input signal is present . the propos algorithm is simpl and effici . a design exampl is shown to illustr that the overal linear of a pipelin adc can be highli improv use thi techniqu","ordered_present_kp":[0,40,178],"keyphrases":["digital-domain self-calibration technique","video-rate pipeline A\/D converters","Gaussian white noise input signal","pipeline ADC linearity"],"prmu":["P","P","P","R"]} {"id":"1589","title":"View from the top [workflow & content management]","abstract":"International law firm Linklaters has installed a global document and content management system that is accessible to clients and which has helped it move online","tok_text":"view from the top [ workflow & content manag ] \n intern law firm linklat ha instal a global document and content manag system that is access to client and which ha help it move onlin","ordered_present_kp":[49,65,31,177],"keyphrases":["content management","international law firm","Linklaters","online","document management"],"prmu":["P","P","P","P","R"]} {"id":"1574","title":"No-go areas? [content management]","abstract":"Alex Fry looks at how content management systems can be used to ensure website access for one important customer group, the disabled","tok_text":"no-go area ? 
[ content manag ] \n alex fri look at how content manag system can be use to ensur websit access for one import custom group , the disabl","ordered_present_kp":[143,54,95],"keyphrases":["content management systems","website access","disabled"],"prmu":["P","P","P"]} {"id":"1531","title":"Average optimization of the approximate solution of operator equations and its application","abstract":"In this paper, a definition of the optimization of operator equations in the average case setting is given. And the general result about the relevant optimization problem is obtained. This result is applied to the optimization of approximate solution of some classes of integral equations","tok_text":"averag optim of the approxim solut of oper equat and it applic \n in thi paper , a definit of the optim of oper equat in the averag case set is given . and the gener result about the relev optim problem is obtain . thi result is appli to the optim of approxim solut of some class of integr equat","ordered_present_kp":[38,7,124,282],"keyphrases":["optimization","operator equations","average case setting","integral equations","Gaussian measure","integral n-width"],"prmu":["P","P","P","P","U","M"]} {"id":"164","title":"Plug-ins for critical media literacy: a collaborative program","abstract":"Information literacy is important in academic and other libraries. The paper looks at whether it would be more useful to librarians and to instructors, as well as the students, to deal with information-literacy skill levels of students beginning their academic careers, rather than checking them at the end. Approaching the situation with an eye toward the broader scope of critical media literacy opens the discussion beyond a skills inventory to the broader range of intellectual activity","tok_text":"plug-in for critic media literaci : a collabor program \n inform literaci is import in academ and other librari . the paper look at whether it would be more use to librarian and to instructor , as well as the student , to deal with information-literaci skill level of student begin their academ career , rather than check them at the end . 
approach the situat with an eye toward the broader scope of critic media literaci open the discuss beyond a skill inventori to the broader rang of intellectu activ","ordered_present_kp":[57,12,38,180],"keyphrases":["critical media literacy","collaborative program","information literacy","instructors","academic libraries"],"prmu":["P","P","P","P","R"]} {"id":"1688","title":"Connecting the business without busting the budget","abstract":"The \"multi-channel content delivery\" model (MCCD) might be a new concept to you, but it is already beginning to replace traditional methods of business communications, print and content delivery, argues Darren Atkinson, CTO, FormScape","tok_text":"connect the busi without bust the budget \n the \" multi-channel content deliveri \" model ( mccd ) might be a new concept to you , but it is alreadi begin to replac tradit method of busi commun , print and content deliveri , argu darren atkinson , cto , formscap","ordered_present_kp":[49,252],"keyphrases":["multi-channel content delivery","FormScape","documents","distributed output management","business process management","archive","retrieval","content management"],"prmu":["P","P","U","U","M","U","U","M"]} {"id":"1549","title":"Riccati-based preconditioner for computing invariant subspaces of large matrices","abstract":"This paper introduces and analyzes the convergence properties of a method that computes an approximation to the invariant subspace associated with a group of eigenvalues of a large not necessarily diagonalizable matrix. The method belongs to the family of projection type methods. At each step, it refines the approximate invariant subspace using a linearized Riccati's equation which turns out to be the block analogue of the correction used in the Jacobi-Davidson method. The analysis conducted in this paper shows that the method converges at a rate quasi-quadratic provided that the approximate invariant subspace is close to the exact one. The implementation of the method based on multigrid techniques is also discussed and numerical experiments are reported","tok_text":"riccati-bas precondition for comput invari subspac of larg matric \n thi paper introduc and analyz the converg properti of a method that comput an approxim to the invari subspac associ with a group of eigenvalu of a larg not necessarili diagonaliz matrix . the method belong to the famili of project type method . at each step , it refin the approxim invari subspac use a linear riccati 's equat which turn out to be the block analogu of the correct use in the jacobi-davidson method . the analysi conduct in thi paper show that the method converg at a rate quasi-quadrat provid that the approxim invari subspac is close to the exact one . the implement of the method base on multigrid techniqu is also discuss and numer experi are report","ordered_present_kp":[0,36,54,200,236,291,460,675],"keyphrases":["Riccati-based preconditioner","invariant subspaces","large matrices","eigenvalues","diagonalizable matrix","projection type methods","Jacobi-Davidson method","multigrid techniques"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"159","title":"An intelligent system combining different resource-bounded reasoning techniques","abstract":"In this paper, PRIMES (Progressive Reasoning and Intelligent multiple MEthods System), a new architecture for resource-bounded reasoning that combines a form of progressive reasoning and the so-called multiple methods approach is presented. 
Each time-critical reasoning unit is designed in such a way that it delivers an approximate result in time whenever an overload or a failure prevents the system from producing the most accurate result. Indeed, reasoning units use approximate processing based on two salient features. First, an incremental processing unit constructs an approximate solution quickly and then refines it incrementally. Second, a multiple methods approach proposes different alternatives to solve the problem, each of them being selected according to the available resources. In allowing several resource-bounded reasoning paradigms to be combined, we hope to extend their actual scope to cover more real-world application domains","tok_text":"an intellig system combin differ resource-bound reason techniqu \n in thi paper , prime ( progress reason and intellig multipl method system ) , a new architectur for resource-bound reason that combin a form of progress reason and the so-cal multipl method approach is present . each time-crit reason unit is design in such a way that it deliv an approxim result in time whenev an overload or a failur prevent the system from produc the most accur result . inde , reason unit use approxim process base on two salient featur . first , an increment process unit construct an approxim solut quickli and then refin it increment . second , a multipl method approach propos differ altern to solv the problem , each of them be select accord to the avail resourc . in allow sever resource-bound reason paradigm to be combin , we hope to extend their actual scope to cover more real-world applic domain","ordered_present_kp":[33,109,283,479,81,89],"keyphrases":["resource-bounded reasoning techniques","PRIMES","progressive reasoning","intelligent multiple methods system","time-critical reasoning unit","approximate processing","complex systems","real-time performance"],"prmu":["P","P","P","P","P","P","M","U"]} {"id":"1648","title":"Spatial solutions [office furniture]","abstract":"Take the stress out of the office by considering the design of furniture and staff needs, before major buying decisions","tok_text":"spatial solut [ offic furnitur ] \n take the stress out of the offic by consid the design of furnitur and staff need , befor major buy decis","ordered_present_kp":[16,105,130],"keyphrases":["office furniture","staff needs","buying decisions"],"prmu":["P","P","P"]} {"id":"1926","title":"Simulation of ecological and economical structural-type functions","abstract":"An optimization approach to the simulation of ecological and economical structural-type functions is proposed. A methodology for construction of such functions is created in an explicit analytical form","tok_text":"simul of ecolog and econom structural-typ function \n an optim approach to the simul of ecolog and econom structural-typ function is propos . a methodolog for construct of such function is creat in an explicit analyt form","ordered_present_kp":[20,0,200],"keyphrases":["simulation","economical structural-type functions","explicit analytical form","optimisation approach"],"prmu":["P","P","P","M"]} {"id":"1710","title":"VONNA(HBP): a multimedia learning package on hotel budget planning","abstract":"In this paper, a new learning package, VONNA(HBP), which provides an interactive and online environment for novices to study and practice hotel budget planning, is introduced. Its design philosophy will be discussed thoughtfully with special focus on how to make use of the multimedia and Internet. 
According to literatures, learning packages are faced to be more effective in delivering teaching material. Researchers indicate that students using a self-paced learning package score higher than in a traditional classroom setting. Moreover, the learning package provides different scenarios for students to explore themselves in a practical environment and is more cost effective and systematic than lectures. Currently, most learning packages in hotel education are not implemented using multimedia with Internet access. Our paper describes a new learning package that fills the gaps. VONNA(HBP) requires participants to investigate operational budgets on various areas such as sales levels, payroll, inventory level, promotion strategies, and facilities planning, etc. Eventually, the students\/novices are required to practice their skills in a comprehensive case about a hypothetical hotel. They need to solve managerial problems by a combination of budgetary planning on human resources, staff training programmes, facilities' maintenance and replacement, or promotion schemes. Analytical tools are available for students\/novices to judge an appropriate decision in handling constrained resources","tok_text":"vonna(hbp ): a multimedia learn packag on hotel budget plan \n in thi paper , a new learn packag , vonna(hbp ) , which provid an interact and onlin environ for novic to studi and practic hotel budget plan , is introduc . it design philosophi will be discuss thought with special focu on how to make use of the multimedia and internet . accord to literatur , learn packag are face to be more effect in deliv teach materi . research indic that student use a self-pac learn packag score higher than in a tradit classroom set . moreov , the learn packag provid differ scenario for student to explor themselv in a practic environ and is more cost effect and systemat than lectur . current , most learn packag in hotel educ are not implement use multimedia with internet access . our paper describ a new learn packag that fill the gap . vonna(hbp ) requir particip to investig oper budget on variou area such as sale level , payrol , inventori level , promot strategi , and facil plan , etc . eventu , the student \/ novic are requir to practic their skill in a comprehens case about a hypothet hotel . they need to solv manageri problem by a combin of budgetari plan on human resourc , staff train programm , facil ' mainten and replac , or promot scheme . analyt tool are avail for student \/ novic to judg an appropri decis in handl constrain resourc","ordered_present_kp":[15,0,42,324,455,706,905,918,927,945,967,1113,1163,1179],"keyphrases":["VONNA(HBP)","multimedia learning package","hotel budget planning","Internet","self-paced learning package","hotel education","sales","payroll","inventory","promotion strategies","facilities planning","managerial problems","human resources","staff training programmes","interactive online environment","teaching material delivery","facility maintenance","facility replacement","constrained resource handling"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","M","R","R","R"]} {"id":"1755","title":"Theta functions with harmonic coefficients over number fields","abstract":"We investigate theta functions attached to quadratic forms over a number field K. We establish a functional equation by regarding the theta functions as specializations of symplectic theta functions. 
By applying a differential operator to the functional equation, we show how theta functions with harmonic coefficients over K behave under modular transformations","tok_text":"theta function with harmon coeffici over number field \n we investig theta function attach to quadrat form over a number field k. we establish a function equat by regard the theta function as special of symplect theta function . by appli a differenti oper to the function equat , we show how theta function with harmon coeffici over k behav under modular transform","ordered_present_kp":[20,239,346,41,93,144,202],"keyphrases":["harmonic coefficients","number fields","quadratic forms","functional equation","symplectic theta functions","differential operator","modular transformations"],"prmu":["P","P","P","P","P","P","P"]} {"id":"1883","title":"Analysis of exclusively kinetic two-link underactuated mechanical systems","abstract":"Analysis of exclusively kinetic two-link underactuated mechanical systems is undertaken. It is first shown that such systems are not full-state feedback linearizable around any equilibrium point. Also, the equilibrium points for which the system is small-time locally controllable (STLC) is at most a one-dimensional submanifold. A concept less restrictive than STLC, termed the small-time local output controllability (STLOC) is introduced, the satisfaction of which guarantees that a chosen configuration output can be controlled at its desired value. It is shown that the class of systems considered is STLOC, if the inertial coupling between the input and output is nonzero. Also, in such a case, the system is nonminimum phase. An example section illustrates all the results presented","tok_text":"analysi of exclus kinet two-link underactu mechan system \n analysi of exclus kinet two-link underactu mechan system is undertaken . it is first shown that such system are not full-stat feedback lineariz around ani equilibrium point . also , the equilibrium point for which the system is small-tim local control ( stlc ) is at most a one-dimension submanifold . a concept less restrict than stlc , term the small-tim local output control ( stloc ) is introduc , the satisfact of which guarante that a chosen configur output can be control at it desir valu . it is shown that the class of system consid is stloc , if the inerti coupl between the input and output is nonzero . also , in such a case , the system is nonminimum phase . an exampl section illustr all the result present","ordered_present_kp":[11,214,333,406,712],"keyphrases":["exclusively kinetic two-link underactuated mechanical systems","equilibrium points","one-dimensional submanifold","small-time local output controllability","nonminimum phase","small-time locally controllable system"],"prmu":["P","P","P","P","P","R"]} {"id":"180","title":"Problems with my PDA","abstract":"Tom Berry has lost his PDA, and now he has an even better understanding of the risks and benefits of working on the move","tok_text":"problem with my pda \n tom berri ha lost hi pda , and now he ha an even better understand of the risk and benefit of work on the move","ordered_present_kp":[16,96,105],"keyphrases":["PDA","risks","benefits","mobile technology"],"prmu":["P","P","P","U"]} {"id":"1902","title":"Engineering plug-in software components to support collaborative work","abstract":"Many software applications require co-operative work support, including collaborative editing, group awareness, versioning, messaging and automated notification and co-ordination agents. 
Most approaches hard-code such facilities into applications, with fixed functionality and limited ability to reuse groupware implementations. We describe our recent work in seamlessly adding such capabilities to component-based applications via a set of collaborative work-supporting plug-in software components. We describe a variety of applications of this technique, along with descriptions of the novel architecture, user interface adaptation and implementation techniques for the collaborative work-supporting components that we have developed. We report on our experiences to date with this method of supporting collaborative work enhancement of component-based systems, and discuss the advantages of our approach over conventional techniques","tok_text":"engin plug-in softwar compon to support collabor work \n mani softwar applic requir co-op work support , includ collabor edit , group awar , version , messag and autom notif and co-ordin agent . most approach hard-cod such facil into applic , with fix function and limit abil to reus groupwar implement . we describ our recent work in seamlessli ad such capabl to component-bas applic via a set of collabor work-support plug-in softwar compon . we describ a varieti of applic of thi techniqu , along with descript of the novel architectur , user interfac adapt and implement techniqu for the collabor work-support compon that we have develop . we report on our experi to date with thi method of support collabor work enhanc of component-bas system , and discuss the advantag of our approach over convent techniqu","ordered_present_kp":[61,83,111,127,140,150,161,6,283],"keyphrases":["plug-in software components","software applications","co-operative work support","collaborative editing","group awareness","versioning","messaging","automated notification","groupware","collaborative work tools"],"prmu":["P","P","P","P","P","P","P","P","P","M"]} {"id":"1590","title":"Holding on [workflow & content management]","abstract":"Marc Fresko of Cornwell Management Consultants says 'think ahead' when developing your electronic records management policy","tok_text":"hold on [ workflow & content manag ] \n marc fresko of cornwel manag consult say ' think ahead ' when develop your electron record manag polici","ordered_present_kp":[54,114],"keyphrases":["Cornwell Management Consultants","electronic records management policy"],"prmu":["P","P"]} {"id":"1629","title":"Robot trajectory control using neural networks","abstract":"The use of a new type of neural network (NN) for controlling the trajectory of a robot is discussed. A control system is described which comprises an NN-based controller and a fixed-gain feedback controller. The NN-based controller employs a modified recurrent NN, the weights of which are obtained by training another NN to identify online the inverse dynamics of the robot. The work has confirmed the superiority of the proposed NN-based control system in rejecting large disturbances","tok_text":"robot trajectori control use neural network \n the use of a new type of neural network ( nn ) for control the trajectori of a robot is discuss . a control system is describ which compris an nn-base control and a fixed-gain feedback control . the nn-base control employ a modifi recurr nn , the weight of which are obtain by train anoth nn to identifi onlin the invers dynam of the robot . 
the work ha confirm the superior of the propos nn-base control system in reject larg disturb","ordered_present_kp":[0,29,146,211],"keyphrases":["robot trajectory control","neural networks","control system","fixed-gain feedback controller","neural network-based controller","modified recurrent neural network","neural network training","robot inverse dynamics","large disturbance rejection","robot manipulators","time-varying nonlinear multivariable plant","fourth-order Runge-Kutta algorithm"],"prmu":["P","P","P","P","M","R","R","R","R","M","U","U"]} {"id":"1528","title":"DEVS simulation of distributed intrusion detection systems","abstract":"An intrusion detection system (IDS) attempts to identify unauthorized use, misuse, and abuse of computer and network systems. As intrusions become more sophisticated, dealing with them moves beyond the scope of one IDS. The need arises for systems to cooperate with one another, to manage diverse attacks across networks. The feature of recent attacks is that the packet delivery is moderately slow, and the attack sources and attack targets are distributed. These attacks are called \"stealthy attacks.\" To detect these attacks, the deployment of distributed IDSs is needed. In such an environment, the ability of an IDS to share advanced information about these attacks is especially important. In this research, the IDS model exploits blacklist facts to detect the attacks that are based on either slow or highly distributed packets. To maintain the valid blacklist facts in the knowledge base of each IDS, the model should communicate with the other IDSs. When attack level goes beyond the interaction threshold, ID agents send interaction messages to ID agents in other hosts. Each agent model is developed as an interruptible atomic-expert model in which the expert system is embedded as a model component","tok_text":"dev simul of distribut intrus detect system \n an intrus detect system ( id ) attempt to identifi unauthor use , misus , and abus of comput and network system . as intrus becom more sophist , deal with them move beyond the scope of one id . the need aris for system to cooper with one anoth , to manag divers attack across network . the featur of recent attack is that the packet deliveri is moder slow , and the attack sourc and attack target are distribut . these attack are call \" stealthi attack . \" to detect these attack , the deploy of distribut idss is need . in such an environ , the abil of an id to share advanc inform about these attack is especi import . in thi research , the id model exploit blacklist fact to detect the attack that are base on either slow or highli distribut packet . to maintain the valid blacklist fact in the knowledg base of each id , the model should commun with the other idss . when attack level goe beyond the interact threshold , id agent send interact messag to id agent in other host . each agent model is develop as an interrupt atomic-expert model in which the expert system is embed as a model compon","ordered_present_kp":[23,72,23,1106,13],"keyphrases":["distributed intrusion detection system","intrusion detection system","intrusions","IDS","expert system","cooperative intrusion detection","warning threshold"],"prmu":["P","P","P","P","P","R","M"]} {"id":"1470","title":"When reference works are not books-the new edition of the Guide to Reference Books","abstract":"The author considers the history of the Guide to Reference Books (GRB) and its importance in librarianship. 
He discusses the ways in which the new edition is taking advantage of changing times. GRB has become a cornerstone of the literature of U.S. librarianship. The biggest change GRB will undergo to become GRS (Guide to Reference Sources) will be designing it primarily as a Web product","tok_text":"when refer work are not books-th new edit of the guid to refer book \n the author consid the histori of the guid to refer book ( grb ) and it import in librarianship . he discuss the way in which the new edit is take advantag of chang time . grb ha becom a cornerston of the literatur of u.s. librarianship . the biggest chang grb will undergo to becom gr ( guid to refer sourc ) will be design it primarili as a web product","ordered_present_kp":[5,49,92,151,128,128,357,412],"keyphrases":["reference works","Guide to Reference Books","history","GRB","GRS","librarianship","Guide to Reference Sources","Web product","Internet"],"prmu":["P","P","P","P","P","P","P","P","U"]} {"id":"1734","title":"Going electronic [auditing]","abstract":"A study group examines the issues auditors face in gathering electronic information as evidence and its impact on the audit","tok_text":"go electron [ audit ] \n a studi group examin the issu auditor face in gather electron inform as evid and it impact on the audit","ordered_present_kp":[14,77],"keyphrases":["auditing","electronic information","assurance standards","audit evidence"],"prmu":["P","P","U","R"]} {"id":"1771","title":"Quadratic Gauss sums over finite commutative rings","abstract":"This article explicitly determines the quadratic Gauss sum over finite commutative rings","tok_text":"quadrat gauss sum over finit commut ring \n thi articl explicitli determin the quadrat gauss sum over finit commut ring","ordered_present_kp":[0,23],"keyphrases":["quadratic Gauss sum","finite commutative rings"],"prmu":["P","P"]} {"id":"1867","title":"Optimization-based design of fixed-order controllers for command following","abstract":"For discrete-time scalar systems, we propose an approach for designing feedback controllers of fixed order to minimize an upper bound on the peak magnitude of the tracking error to a given command input. The work makes use of linear programming to design over a class of closed-loop systems proposed for the rejection of non-zero initial conditions and bounded disturbances. We incorporate performance robustness in the form of a guaranteed upper bound on the peak magnitude of the tracking error under plant coprime factor uncertainty","tok_text":"optimization-bas design of fixed-ord control for command follow \n for discrete-tim scalar system , we propos an approach for design feedback control of fix order to minim an upper bound on the peak magnitud of the track error to a given command input . the work make use of linear program to design over a class of closed-loop system propos for the reject of non-zero initi condit and bound disturb . 
we incorpor perform robust in the form of a guarante upper bound on the peak magnitud of the track error under plant coprim factor uncertainti","ordered_present_kp":[0,27,49,70,132,214,274,315,413,445,518],"keyphrases":["optimization-based design","fixed-order controllers","command following","discrete-time scalar systems","feedback controllers","tracking error","linear programming","closed-loop systems","performance robustness","guaranteed upper bound","coprime factor uncertainty"],"prmu":["P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1822","title":"Single-phase half-bridge converter topology for power quality compensation","abstract":"A high power factor half-bridge rectifier with neutral point switch clamped scheme is proposed. Three power switches are employed in the proposed rectifier. Two PWM control schemes are used to draw a sinusoidal line current with low current distortion. The control signals of the power switches are derived from the DC link voltage balance compensator, line current controller and DC link voltage regulator. The hysteresis current control scheme is employed to track the line current command. The proposed control scheme and the circuit configuration can be applied to the active power filter to eliminate the harmonic currents and compensate the reactive power generated from the nonlinear load. Analytical and experimental results are included to illustrate the validity and effectiveness of the proposed control scheme","tok_text":"single-phas half-bridg convert topolog for power qualiti compens \n a high power factor half-bridg rectifi with neutral point switch clamp scheme is propos . three power switch are employ in the propos rectifi . two pwm control scheme are use to draw a sinusoid line current with low current distort . the control signal of the power switch are deriv from the dc link voltag balanc compens , line current control and dc link voltag regul . the hysteresi current control scheme is employ to track the line current command . the propos control scheme and the circuit configur can be appli to the activ power filter to elimin the harmon current and compens the reactiv power gener from the nonlinear load . analyt and experiment result are includ to illustr the valid and effect of the propos control scheme","ordered_present_kp":[111,215,43,252,283,359,391,416,443,556],"keyphrases":["power quality compensation","neutral point switch clamped scheme","PWM control schemes","sinusoidal line current","current distortion","DC link voltage balance compensator","line current controller","DC link voltage regulator","hysteresis current control scheme","circuit configuration","single-phase half-bridge rectifier topology","power switches control signals","line current command tracking","harmonic currents elimination"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","R","R","R"]} {"id":"1709","title":"Development of computer-mediated teaching resources for tourism distance education: the University of Otago model","abstract":"This article presents a qualitative account of the development of computer-mediated tourism distance learning resources. A distance learning model was developed at the Centre for Tourism, University of Otago (New Zealand) in 1998-1999. The article reviews the development of this Internet-based learning resource explaining the design and development of programme links (providing study information for students) and paper links (course material and learning features). 
The design of course material is reviewed with emphasis given to consistency of presentation between papers. The template for course material is described and illustrated and the article concludes with an overview of important design considerations","tok_text":"develop of computer-medi teach resourc for tourism distanc educ : the univers of otago model \n thi articl present a qualit account of the develop of computer-medi tourism distanc learn resourc . a distanc learn model wa develop at the centr for tourism , univers of otago ( new zealand ) in 1998 - 1999 . the articl review the develop of thi internet-bas learn resourc explain the design and develop of programm link ( provid studi inform for student ) and paper link ( cours materi and learn featur ) . the design of cours materi is review with emphasi given to consist of present between paper . the templat for cours materi is describ and illustr and the articl conclud with an overview of import design consider","ordered_present_kp":[149,70,342,457,403],"keyphrases":["University of Otago","computer-mediated tourism distance learning resources","Internet-based learning resource","programme links","paper links"],"prmu":["P","P","P","P","P"]} {"id":"1550","title":"On the convergence of the Bermudez-Moreno algorithm with constant parameters","abstract":"A. Bermudez and C. Moreno (1981) presented a duality numerical algorithm for solving variational inequalities of the second kind. The performance of this algorithm strongly depends on the choice of two constant parameters. Assuming a further hypothesis of the inf-sup type, we present here a convergence theorem that improves on the one presented by A. Bermudez and C. Moreno. We prove that the convergence is linear, and we give the expression of the asymptotic error constant and the explicit form of the optimal parameters, as a function of some constants related to the variational inequality. Finally, we present some numerical examples that confirm the theoretical results","tok_text":"on the converg of the bermudez-moreno algorithm with constant paramet \n a. bermudez and c. moreno ( 1981 ) present a dualiti numer algorithm for solv variat inequ of the second kind . the perform of thi algorithm strongli depend on the choic of two constant paramet . assum a further hypothesi of the inf-sup type , we present here a converg theorem that improv on the one present by a. bermudez and c. moreno . we prove that the converg is linear , and we give the express of the asymptot error constant and the explicit form of the optim paramet , as a function of some constant relat to the variat inequ . final , we present some numer exampl that confirm the theoret result","ordered_present_kp":[22,117,150,334,481,534,53],"keyphrases":["Bermudez-Moreno algorithm","constant parameters","duality numerical algorithm","variational inequalities","convergence theorem","asymptotic error constant","optimal parameters"],"prmu":["P","P","P","P","P","P","P"]} {"id":"1515","title":"p-Bezier curves, spirals, and sectrix curves","abstract":"We elucidate the connection between Bezier curves in polar coordinates, also called p-Bezier or focal Bezier curves, and certain families of spirals and sectrix curves. p-Bezier curves are the analogue in polar coordinates of nonparametric Bezier curves in Cartesian coordinates. Such curves form a subset of rational Bezier curves characterized by control points on radial directions regularly spaced with respect to the polar angle, and weights equal to the inverse of the polar radius. 
We show that this subset encompasses several classical sectrix curves, which solve geometrically the problem of dividing an angle into equal spans, and also spirals defining the trajectories of particles in central fields. First, we identify as p-Bezier curves a family of sinusoidal spirals that includes Tschirnhausen's cubic. Second, the trisectrix of Maclaurin and their generalizations, called arachnidas. Finally, a special class of epi spirals that encompasses the trisectrix of Delanges","tok_text":"p-bezier curv , spiral , and sectrix curv \n we elucid the connect between bezier curv in polar coordin , also call p-bezier or focal bezier curv , and certain famili of spiral and sectrix curv . p-bezier curv are the analogu in polar coordin of nonparametr bezier curv in cartesian coordin . such curv form a subset of ration bezier curv character by control point on radial direct regularli space with respect to the polar angl , and weight equal to the invers of the polar radiu . we show that thi subset encompass sever classic sectrix curv , which solv geometr the problem of divid an angl into equal span , and also spiral defin the trajectori of particl in central field . first , we identifi as p-bezier curv a famili of sinusoid spiral that includ tschirnhausen 's cubic . second , the trisectrix of maclaurin and their gener , call arachnida . final , a special class of epi spiral that encompass the trisectrix of delang","ordered_present_kp":[0,16,29,89,127,319,351,368,418,599,663,728,773,841,880,794],"keyphrases":["p-Bezier curves","spirals","sectrix curves","polar coordinates","focal Bezier curves","rational Bezier curves","control points","radial directions","polar angle","equal spans","central fields","sinusoidal spirals","cubic","trisectrix","arachnidas","epi spirals","geometry","angle division","particle trajectories"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","U","M","R"]} {"id":"1651","title":"H-matrix approximation for the operator exponential with applications","abstract":"We previously developed a data-sparse and accurate approximation to parabolic solution operators in the case of a rather general elliptic part given by a strongly P-positive operator . Also a class of matrices (H-matrices) has been analysed which are data-sparse and allow an approximate matrix arithmetic with almost linear complexity. In particular, the matrix-vector\/matrix-matrix product with such matrices as well as the computation of the inverse have linear-logarithmic cost. In this paper, we apply the H-matrix techniques to approximate the exponent of an elliptic operator. Starting with the Dunford-Cauchy representation for the operator exponent, we then discretise the integral by the exponentially convergent quadrature rule involving a short sum of resolvents. The latter are approximated by the H-matrices. Our algorithm inherits a two-level parallelism with respect to both the computation of resolvents and the treatment of different time values. In the case of smooth data (coefficients, boundaries), we prove the linear-logarithmic complexity of the method","tok_text":"h-matrix approxim for the oper exponenti with applic \n we previous develop a data-spars and accur approxim to parabol solut oper in the case of a rather gener ellipt part given by a strongli p-posit oper . also a class of matric ( h-matric ) ha been analys which are data-spars and allow an approxim matrix arithmet with almost linear complex . 
in particular , the matrix-vector \/ matrix-matrix product with such matric as well as the comput of the invers have linear-logarithm cost . in thi paper , we appli the h-matrix techniqu to approxim the expon of an ellipt oper . start with the dunford-cauchi represent for the oper expon , we then discretis the integr by the exponenti converg quadratur rule involv a short sum of resolv . the latter are approxim by the h-matric . our algorithm inherit a two-level parallel with respect to both the comput of resolv and the treatment of differ time valu . in the case of smooth data ( coeffici , boundari ) , we prove the linear-logarithm complex of the method","ordered_present_kp":[0,26,321,110,588,670,182],"keyphrases":["H-matrix approximation","operator exponential","parabolic solution operators","strongly P-positive operator","almost linear complexity","Dunford-Cauchy representation","exponentially convergent quadrature rule","data-sparse approximation"],"prmu":["P","P","P","P","P","P","P","R"]} {"id":"1614","title":"A transmission line fault-location system using the wavelet transform","abstract":"This paper describes the locating system of line-to-ground faults on a power transmission line by using a wavelet transform. The possibility of the location with the surge generated by a fault has been theoretically proposed. In order to make the method practicable, the authors realize very fast processors. They design the wavelet transform and location chips, and construct a very fast fault location system by processing the measured data in parallel. This system is realized by a computer with three FPGA processor boards on a PCI bus. The processors are controlled by UNIX and the system has a graphical user interface with an X window system","tok_text":"a transmiss line fault-loc system use the wavelet transform \n thi paper describ the locat system of line-to-ground fault on a power transmiss line by use a wavelet transform . the possibl of the locat with the surg gener by a fault ha been theoret propos . in order to make the method practic , the author realiz veri fast processor . they design the wavelet transform and locat chip , and construct a veri fast fault locat system by process the measur data in parallel . thi system is realiz by a comput with three fpga processor board on a pci bu . the processor are control by unix and the system ha a graphic user interfac with an x window system","ordered_present_kp":[42,100,516,542,580,605,635],"keyphrases":["wavelet transform","line-to-ground faults","FPGA processor boards","PCI bus","UNIX","graphic user interface","X window system","power transmission line fault-location system","computer simulation","fault surge generation"],"prmu":["P","P","P","P","P","P","P","R","M","R"]} {"id":"1898","title":"Design of a stroke dependent damper for the front axle suspension of a truck using multibody system dynamics and numerical optimization","abstract":"A stroke dependent damper is designed for the front axle suspension of a truck. The damper supplies extra damping for inward deflections rising above 4 cm. In this way the damper should reduce extreme suspension deflections without deteriorating the comfort of the truck. But the question is which stroke dependent damping curve yields the best compromise between suspension deflection working space and comfort. Therefore an optimization problem is defined to minimize the maximum inward suspension deflection subject to constraints on the chassis acceleration for three typical road undulations. 
The optimization problem is solved using sequential linear programming (SLP) and multibody dynamics simulation software. Several optimization runs have been carried out for a small two degree of freedom vehicle model and a large full-scale model of the truck semi-trailer combination. The results show that the stroke dependent damping can reduce large deflections at incidental road disturbances, but that the optimum stroke dependent damping curve is related to the acceleration bound. By means of vehicle model simulation and numerical optimization we have been able to quantify this trade-off between suspension deflection working space and truck comfort","tok_text":"design of a stroke depend damper for the front axl suspens of a truck use multibodi system dynam and numer optim \n a stroke depend damper is design for the front axl suspens of a truck . the damper suppli extra damp for inward deflect rise abov 4 cm . in thi way the damper should reduc extrem suspens deflect without deterior the comfort of the truck . but the question is which stroke depend damp curv yield the best compromis between suspens deflect work space and comfort . therefor an optim problem is defin to minim the maximum inward suspens deflect subject to constraint on the chassi acceler for three typic road undul . the optim problem is solv use sequenti linear program ( slp ) and multibodi dynam simul softwar . sever optim run have been carri out for a small two degre of freedom vehicl model and a larg full-scal model of the truck semi-trail combin . the result show that the stroke depend damp can reduc larg deflect at incident road disturb , but that the optimum stroke depend damp curv is relat to the acceler bound . by mean of vehicl model simul and numer optim we have been abl to quantifi thi trade-off between suspens deflect work space and truck comfort","ordered_present_kp":[12,41,74,101,220,287,586,617,660,821,844,26,940,1025,1052,1169],"keyphrases":["stroke dependent damper","damping","front axle suspension","multibody system dynamics","numerical optimization","inward deflections","extreme suspension deflections","chassis acceleration","road undulations","sequential linear programming","full-scale model","truck semi-trailer combination","incidental road disturbances","acceleration bound","vehicle model simulation","truck comfort"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1865","title":"Towards the globalisation of the IS\/IT function","abstract":"The IS\/IT function has recently emerged from the peripheral aspects of the finance department to the centre of critical organisational change. There is an increasing dependency on its activities as systems extend beyond supporting the internal efficiency of the organisation to augmenting global performance. The growth of wide and local networks has resulted in communication possibilities that were not possible a few years ago. E-commerce challenges the achievements of the IS\/IT function and is very prominent in the globalisation of modern organisations. The complexity and diversity of electronic exchange is also well documented (Hackney et al., 2000). This has a number of impacts on the development and implementation of IS\/IT solutions for organisations involved in international trade. It is a conjecture that the IS\/IT function is critically important for the alignment of the business to meet the demands of global competition, through building internal marketing strategies and creating knowledge based communities. 
There is clear evidence that IS\/IT can lead to improved business performance and potentially for sustained competitive advantage. This is obviously true through the advent of new and emerging technologies such as the Internet","tok_text":"toward the globalis of the is \/ it function \n the is \/ it function ha recent emerg from the peripher aspect of the financ depart to the centr of critic organis chang . there is an increas depend on it activ as system extend beyond support the intern effici of the organis to augment global perform . the growth of wide and local network ha result in commun possibl that were not possibl a few year ago . e-commerc challeng the achiev of the is \/ it function and is veri promin in the globalis of modern organis . the complex and divers of electron exchang is also well document ( hackney et al . , 2000 ) . thi ha a number of impact on the develop and implement of is \/ it solut for organis involv in intern trade . it is a conjectur that the is \/ it function is critic import for the align of the busi to meet the demand of global competit , through build intern market strategi and creat knowledg base commun . there is clear evid that is \/ it can lead to improv busi perform and potenti for sustain competit advantag . thi is obvious true through the advent of new and emerg technolog such as the internet","ordered_present_kp":[11,27,404,539,701,857,890,1100],"keyphrases":["globalisation","IS\/IT function","e-commerce","electronic exchange","international trade","internal marketing strategies","knowledge based communities","Internet","local area networks","wide area networks"],"prmu":["P","P","P","P","P","P","P","P","M","M"]} {"id":"1820","title":"Fast, accurate and stable simulation of power electronic systems using virtual resistors and capacitors","abstract":"Simulation of power electronic circuits remains a problem due to the high level of stiffness brought about by the modelling of switches as biresistors i.e. very low turn-on resistance and very high turn-off resistance. The merits and drawbacks of two modelling methods that address this problem are discussed. A modelling solution for ensuring numerically stable, accurate and fast simulation of power electronic systems is proposed. The solution enables easy connectivity between power electronic elements in the simulation model. It involves the modelling of virtual capacitance at switching nodes to soften voltage discontinuity due to the switch current suddenly going to zero. Undesirable ringing effects that may arise due to the interaction between the virtual capacitance and circuit inductance are eliminated by modelling virtual damping resistors in parallel to inductors that are adjacent to switching elements. A midpoint configuration method is also introduced for modelling shunt capacitors. A DC traction system is simulated using this modelling strategy and the results are included. Simulation results obtained using this modelling strategy are validated by comparison with the established mesh analysis technique of modelling. The simulation performance is also compared with the Power System Blockset commercial software","tok_text":"fast , accur and stabl simul of power electron system use virtual resistor and capacitor \n simul of power electron circuit remain a problem due to the high level of stiff brought about by the model of switch as biresistor i.e. veri low turn-on resist and veri high turn-off resist . the merit and drawback of two model method that address thi problem are discuss . 
a model solut for ensur numer stabl , accur and fast simul of power electron system is propos . the solut enabl easi connect between power electron element in the simul model . it involv the model of virtual capacit at switch node to soften voltag discontinu due to the switch current suddenli go to zero . undesir ring effect that may aris due to the interact between the virtual capacit and circuit induct are elimin by model virtual damp resistor in parallel to inductor that are adjac to switch element . a midpoint configur method is also introduc for model shunt capacitor . a dc traction system is simul use thi model strategi and the result are includ . simul result obtain use thi model strategi are valid by comparison with the establish mesh analysi techniqu of model . the simul perform is also compar with the power system blockset commerci softwar","ordered_present_kp":[58,236,260,584,680,948,1113],"keyphrases":["virtual resistors","turn-on resistance","high turn-off resistance","switching nodes","ringing effects","DC traction system","mesh analysis technique","power electronic systems simulation","virtual capacitors","switch modelling","voltage discontinuity softening","Power System Blockset software","computer simulation"],"prmu":["P","P","P","P","P","P","P","R","R","R","R","R","M"]} {"id":"1653","title":"The best circulant preconditioners for Hermitian Toeplitz systems.II. The multiple-zero case","abstract":"For pt.I. see SIAM J. Numer. Anal., vol. 38, p. 876-896. Circulant-type preconditioners have been proposed previously for ill-conditioned Hermitian Toeplitz systems that are generated by nonnegative continuous functions with a zero of even order. The proposed circulant preconditioners can be constructed without requiring explicit knowledge of the generating functions. It was shown that the spectra of the preconditioned matrices are uniformly bounded except for a fixed number of outliers and that all eigenvalues are uniformly bounded away from zero. Therefore the conjugate gradient method converges linearly when applied to solving the circulant preconditioned systems. Previously it was claimed that this result can be extended to the case where the generating functions have multiple zeros. The main aim of this paper is to give a complete convergence proof of the method for this class of generating functions","tok_text":"the best circul precondition for hermitian toeplitz system . ii . the multiple-zero case \n for pt . i. see siam j. numer . anal . , vol . 38 , p. 876 - 896 . circulant-typ precondition have been propos previous for ill-condit hermitian toeplitz system that are gener by nonneg continu function with a zero of even order . the propos circul precondition can be construct without requir explicit knowledg of the gener function . it wa shown that the spectra of the precondit matric are uniformli bound except for a fix number of outlier and that all eigenvalu are uniformli bound away from zero . therefor the conjug gradient method converg linearli when appli to solv the circul precondit system . previous it wa claim that thi result can be extend to the case where the gener function have multipl zero . 
the main aim of thi paper is to give a complet converg proof of the method for thi class of gener function","ordered_present_kp":[9,33,70,270,410,463,548,608],"keyphrases":["circulant preconditioners","Hermitian Toeplitz systems","multiple-zero case","nonnegative continuous functions","generating functions","preconditioned matrices","eigenvalues","conjugate gradient method"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"1616","title":"Pitch post-processing technique based on robust statistics","abstract":"A novel pitch post-processing technique based on robust statistics is proposed. Performances in terms of pitch error rates and pitch contours show the superiority of the proposed method compared with the median filtering technique. Further improvement is achieved through incorporating an uncertainty term in the robust statistics model","tok_text":"pitch post-process techniqu base on robust statist \n a novel pitch post-process techniqu base on robust statist is propos . perform in term of pitch error rate and pitch contour show the superior of the propos method compar with the median filter techniqu . further improv is achiev through incorpor an uncertainti term in the robust statist model","ordered_present_kp":[0,36,143,164,233,303],"keyphrases":["pitch post-processing technique","robust statistics","pitch error rates","pitch contours","median filtering","uncertainty term","speech quality","speech communications"],"prmu":["P","P","P","P","P","P","U","U"]} {"id":"1552","title":"Stability of Runge-Kutta methods for delay integro-differential equations","abstract":"We study stability of Runge-Kutta (RK) methods for delay integro-differential equations with a constant delay on the basis of the linear equation du\/dt = Lu(t) + Mu(t- tau ) + K integral \/sub t- tau \/\/sup t\/ u( theta )d theta , where L, M, K are constant complex matrices. In particular, we show that the same result as in the case K = 0 (Koto, 1994) holds for this test equation, i.e., every A-stable RK method preserves the delay-independent stability of the exact solution whenever a step-size of the form h = tau \/m is used, where m is a positive integer","tok_text":"stabil of runge-kutta method for delay integro-differenti equat \n we studi stabil of runge-kutta ( rk ) method for delay integro-differenti equat with a constant delay on the basi of the linear equat du \/ dt = lu(t ) + mu(t- tau ) + k integr \/sub t- tau \/\/sup t\/ u ( theta ) d theta , where l , m , k are constant complex matric . in particular , we show that the same result as in the case k = 0 ( koto , 1994 ) hold for thi test equat , i.e. , everi a-stabl rk method preserv the delay-independ stabil of the exact solut whenev a step-siz of the form h = tau \/m is use , where m is a posit integ","ordered_present_kp":[10,33,153,0],"keyphrases":["stability","Runge-Kutta methods","delay integro-differential equations","constant delay"],"prmu":["P","P","P","P"]} {"id":"1517","title":"Minimizing blossoms under symmetric linear constraints","abstract":"In this paper, we show that there exists a close dependence between the control polygon of a polynomial and the minimum of its blossom under symmetric linear constraints. We consider a given minimization problem P, for which a unique solution will be a point delta on the Bezier curve. 
For the minimization function f, two sufficient conditions exist that ensure the uniqueness of the solution, namely, the concavity of the control polygon of the polynomial and the characteristics of the Polya frequency-control polygon where the minimum coincides with a critical point of the polynomial. The use of the blossoming theory provides us with a useful geometrical interpretation of the minimization problem. In addition, this minimization approach leads us to a new method of discovering inequalities about the elementary symmetric polynomials","tok_text":"minim blossom under symmetr linear constraint \n in thi paper , we show that there exist a close depend between the control polygon of a polynomi and the minimum of it blossom under symmetr linear constraint . we consid a given minim problem p , for which a uniqu solut will be a point delta on the bezier curv . for the minim function f , two suffici condit exist that ensur the uniqu of the solut , name , the concav of the control polygon of the polynomi and the characterist of the polya frequency-control polygon where the minimum coincid with a critic point of the polynomi . the use of the blossom theori provid us with a use geometr interpret of the minim problem . in addit , thi minim approach lead us to a new method of discov inequ about the elementari symmetr polynomi","ordered_present_kp":[115,136,20,298,411,485,550,632,737,753],"keyphrases":["symmetric linear constraints","control polygon","polynomial","Bezier curve","concavity","Polya frequency-control polygon","critical point","geometrical interpretation","inequalities","elementary symmetric polynomials","blossom minimization"],"prmu":["P","P","P","P","P","P","P","P","P","P","R"]} {"id":"1693","title":"Healthy, wealthy and wise? [health sector document management]","abstract":"NHS spending will rise from Pounds 65.4bn in 2002 to Pounds 87.2bn in 2006, and by 2008, spending will total Pounds 105.6bn. David Tyler looks at how the health sector is already beginning to exploit IT, and particularly document management, to improve service and cut costs","tok_text":"healthi , wealthi and wise ? [ health sector document manag ] \n nh spend will rise from pound 65.4bn in 2002 to pound 87.2bn in 2006 , and by 2008 , spend will total pound 105.6bn . david tyler look at how the health sector is alreadi begin to exploit it , and particularli document manag , to improv servic and cut cost","ordered_present_kp":[64,45,249],"keyphrases":["document management","NHS spending","IT","ScanSoft PaperPort"],"prmu":["P","P","P","U"]} {"id":"182","title":"Phase transition for parking blocks, Brownian excursion and coalescence","abstract":"In this paper, we consider hashing with linear probing for a hashing table with m places, n items (nor=2, and the function max({x\/sub 1\/,...,x\/sub n\/} intersection A) is partial recursive, it is easily seen that A is recursive. In this paper, we weaken this hypothesis in various ways (and similarly for \"min\" in place of \"max\") and investigate what effect this has on the complexity of A. We discover a sharp contrast between retraceable and co-retraceable sets, and we characterize sets which are the union of a recursive set and a co-r.e., retraceable set. Most of our proofs are noneffective. Several open questions are raised","tok_text":"max and min limit \n if a contain in omega , n > or=2 , and the function max({x \/ sub 1\/, ... ,x \/ sub n\/ } intersect a ) is partial recurs , it is easili seen that a is recurs . 
in thi paper , we weaken thi hypothesi in variou way ( and similarli for \" min \" in place of \" max \" ) and investig what effect thi ha on the complex of a. we discov a sharp contrast between retrac and co-retrac set , and we character set which are the union of a recurs set and a co-r. . , retrac set . most of our proof are noneffect . sever open question are rais","ordered_present_kp":[8,320,383,442],"keyphrases":["min limiters","complexity","retraceable sets","recursive set","max limiters"],"prmu":["P","P","P","P","R"]} {"id":"1716","title":"The vibration reliability of poppet and contoured actuator valves","abstract":"The problem of selecting the shape of the actuator valve (the final control valve) itself is discussed; the solution to this problem will permit appreciable dynamic loads to be eliminated from the moving elements of the steam distribution system of steam turbines under all operating conditions","tok_text":"the vibrat reliabl of poppet and contour actuat valv \n the problem of select the shape of the actuat valv ( the final control valv ) itself is discuss ; the solut to thi problem will permit appreci dynam load to be elimin from the move element of the steam distribut system of steam turbin under all oper condit","ordered_present_kp":[33,231,251,277,4],"keyphrases":["vibration reliability","contoured actuator valves","moving elements","steam distribution system","steam turbines","actuator valve shape selection","poppet actuator valves","dynamic loads elimination"],"prmu":["P","P","P","P","P","R","R","R"]} {"id":"1753","title":"Risk theory with a nonlinear dividend barrier","abstract":"In the framework of classical risk theory we investigate a surplus process in the presence of a nonlinear dividend barrier and derive equations for two characteristics of such a process, the probability of survival and the expected sum of discounted dividend payments. Number-theoretic solution techniques are developed for approximating these quantities and numerical illustrations are given for exponential claim sizes and a parabolic dividend barrier","tok_text":"risk theori with a nonlinear dividend barrier \n in the framework of classic risk theori we investig a surplu process in the presenc of a nonlinear dividend barrier and deriv equat for two characterist of such a process , the probabl of surviv and the expect sum of discount dividend payment . number-theoret solut techniqu are develop for approxim these quantiti and numer illustr are given for exponenti claim size and a parabol dividend barrier","ordered_present_kp":[0,19,102,225,265,293,367,395,422],"keyphrases":["risk theory","nonlinear dividend barrier","surplus process","probability of survival","discounted dividend payments","number-theoretic solution","numerical illustrations","exponential claim sizes","parabolic dividend barrier"],"prmu":["P","P","P","P","P","P","P","P","P"]} {"id":"1885","title":"Analysis of nonlinear time-delay systems using modules over non-commutative rings","abstract":"The theory of non-commutative rings is introduced to provide a basis for the study of nonlinear control systems with time delays. The left Ore ring of non-commutative polynomials defined over the field of a meromorphic function is suggested as the framework for such a study. This approach is then generalized to a broader class of nonlinear systems with delays that are called generalized Roesser systems. Finally, the theory is applied to analyze nonlinear time-delay systems. 
A weak observability is defined and characterized, generalizing the well-known linear result. Properties of closed submodules are then developed to obtain a result on the accessibility of such systems","tok_text":"analysi of nonlinear time-delay system use modul over non-commut ring \n the theori of non-commut ring is introduc to provid a basi for the studi of nonlinear control system with time delay . the left ore ring of non-commut polynomi defin over the field of a meromorph function is suggest as the framework for such a studi . thi approach is then gener to a broader class of nonlinear system with delay that are call gener roesser system . final , the theori is appli to analyz nonlinear time-delay system . a weak observ is defin and character , gener the well-known linear result . properti of close submodul are then develop to obtain a result on the access of such system","ordered_present_kp":[11,43,148,195,258,415,508],"keyphrases":["nonlinear time-delay systems","modules","nonlinear control systems","left Ore ring","meromorphic function","generalized Roesser systems","weak observability","noncommutative rings","noncommutative polynomials"],"prmu":["P","P","P","P","P","P","P","M","M"]} {"id":"1920","title":"To commit or not to commit: modeling agent conversations for action","abstract":"Conversations are sequences of messages exchanged among interacting agents. For conversations to be meaningful, agents ought to follow commonly known specifications limiting the types of messages that can be exchanged at any point in the conversation. These specifications are usually implemented using conversation policies (which are rules of inference) or conversation protocols (which are predefined conversation templates). In this article we present a semantic model for specifying conversations using conversation policies. This model is based on the principles that the negotiation and uptake of shared social commitments entail the adoption of obligations to action, which indicate the actions that agents have agreed to perform. In the same way, obligations are retracted based on the negotiation to discharge their corresponding shared social commitments. Based on these principles, conversations are specified as interaction specifications that model the ideal sequencing of agent participations negotiating the execution of actions in a joint activity. These specifications not only specify the adoption and discharge of shared commitments and obligations during an activity, but also indicate the commitments and obligations that are required (as preconditions) or that outlive a joint activity (as postconditions). We model the Contract Net Protocol as an example of the specification of conversations in a joint activity","tok_text":"to commit or not to commit : model agent convers for action \n convers are sequenc of messag exchang among interact agent . for convers to be meaning , agent ought to follow commonli known specif limit the type of messag that can be exchang at ani point in the convers . these specif are usual implement use convers polici ( which are rule of infer ) or convers protocol ( which are predefin convers templat ) . in thi articl we present a semant model for specifi convers use convers polici . thi model is base on the principl that the negoti and uptak of share social commit entail the adopt of oblig to action , which indic the action that agent have agre to perform . in the same way , oblig are retract base on the negoti to discharg their correspond share social commit . 
base on these principl , convers are specifi as interact specif that model the ideal sequenc of agent particip negoti the execut of action in a joint activ . these specif not onli specifi the adopt and discharg of share commit and oblig dure an activ , but also indic the commit and oblig that are requir ( as precondit ) or that outliv a joint activ ( as postcondit ) . we model the contract net protocol as an exampl of the specif of convers in a joint activ","ordered_present_kp":[106,188,334,353,561,391],"keyphrases":["interacting agents","specifications","rules of inference","conversation protocols","conversation templates","social commitments","autonomous agents","speech acts","software agents"],"prmu":["P","P","P","P","P","P","M","U","M"]} {"id":"1673","title":"Mission planning for regional surveillance","abstract":"The regional surveillance problem discussed involves formulating a flight route for an aircraft to scan a given geographical region. Aerial surveillance is conducted using a synthetic aperture radar device mounted on the aircraft to compose a complete, high-resolution image of the region. Two models for determining an optimised flight route are described, the first employing integer programming and the second, genetic algorithms. A comparison of the solution optimality in terms of the total distance travelled, and model efficiency of the two techniques in terms of their required CPU times, is made in order to identify the conditions under which it is appropriate to apply each model","tok_text":"mission plan for region surveil \n the region surveil problem discuss involv formul a flight rout for an aircraft to scan a given geograph region . aerial surveil is conduct use a synthet apertur radar devic mount on the aircraft to compos a complet , high-resolut imag of the region . two model for determin an optimis flight rout are describ , the first employ integ program and the second , genet algorithm . a comparison of the solut optim in term of the total distanc travel , and model effici of the two techniqu in term of their requir cpu time , is made in order to identifi the condit under which it is appropri to appli each model","ordered_present_kp":[0,17,85,147,179,251,311,362,393,431,458],"keyphrases":["mission planning","regional surveillance","flight route","aerial surveillance","synthetic aperture radar device","high-resolution image","optimised flight route","integer programming","genetic algorithms","solution optimality","total distance travelled","geographical region scanning"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","R"]} {"id":"1636","title":"SPARC ignites scholarly publishing","abstract":"During the past several years, initiatives which bring together librarians, researchers, university administrators and independent publishers have re-invigorated the scholarly publishing marketplace. These initiatives take advantage of electronic technology and show great potential for restoring science to scientists. The author outlines SPARC (the Scholarly Publishing and Academic Resources Coalition), an initiative to make scientific journals more accessible","tok_text":"sparc ignit scholarli publish \n dure the past sever year , initi which bring togeth librarian , research , univers administr and independ publish have re-invigor the scholarli publish marketplac . these initi take advantag of electron technolog and show great potenti for restor scienc to scientist . 
the author outlin sparc ( the scholarli publish and academ resourc coalit ) , an initi to make scientif journal more access","ordered_present_kp":[59,0,331],"keyphrases":["SPARC","initiative","Scholarly Publishing and Academic Resources Coalition","electronic publishing","scientific journal access"],"prmu":["P","P","P","R","R"]} {"id":"1572","title":"Ant colony optimization and stochastic gradient descent","abstract":"We study the relationship between the two techniques known as ant colony optimization (ACO) and stochastic gradient descent. More precisely, we show that some empirical ACO algorithms approximate stochastic gradient descent in the space of pheromones, and we propose an implementation of stochastic gradient descent that belongs to the family of ACO algorithms. We then use this insight to explore the mutual contributions of the two techniques","tok_text":"ant coloni optim and stochast gradient descent \n we studi the relationship between the two techniqu known as ant coloni optim ( aco ) and stochast gradient descent . more precis , we show that some empir aco algorithm approxim stochast gradient descent in the space of pheromon , and we propos an implement of stochast gradient descent that belong to the famili of aco algorithm . we then use thi insight to explor the mutual contribut of the two techniqu","ordered_present_kp":[0,21,198,269],"keyphrases":["ant colony optimization","stochastic gradient descent","empirical ACO algorithms","pheromones","combinatorial optimization","heuristic","reinforcement learning","social insects","swarm intelligence","artificial life","local search algorithms"],"prmu":["P","P","P","P","M","U","U","U","U","U","M"]} {"id":"1537","title":"Technology on social issues of videoconferencing on the Internet: a survey","abstract":"Constant advances in audio\/video compression, the development of the multicast protocol as well as fast improvement in computing devices (e.g. higher speed, larger memory) have set forth the opportunity to have resource demanding videoconferencing (VC) sessions on the Internet. Multicast is supported by the multicast backbone (Mbone), which is a special portion of the Internet where this protocol is being deployed. Mbone VC tools are steadily emerging and the user population is growing fast. VC is a fascinating application that has the potential to greatly impact the way we remotely communicate and work. Yet, the adoption of VC is not as fast as one could have predicted. Hence, it is important to examine the factors that affect a widespread adoption of VC. This paper examines the enabling technology and the social issues. It discusses the achievements and identifies the future challenges. It suggests an integration of many emerging multimedia tools into VC in order to enhance its versatility for more effectiveness","tok_text":"technolog on social issu of videoconferenc on the internet : a survey \n constant advanc in audio \/ video compress , the develop of the multicast protocol as well as fast improv in comput devic ( e.g. higher speed , larger memori ) have set forth the opportun to have resourc demand videoconferenc ( vc ) session on the internet . multicast is support by the multicast backbon ( mbone ) , which is a special portion of the internet where thi protocol is be deploy . mbone vc tool are steadili emerg and the user popul is grow fast . vc is a fascin applic that ha the potenti to greatli impact the way we remot commun and work . yet , the adopt of vc is not as fast as one could have predict . 
henc , it is import to examin the factor that affect a widespread adopt of vc . thi paper examin the enabl technolog and the social issu . it discuss the achiev and identifi the futur challeng . it suggest an integr of mani emerg multimedia tool into vc in order to enhanc it versatil for more effect","ordered_present_kp":[28,50,135,358,378,922,13],"keyphrases":["social issues","videoconferencing","Internet","multicast protocol","multicast backbone","Mbone","multimedia","data compression"],"prmu":["P","P","P","P","P","P","P","M"]} {"id":"162","title":"International news sites in English","abstract":"Web access to news sites all over the world allows us the opportunity to have an electronic news stand readily available and stocked with a variety of foreign (to us) news sites. A large number of currently available foreign sites are English-language publications or English language versions of non-North American sites. These sites are quite varied in terms of quality, coverage, and style. Finding them can present a challenge. Using them effectively requires critical-thinking skills that are a part of media awareness or digital literacy","tok_text":"intern news site in english \n web access to news site all over the world allow us the opportun to have an electron news stand readili avail and stock with a varieti of foreign ( to us ) news site . a larg number of current avail foreign site are english-languag public or english languag version of non-north american site . these site are quit vari in term of qualiti , coverag , and style . find them can present a challeng . use them effect requir critical-think skill that are a part of media awar or digit literaci","ordered_present_kp":[30,0,246,451,491,505],"keyphrases":["international news sites","Web access","English-language publications","critical-thinking skills","media awareness","digital literacy","non North American sites"],"prmu":["P","P","P","P","P","P","M"]} {"id":"1793","title":"The paradigm of viral communication","abstract":"The IIW Institute of Information Management (www.IIW.de) is dealing with commercial applications of digital technologies, such as the Internet, digital printing, and many more. A study which has been carried out by the institute, identifies viral messages as a new paradigm of communication, mostly found in the area of Direct Marketing, and - who wonders - mainly within the USA. Viral messages underlie certain principles: (1) prospects and customers of the idea are offered a technology platform providing a possibility to send a message to a majority of persons; (2) there is an emotional or pecuniary incentive to participate. Ideally, niches of needs and market vacua are filled with funny ideas; (3) also, the recipients are facing emotional or pecuniary incentives to contact a majority of further recipients - this induces a snowball effect and the message is spread virally; and (4) the customer is activated as an \"ambassador\" of the piece of information, for instance promoting a product or a company. It is evident that there has been a long lasting history of what we call \"word-of-mouth\" ever since, however bundles of digital technologies empower the viral communication paradigm","tok_text":"the paradigm of viral commun \n the iiw institut of inform manag ( www.iiw.d ) is deal with commerci applic of digit technolog , such as the internet , digit print , and mani more . 
a studi which ha been carri out by the institut , identifi viral messag as a new paradigm of commun , mostli found in the area of direct market , and - who wonder - mainli within the usa . viral messag underli certain principl : ( 1 ) prospect and custom of the idea are offer a technolog platform provid a possibl to send a messag to a major of person ; ( 2 ) there is an emot or pecuniari incent to particip . ideal , nich of need and market vacua are fill with funni idea ; ( 3 ) also , the recipi are face emot or pecuniari incent to contact a major of further recipi - thi induc a snowbal effect and the messag is spread viral ; and ( 4 ) the custom is activ as an \" ambassador \" of the piec of inform , for instanc promot a product or a compani . it is evid that there ha been a long last histori of what we call \" word-of-mouth \" ever sinc , howev bundl of digit technolog empow the viral commun paradigm","ordered_present_kp":[1071,91,240,140,311],"keyphrases":["commercial applications","Internet","viral messages","direct marketing","viral communication paradigm","e-mails","business","computer virus"],"prmu":["P","P","P","P","P","U","U","U"]} {"id":"1845","title":"Business data management for business-to-business electronic commerce","abstract":"Business-to-business electronic commerce (B2B EC) opens up new possibilities for trade. For example, new business partners from around the globe can be found, their offers can be compared, even complex negotiations can be conducted electronically, and a contract can be drawn up and fulfilled via an electronic marketplace. However, sophisticated data management is required to provide such facilities. In this paper, the results of a multi-national project on creating a business-to-business electronic marketplace for small and medium-sized enterprises are presented. Tools for information discovery, protocol-based negotiations, and monitored contract enactment are provided and based on a business data repository. The repository integrates heterogeneous business data with business communication. Specific problems such as multilingual nature, data ownership, and traceability of contracts and related negotiations are addressed and it is shown that the present approach provides efficient business data management for B2B EC","tok_text":"busi data manag for business-to-busi electron commerc \n business-to-busi electron commerc ( b2b ec ) open up new possibl for trade . for exampl , new busi partner from around the globe can be found , their offer can be compar , even complex negoti can be conduct electron , and a contract can be drawn up and fulfil via an electron marketplac . howev , sophist data manag is requir to provid such facil . in thi paper , the result of a multi-n project on creat a business-to-busi electron marketplac for small and medium-s enterpris are present . tool for inform discoveri , protocol-bas negoti , and monitor contract enact are provid and base on a busi data repositori . the repositori integr heterogen busi data with busi commun . 
specif problem such as multilingu natur , data ownership , and traceabl of contract and relat negoti are address and it is shown that the present approach provid effici busi data manag for b2b ec","ordered_present_kp":[20,0,323,504,436,556,575,601,649,694,719,775,796],"keyphrases":["business data management","business-to-business electronic commerce","electronic marketplace","multi-national project","small and medium-sized enterprises","information discovery","protocol-based negotiations","monitored contract enactment","business data repository","heterogeneous business data","business communication","data ownership","traceability","multilingual system"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","M"]} {"id":"1800","title":"Multi-output regression using a locally regularised orthogonal least-squares algorithm","abstract":"The paper considers data modelling using multi-output regression models. A locally regularised orthogonal least-squares (LROLS) algorithm is proposed for constructing sparse multi-output regression models that generalise well. By associating each regressor in the regression model with an individual regularisation parameter, the ability of the multi-output orthogonal least-squares (OLS) model selection to produce a parsimonious model with a good generalisation performance is greatly enhanced","tok_text":"multi-output regress use a local regularis orthogon least-squar algorithm \n the paper consid data model use multi-output regress model . a local regularis orthogon least-squar ( lrol ) algorithm is propos for construct spars multi-output regress model that generalis well . by associ each regressor in the regress model with an individu regularis paramet , the abil of the multi-output orthogon least-squar ( ol ) model select to produc a parsimoni model with a good generalis perform is greatli enhanc","ordered_present_kp":[108,27,93,219,439],"keyphrases":["locally regularised orthogonal least-squares algorithm","data modelling","multi-output regression models","sparse multi-output regression models","parsimonious model","nonlinear system modelling","LROLS algorithm"],"prmu":["P","P","P","P","P","M","R"]} {"id":"1492","title":"A systematic review of the efficacy of telemedicine for making diagnostic and management decisions","abstract":"We conducted a systematic review of the literature to evaluate the efficacy of telemedicine for making diagnostic and management decisions in three classes of application: office\/hospital-based, store-and-forward, and home-based telemedicine. We searched the MEDLINE, EMBASE, CINAHL and HealthSTAR databases and printed resources, and interviewed investigators in the field. We excluded studies where the service did not historically require face-to-face encounters (e.g. radiology or pathology diagnosis). A total of 58 articles met the inclusion criteria. The articles were summarized and graded for the quality and direction of the evidence. There were very few high-quality studies. The strongest evidence for the efficacy of telemedicine for diagnostic and management decisions came from the specialties of psychiatry and dermatology. There was also reasonable evidence that general medical history and physical examinations performed via telemedicine had relatively good sensitivity and specificity. Other specialties in which some evidence for efficacy existed were cardiology and certain areas of ophthalmology. 
Despite the widespread use of telemedicine in most major medical specialties, there is strong evidence in only a few of them that the diagnostic and management decisions provided by telemedicine are comparable to face-to-face care","tok_text":"a systemat review of the efficaci of telemedicin for make diagnost and manag decis \n we conduct a systemat review of the literatur to evalu the efficaci of telemedicin for make diagnost and manag decis in three class of applic : offic \/ hospital-bas , store-and-forward , and home-bas telemedicin . we search the medlin , embas , cinahl and healthstar databas and print resourc , and interview investig in the field . we exclud studi where the servic did not histor requir face-to-fac encount ( e.g. radiolog or patholog diagnosi ) . a total of 58 articl met the inclus criteria . the articl were summar and grade for the qualiti and direct of the evid . there were veri few high-qual studi . the strongest evid for the efficaci of telemedicin for diagnost and manag decis came from the specialti of psychiatri and dermatolog . there wa also reason evid that gener medic histori and physic examin perform via telemedicin had rel good sensit and specif . other specialti in which some evid for efficaci exist were cardiolog and certain area of ophthalmolog . despit the widespread use of telemedicin in most major medic specialti , there is strong evid in onli a few of them that the diagnost and manag decis provid by telemedicin are compar to face-to-fac care","ordered_present_kp":[37,313,322,330,341,800,815,1013,1043],"keyphrases":["telemedicine","MEDLINE","EMBASE","CINAHL","HealthSTAR","psychiatry","dermatology","cardiology","ophthalmology","medical diagnosis","management decision making","literature review"],"prmu":["P","P","P","P","P","P","P","P","P","R","R","R"]} {"id":"1556","title":"Regularity of some 'incomplete' Pal-type interpolation problems","abstract":"In this paper the regularity of nine Pal-type interpolation problems is proved. In the literature interpolation on the zeros of the pair W\/sub n\/\/sup ( alpha )\/(z) = (z + alpha )\/sup n\/ + (1 + alpha z)\/sup n\/, v\/sub n\/\/sup ( alpha )\/(z) = (z + alpha )\/sup n\/ - (1 + alpha z)\/sup n\/ with 0 < alpha < 1 has been studied. Here the nodes form a subset of these sets of zeros","tok_text":"regular of some ' incomplet ' pal-typ interpol problem \n in thi paper the regular of nine pal-typ interpol problem is prove . in the literatur interpol on the zero of the pair w \/ sub n\/\/sup ( alpha ) \/(z ) = ( z + alpha ) \/sup n\/ + ( 1 + alpha z)\/sup n\/ , v \/ sub n\/\/sup ( alpha ) \/(z ) = ( z + alpha ) \/sup n\/ - ( 1 + alpha z)\/sup n\/ with 0 < alpha < 1 ha been studi . here the node form a subset of these set of zero","ordered_present_kp":[30,159],"keyphrases":["Pal-type interpolation problems","zeros"],"prmu":["P","P"]} {"id":"1513","title":"Solution of the reconstruction problem of a source function in the coagulation-fragmentation equation","abstract":"We study the problem of reconstructing a source function in the kinetic coagulation-fragmentation equation. The study is based on optimal control methods, the solvability theory of operator equations, and the use of iteration algorithms","tok_text":"solut of the reconstruct problem of a sourc function in the coagulation-fragment equat \n we studi the problem of reconstruct a sourc function in the kinet coagulation-fragment equat . 
the studi is base on optim control method , the solvabl theori of oper equat , and the use of iter algorithm","ordered_present_kp":[149,205,232,250,278],"keyphrases":["kinetic coagulation-fragmentation equation","optimal control methods","solvability","operator equations","iteration algorithms","source function reconstruction"],"prmu":["P","P","P","P","P","R"]} {"id":"1657","title":"Breaking the myths of rewards: an exploratory study of attitudes about knowledge sharing","abstract":"Many CEO and managers understand the importance of knowledge sharing among their employees and are eager to introduce the knowledge management paradigm in their organizations. However little is known about the determinants of the individual's knowledge sharing behavior. The purpose of this study is to develop an understanding of the factors affecting the individual's knowledge sharing behavior in the organizational context. The research model includes various constructs based on social exchange theory, self-efficacy, and theory of reasoned action. Research results from the field survey of 467 employees of four large, public organizations show that expected associations and contribution are the major determinants of the individual's attitude toward knowledge sharing. Expected rewards, believed by many to be the most important motivating factor for knowledge sharing, are not significantly related to the attitude toward knowledge sharing. As expected, positive attitude toward knowledge sharing is found to lead to positive intention to share knowledge and, finally, to actual knowledge sharing behaviors","tok_text":"break the myth of reward : an exploratori studi of attitud about knowledg share \n mani ceo and manag understand the import of knowledg share among their employe and are eager to introduc the knowledg manag paradigm in their organ . howev littl is known about the determin of the individu 's knowledg share behavior . the purpos of thi studi is to develop an understand of the factor affect the individu 's knowledg share behavior in the organiz context . the research model includ variou construct base on social exchang theori , self-efficaci , and theori of reason action . research result from the field survey of 467 employe of four larg , public organ show that expect associ and contribut are the major determin of the individu 's attitud toward knowledg share . expect reward , believ by mani to be the most import motiv factor for knowledg share , are not significantli relat to the attitud toward knowledg share . as expect , posit attitud toward knowledg share is found to lead to posit intent to share knowledg and , final , to actual knowledg share behavior","ordered_present_kp":[65,191,506,530,550,644,18],"keyphrases":["rewards","knowledge sharing","knowledge management","social exchange theory","self-efficacy","theory of reasoned action","public organizations","strategic management"],"prmu":["P","P","P","P","P","P","P","M"]} {"id":"1861","title":"Technology in distance education: a global perspective to alternative delivery mechanisms","abstract":"Technology is providing a positive impact on delivery mechanisms employed in distance education at the university level. Some institutions are incorporating distance education as a way to extend the classroom. Other institutions are investigating new delivery mechanisms, which support a revised perspective on education. 
These latter institutions are revising their processes for interacting with students, and taking a more \"learner centered\" approach to the delivery of education. This article discusses the impact of technology on the delivery mechanisms employed in distance education. A framework is proposed here, which presents a description of alternative modes of generic delivery mechanisms. It is suggested that those institutions, which adopt a delivery mechanism employing an asynchronous mode, can gain the most benefit from technology. This approach seems to represent the only truly innovative use of technology in distance education. The approach creates a student-oriented environment while maintaining high levels of interaction, both of which are factors that contribute to student satisfaction with their overall educational experience","tok_text":"technolog in distanc educ : a global perspect to altern deliveri mechan \n technolog is provid a posit impact on deliveri mechan employ in distanc educ at the univers level . some institut are incorpor distanc educ as a way to extend the classroom . other institut are investig new deliveri mechan , which support a revis perspect on educ . these latter institut are revis their process for interact with student , and take a more \" learner center \" approach to the deliveri of educ . thi articl discuss the impact of technolog on the deliveri mechan employ in distanc educ . a framework is propos here , which present a descript of altern mode of gener deliveri mechan . it is suggest that those institut , which adopt a deliveri mechan employ an asynchron mode , can gain the most benefit from technolog . thi approach seem to repres the onli truli innov use of technolog in distanc educ . the approach creat a student-ori environ while maintain high level of interact , both of which are factor that contribut to student satisfact with their overal educ experi","ordered_present_kp":[13,1015,30,747],"keyphrases":["distance education","global perspective","asynchronous mode","student satisfaction","educational technology","university education","learner centered approach"],"prmu":["P","P","P","P","R","R","R"]} {"id":"1824","title":"Parallel operation of capacity-limited three-phase four-wire active power filters","abstract":"Three-phase four-wire active power filters (APFs) are presented that can be paralleled to enlarge the system capacity and reliability. The APF employs the PWM four-leg voltage-source inverter. A decoupling control approach for the leg connected to the neutral line is proposed such that the switching of all legs has no interaction. Functions of the proposed APF include compensation of reactive power, harmonic current, unbalanced power and zero-sequence current of the load. The objective is to achieve unity power factor, balanced line current and zero neutral-line current. Compensation of all components is capacity-limited, co-operating with the cascaded load current sensing scheme. Multiple APFs can be paralleled to share the load power without requiring any control interconnection. In addition to providing the theoretic bases and detailed design of the APFs, two 6 kVA APFs are implemented. The effectiveness of the proposed method is validated with experimental results","tok_text":"parallel oper of capacity-limit three-phas four-wir activ power filter \n three-phas four-wir activ power filter ( apf ) are present that can be parallel to enlarg the system capac and reliabl . the apf employ the pwm four-leg voltage-sourc invert . 
a decoupl control approach for the leg connect to the neutral line is propos such that the switch of all leg ha no interact . function of the propos apf includ compens of reactiv power , harmon current , unbalanc power and zero-sequ current of the load . the object is to achiev uniti power factor , balanc line current and zero neutral-lin current . compens of all compon is capacity-limit , co-oper with the cascad load current sens scheme . multipl apf can be parallel to share the load power without requir ani control interconnect . in addit to provid the theoret base and detail design of the apf , two 6 kva apf are implement . the effect of the propos method is valid with experiment result","ordered_present_kp":[17,0,213,251,528,549,573,858],"keyphrases":["parallel operation","capacity-limited three-phase four-wire active power filters","PWM four-leg voltage-source inverter","decoupling control approach","unity power factor","balanced line current","zero neutral-line current","6 kVA","leg switching","control design","reactive power compensation","harmonic current compensation","unbalanced power compensation","zero-sequence load current compensation","load power sharing","control performance"],"prmu":["P","P","P","P","P","P","P","P","R","R","R","R","R","R","R","M"]} {"id":"1476","title":"The perceived utility of human and automated aids in a visual detection task","abstract":"Although increases in the use of automation have occurred across society, research has found that human operators often underutilize (disuse) and overly rely on (misuse) automated aids (Parasuraman-Riley (1997)). Nearly 275 Cameron University students participated in 1 of 3 experiments performed to examine the effects of perceived utility (Dzindolet et al. (2001)) on automation use in a visual detection task and to compare reliance on automated aids with reliance on humans. Results revealed a bias for human operators to rely on themselves. Although self-report data indicate a bias toward automated aids over human aids, performance data revealed that participants were more likely to disuse automated aids than to disuse human aids. This discrepancy was accounted for by assuming human operators have a \"perfect automation\" schema. Actual or potential applications of this research include the design of future automated decision aids and training procedures for operators relying on such aids","tok_text":"the perceiv util of human and autom aid in a visual detect task \n although increas in the use of autom have occur across societi , research ha found that human oper often underutil ( disus ) and overli reli on ( misus ) autom aid ( parasuraman-riley ( 1997 ) ) . nearli 275 cameron univers student particip in 1 of 3 experi perform to examin the effect of perceiv util ( dzindolet et al . ( 2001 ) ) on autom use in a visual detect task and to compar relianc on autom aid with relianc on human . result reveal a bia for human oper to reli on themselv . although self-report data indic a bia toward autom aid over human aid , perform data reveal that particip were more like to disus autom aid than to disus human aid . thi discrep wa account for by assum human oper have a \" perfect autom \" schema . 
actual or potenti applic of thi research includ the design of futur autom decis aid and train procedur for oper reli on such aid","ordered_present_kp":[30,45,154,868,30],"keyphrases":["automated aids","automation","visual detection task","human operators","automated decision aids","social process"],"prmu":["P","P","P","P","P","U"]} {"id":"1732","title":"Community spirit","abstract":"IT companies that contribute volunteers, resources or funding to charities and local groups not only make a real difference to their communities but also add value to their businesses. So says a new coalition of IT industry bodies formed to raise awareness of the options for community involvement, promote the business case, and publicise examples of best practice. The BCS, Intellect (formed from the merger of the Computing Services and Software Association and the Federation of the Electronics Industry) and the Worshipful Company of Information Technologists plan to run advisory seminars and provide guidelines on how companies of all sizes can transform their local communities using their specialist IT skills and resources while reaping business benefits","tok_text":"commun spirit \n it compani that contribut volunt , resourc or fund to chariti and local group not onli make a real differ to their commun but also add valu to their busi . so say a new coalit of it industri bodi form to rais awar of the option for commun involv , promot the busi case , and publicis exampl of best practic . the bc , intellect ( form from the merger of the comput servic and softwar associ and the feder of the electron industri ) and the worship compani of inform technologist plan to run advisori seminar and provid guidelin on how compani of all size can transform their local commun use their specialist it skill and resourc while reap busi benefit","ordered_present_kp":[16,657,310],"keyphrases":["IT companies","best practice","business benefits","volunteer staff","resource contribution","charity projects","community projects","staff development"],"prmu":["P","P","P","M","R","M","M","U"]} {"id":"1777","title":"Midlife career choices: how are they different from other career choices?","abstract":"It was 1963 when Candy Start began working in libraries. Libraries seemed to be a refuge from change, a dependable environment devoted primarily to preservation. She was mistaken. Technological changes in every decade of her experience have affected how and where she used her MLS. Far from a static refuge, libraries have proven to be spaceships loaded with precious cargo hurtling into the unknown. The historian in the author says that perhaps libraries have always been like this. This paper looks at a midlife decision point and the choice that this librarian made to move from a point of lessening productivity and interest to one of increasing challenge and contribution. It is a personal narrative of midlife experience from one librarian's point of view. Since writing this article, Candy's career has followed more changes. After selling the WINGS TM system, she has taken her experiences and vision to another library vendor, Gaylord Information Systems, where she serves as a senior product strategist","tok_text":"midlif career choic : how are they differ from other career choic ? \n it wa 1963 when candi start began work in librari . librari seem to be a refug from chang , a depend environ devot primarili to preserv . she wa mistaken . technolog chang in everi decad of her experi have affect how and where she use her ml . 
far from a static refug , librari have proven to be spaceship load with preciou cargo hurtl into the unknown . the historian in the author say that perhap librari have alway been like thi . thi paper look at a midlif decis point and the choic that thi librarian made to move from a point of lessen product and interest to one of increas challeng and contribut . it is a person narr of midlif experi from one librarian 's point of view . sinc write thi articl , candi 's career ha follow more chang . after sell the wing tm system , she ha taken her experi and vision to anoth librari vendor , gaylord inform system , where she serv as a senior product strategist","ordered_present_kp":[0,112,226,612],"keyphrases":["midlife career choices","libraries","technological changes","productivity"],"prmu":["P","P","P","P"]} {"id":"1819","title":"Structural interpretation of matched pole-zero discretisation","abstract":"Deals with matched pole-zero discretisation, which has been used in practice for hand calculations in the digital redesign of continuous-time systems but available only in the transfer-function form. Since this form is inconvenient for characterising the time-domain properties of sampled-data loops and for computerising the design of such systems, a state-space formulation is developed. Under the new interpretation, the matched pole-zero model is shown to be structurally identical to a hold-equivalent discrete-time model, where the generalised hold takes integral part, thus unifying the most widely used discretisation approaches. An algorithm for obtaining the generalised hold function is presented. The hold-equivalent structure of the matched pole-zero model clarifies several discrete-time system properties, such as controllability and observability, and their preservation or loss with a matched pole-zero discretisation. With the proposed formulation, the matched pole-zero, hold-equivalent, and mapping models can now all be constructed with a single schematic model","tok_text":"structur interpret of match pole-zero discretis \n deal with match pole-zero discretis , which ha been use in practic for hand calcul in the digit redesign of continuous-tim system but avail onli in the transfer-funct form . sinc thi form is inconveni for characteris the time-domain properti of sampled-data loop and for computeris the design of such system , a state-spac formul is develop . under the new interpret , the match pole-zero model is shown to be structur ident to a hold-equival discrete-tim model , where the generalis hold take integr part , thu unifi the most wide use discretis approach . an algorithm for obtain the generalis hold function is present . the hold-equival structur of the match pole-zero model clarifi sever discrete-tim system properti , such as control and observ , and their preserv or loss with a match pole-zero discretis . 
with the propos formul , the match pole-zero , hold-equival , and map model can now all be construct with a singl schemat model","ordered_present_kp":[0,22,158,271,295,362,480,780,792],"keyphrases":["structural interpretation","matched pole-zero discretisation","continuous-time systems","time-domain properties","sampled-data loops","state-space formulation","hold-equivalent discrete-time model","controllability","observability","closed-loop system","digital simulations"],"prmu":["P","P","P","P","P","P","P","P","P","M","M"]} {"id":"186","title":"The diameter of a long-range percolation graph","abstract":"We consider the following long-range percolation model: an undirected graph with the node set {0, 1, . . . , N}\/sup d\/, has edges (x, y) selected with probability approximately= beta \/||x - y||\/sup s\/ if ||x - y|| > 1, and with probability 1 if ||x - y|| = 1, for some parameters beta , s > 0. This model was introduced by who obtained bounds on the diameter of this graph for the one-dimensional case d = 1 and for various values of s, but left cases s = 1, 2 open. We show that, with high probability, the diameter of this graph is Theta (log N\/log log N) when s = d, and, for some constants 0 < eta \/sub 1\/ < eta \/sub 2\/ < 1, it is at most N\/sup eta 2\/ when s = 2d, and is at least N\/sup eta 1\/ when d = 1, s = 2, beta < 1 or when s > 2d. We also provide a simple proof that the diameter is at most log\/sup O(1)\/ N with high probability, when d < s < 2d, established previously in Benjamini and Berger (2001)","tok_text":"the diamet of a long-rang percol graph \n we consid the follow long-rang percol model : an undirect graph with the node set { 0 , 1 , . . . , n}\/sup d\/ , ha edg ( x , y ) select with probabl approximately= beta \/||x - y||\/sup s\/ if ||x - y|| > 1 , and with probabl 1 if ||x - y|| = 1 , for some paramet beta , s > 0 . thi model wa introduc by who obtain bound on the diamet of thi graph for the one-dimension case d = 1 and for variou valu of s , but left case s = 1 , 2 open . we show that , with high probabl , the diamet of thi graph is theta ( log n \/ log log n ) when s = d , and , for some constant 0 < eta \/sub 1\/ < eta \/sub 2\/ < 1 , it is at most n \/ sup eta 2\/ when s = 2d , and is at least n \/ sup eta 1\/ when d = 1 , s = 2 , beta < 1 or when s > 2d . we also provid a simpl proof that the diamet is at most log \/ sup o(1)\/ n with high probabl , when d < s < 2d , establish previous in benjamini and berger ( 2001 )","ordered_present_kp":[62,90,182,26],"keyphrases":["percolation","long-range percolation model","undirected graph","probability","positive probability","networks","random graph"],"prmu":["P","P","P","P","M","U","M"]} {"id":"1904","title":"Component support in PLT scheme","abstract":"PLT Scheme (DrScheme and MzScheme) supports the Component Object Model (COM) standard with two pieces of software. The first piece is MzCOM, a COM class that makes a Scheme evaluator available to COM clients. With MzCOM, programmers can embed Scheme code in programs written in mainstream languages such as C++ or Visual BASIC. Some applications can also be used as MzCOM clients. The other piece of component-support software is MysterX, which makes COM classes available to PLT Scheme programs. When needed, MysterX uses a programmable Web browser to display COM objects. 
We describe the technical issues encountered in building these two systems and sketch some applications","tok_text":"compon support in plt scheme \n plt scheme ( drscheme and mzscheme ) support the compon object model ( com ) standard with two piec of softwar . the first piec is mzcom , a com class that make a scheme evalu avail to com client . with mzcom , programm can emb scheme code in program written in mainstream languag such as c++ or visual basic . some applic can also be use as mzcom client . the other piec of component-support softwar is mysterx , which make com class avail to plt scheme program . when need , mysterx use a programm web browser to display com object . we describ the technic issu encount in build these two system and sketch some applic","ordered_present_kp":[18,80,162,531],"keyphrases":["PLT Scheme","Component Object Model","MzCOM","Web browser","reuse"],"prmu":["P","P","P","P","U"]} {"id":"1596","title":"Wavelet collocation methods for a first kind boundary integral equation in acoustic scattering","abstract":"In this paper we consider a wavelet algorithm for the piecewise constant collocation method applied to the boundary element solution of a first kind integral equation arising in acoustic scattering. The conventional stiffness matrix is transformed into the corresponding matrix with respect to wavelet bases, and it is approximated by a compressed matrix. Finally, the stiffness matrix is multiplied by diagonal preconditioners such that the resulting matrix of the system of linear equations is well conditioned and sparse. Using this matrix, the boundary integral equation can be solved effectively","tok_text":"wavelet colloc method for a first kind boundari integr equat in acoust scatter \n in thi paper we consid a wavelet algorithm for the piecewis constant colloc method appli to the boundari element solut of a first kind integr equat aris in acoust scatter . the convent stiff matrix is transform into the correspond matrix with respect to wavelet base , and it is approxim by a compress matrix . final , the stiff matrix is multipli by diagon precondition such that the result matrix of the system of linear equat is well condit and spars . use thi matrix , the boundari integr equat can be solv effect","ordered_present_kp":[132,106,177,39,64,266,497],"keyphrases":["boundary integral equation","acoustic scattering","wavelet algorithm","piecewise constant collocation","boundary element solution","stiffness matrix","linear equations","first kind integral operators","wavelet transform","computational complexity"],"prmu":["P","P","P","P","P","P","P","M","R","U"]} {"id":"1697","title":"Exact frequency-domain reconstruction for thermoacoustic tomography. II. Cylindrical geometry","abstract":"For pt. I see ibid., vol. 21, no. 7, p. 823-8 (2002). Microwave-induced thermoacoustic tomography (TAT) in a cylindrical configuration is developed to image biological tissue. Thermoacoustic signals are acquired by scanning a flat ultrasonic transducer. Using a new expansion of a spherical wave in cylindrical coordinates, we apply the Fourier and Hankel transforms to TAT and obtain an exact frequency-domain reconstruction method. The effect of discrete spatial sampling on image quality is analyzed. An aliasing-proof reconstruction method is proposed. Numerical and experimental results are included","tok_text":"exact frequency-domain reconstruct for thermoacoust tomographi . ii . cylindr geometri \n for pt . i see ibid . , vol . 21 , no . 7 , p. 823 - 8 ( 2002 ) . 
microwave-induc thermoacoust tomographi ( tat ) in a cylindr configur is develop to imag biolog tissu . thermoacoust signal are acquir by scan a flat ultrason transduc . use a new expans of a spheric wave in cylindr coordin , we appli the fourier and hankel transform to tat and obtain an exact frequency-domain reconstruct method . the effect of discret spatial sampl on imag qualiti is analyz . an aliasing-proof reconstruct method is propos . numer and experiment result are includ","ordered_present_kp":[6,300,39,70,555,406],"keyphrases":["frequency-domain reconstruction","thermoacoustic tomography","cylindrical geometry","flat ultrasonic transducer","Hankel transform","aliasing-proof reconstruction method","medical diagnostic imaging","discrete spatial sampling effect","ultrasound imaging","spherical wave expansion"],"prmu":["P","P","P","P","P","P","M","R","M","R"]} {"id":"1489","title":"An eight-year study of Internet-based remote medical counselling","abstract":"We carried out a prospective study of an Internet-based remote counselling service. A total of 15,456 Internet users visited the Web site over eight years. From these, 1500 users were randomly selected for analysis. Medical counselling had been granted to 901 of the people requesting it (60%). One hundred and sixty-four physicians formed project groups to process the requests and responded using email. The distribution of patients using the service was similar to the availability of the Internet: 78% were from the European Union, North America and Australia. Sixty-seven per cent of the patients lived in urban areas and the remainder were residents of remote rural areas with limited local medical coverage. Sixty-five per cent of the requests were about problems of internal medicine and 30% of the requests concerned surgical issues. The remaining 5% of the patients sought information about recent developments, such as molecular medicine or aviation medicine. During the project, our portal became inaccessible five times, and counselling was not possible on 44 days. There was no hacking of the Web site. Internet-based medical counselling is a helpful addition to conventional practice","tok_text":"an eight-year studi of internet-bas remot medic counsel \n we carri out a prospect studi of an internet-bas remot counsel servic . a total of 15,456 internet user visit the web site over eight year . from these , 1500 user were randomli select for analysi . medic counsel had been grant to 901 of the peopl request it ( 60 % ) . one hundr and sixty-four physician form project group to process the request and respond use email . the distribut of patient use the servic wa similar to the avail of the internet : 78 % were from the european union , north america and australia . sixty-seven per cent of the patient live in urban area and the remaind were resid of remot rural area with limit local medic coverag . sixty-f per cent of the request were about problem of intern medicin and 30 % of the request concern surgic issu . the remain 5 % of the patient sought inform about recent develop , such as molecular medicin or aviat medicin . dure the project , our portal becam inaccess five time , and counsel wa not possibl on 44 day . there wa no hack of the web site . 
internet-bas medic counsel is a help addit to convent practic","ordered_present_kp":[23,148,172,421,621,662,813,962],"keyphrases":["Internet-based remote medical counselling","Internet users","Web site","email","urban areas","remote rural areas","surgical issues","portal","telemedicine","medical education"],"prmu":["P","P","P","P","P","P","P","P","U","M"]} {"id":"1730","title":"Meeting of minds","abstract":"Technical specialists need to think about their role in IT projects and how they communicate with end-users and other participants to ensure they contribute fully as team members. It is especially important to communicate and document trade-offs that may have to be made, including the rationale behind them, so that if requirements change, the impact and decisions can be readily communicated to the stakeholders","tok_text":"meet of mind \n technic specialist need to think about their role in it project and how they commun with end-us and other particip to ensur they contribut fulli as team member . it is especi import to commun and document trade-off that may have to be made , includ the rational behind them , so that if requir chang , the impact and decis can be readili commun to the stakehold","ordered_present_kp":[15,68,92,104],"keyphrases":["technical specialists","IT projects","communication","end-users"],"prmu":["P","P","P","P"]} {"id":"1775","title":"Are we there yet?: facing the never-ending speed and change of technology in midlife","abstract":"This essay is a personal reflection on entering librarianship in middle age at a time when the profession, like society in general, is experiencing rapidly accelerating change. Much of this change is due to the increased use of computers and information technologies in the library setting. These aids in the production, collection, storage, retrieval, and dissemination of the collective information, knowledge, and sometimes wisdom of the past and the contemporary world can exhilarate or burden depending on one's worldview, the organization, and the flexibility of the workplace. This writer finds herself working in a library where everyone is expected continually to explore and use new ways of working and providing library service to a campus and a wider community. No time is spent in reflecting on what was, but all efforts are to anticipate and prepare for what will be","tok_text":"are we there yet ? : face the never-end speed and chang of technolog in midlif \n thi essay is a person reflect on enter librarianship in middl age at a time when the profess , like societi in gener , is experienc rapidli acceler chang . much of thi chang is due to the increas use of comput and inform technolog in the librari set . these aid in the product , collect , storag , retriev , and dissemin of the collect inform , knowledg , and sometim wisdom of the past and the contemporari world can exhilar or burden depend on one 's worldview , the organ , and the flexibl of the workplac . thi writer find herself work in a librari where everyon is expect continu to explor and use new way of work and provid librari servic to a campu and a wider commun . 
no time is spent in reflect on what wa , but all effort are to anticip and prepar for what will be","ordered_present_kp":[120,137,284,295,393,379,370,360],"keyphrases":["librarianship","middle age","computers","information technologies","collection","storage","retrieval","dissemination","changing technology"],"prmu":["P","P","P","P","P","P","P","P","R"]} {"id":"1788","title":"Resolving Web user on the fly","abstract":"Identity authentication systems and procedures are rapidly becoming central issues in the practice and study of information systems development and security. Requirements for Web transaction security (WTS) include strong authentication of a user, non-repudiation and encryption of all traffic. In this paper, we present an effective mechanism involving two different channels, which addresses the prime concerns involved in the security of electronic commerce transactions (ECT) viz. user authentication and non-repudiation. Although the product is primarily targeted to provide a fillip to transactions carried out over the Web, this product can also be effectively used for non-Internet transactions that are carried out where user authentication is required","tok_text":"resolv web user on the fli \n ident authent system and procedur are rapidli becom central issu in the practic and studi of inform system develop and secur . requir for web transact secur ( wt ) includ strong authent of a user , non-repudi and encrypt of all traffic . in thi paper , we present an effect mechan involv two differ channel , which address the prime concern involv in the secur of electron commerc transact ( ect ) viz . user authent and non-repudi . although the product is primarili target to provid a fillip to transact carri out over the web , thi product can also be effect use for non-internet transact that are carri out where user authent is requir","ordered_present_kp":[29,122,167,242,257,393],"keyphrases":["identity authentication systems","information systems development","Web transaction security","encryption","traffic","electronic commerce transactions","information systems security","nonrepudiation"],"prmu":["P","P","P","P","P","P","R","U"]} {"id":"1474","title":"Contrast sensitivity in a dynamic environment: effects of target conditions and visual impairment","abstract":"Contrast sensitivity was determined as a function of target velocity (0 degrees -120 degrees \/s) over a variety of viewing conditions. In Experiment 1, measurements of dynamic contrast sensitivity were determined for observers as a function of target velocity for letter stimuli. Significant main effects were found for target velocity, target size, and target duration, but significant interactions among the variables indicated especially pronounced adverse effects of increasing target velocity for small targets and brief durations. In Experiment 2, the effects of simulated cataracts were determined. Although the simulated impairment had no effect on traditional acuity scores, dynamic contrast sensitivity was markedly reduced. Results are discussed in terms of dynamic contrast sensitivity as a useful composite measure of visual functioning that may provide a better overall picture of an individual's visual functioning than does traditional static acuity, dynamic acuity, or contrast sensitivity alone. 
The measure of dynamic contrast sensitivity may increase understanding of the practical effects of various conditions, such as aging or disease, on the visual system, or it may allow improved prediction of individuals' performance in visually dynamic situations","tok_text":"contrast sensit in a dynam environ : effect of target condit and visual impair \n contrast sensit wa determin as a function of target veloc ( 0 degre -120 degre \/s ) over a varieti of view condit . in experi 1 , measur of dynam contrast sensit were determin for observ as a function of target veloc for letter stimuli . signific main effect were found for target veloc , target size , and target durat , but signific interact among the variabl indic especi pronounc advers effect of increas target veloc for small target and brief durat . in experi 2 , the effect of simul cataract were determin . although the simul impair had no effect on tradit acuiti score , dynam contrast sensit wa markedli reduc . result are discuss in term of dynam contrast sensit as a use composit measur of visual function that may provid a better overal pictur of an individu 's visual function than doe tradit static acuiti , dynam acuiti , or contrast sensit alon . the measur of dynam contrast sensit may increas understand of the practic effect of variou condit , such as age or diseas , on the visual system , or it may allow improv predict of individu ' perform in visual dynam situat","ordered_present_kp":[0,21,47,65,221,126,370,388,647,1054,1061],"keyphrases":["contrast sensitivity","dynamic environment","target conditions","visual impairment","target velocity","dynamic contrast sensitivity","target size","target duration","acuity scores","aging","disease"],"prmu":["P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1695","title":"Medical image computing at the Institute of Mathematics and Computer Science in Medicine, University Hospital Hamburg-Eppendorf","abstract":"The author reviews the history of medical image computing at his institute, summarizes the achievements, sketches some of the difficulties encountered, and draws conclusions that might be of interest especially to people new to the field. The origin and history section provides a chronology of this work, emphasizing the milestones reached during the past three decades. In accordance with the author's group's focus on imaging, the paper is accompanied by many pictures, some of which, he thinks, are of historical value","tok_text":"medic imag comput at the institut of mathemat and comput scienc in medicin , univers hospit hamburg-eppendorf \n the author review the histori of medic imag comput at hi institut , summar the achiev , sketch some of the difficulti encount , and draw conclus that might be of interest especi to peopl new to the field . the origin and histori section provid a chronolog of thi work , emphas the mileston reach dure the past three decad . 
in accord with the author 's group 's focu on imag , the paper is accompani by mani pictur , some of which , he think , are of histor valu","ordered_present_kp":[25,77,563,219],"keyphrases":["Institute of Mathematics and Computer Science in Medicine","University Hospital Hamburg-Eppendorf","difficulties encountered","historical value","medical image computing history","medical diagnostic imaging","work chronology"],"prmu":["P","P","P","P","R","M","R"]} {"id":"1569","title":"An interactive self-replicator implemented in hardware","abstract":"Self-replicating loops presented to date are essentially worlds unto themselves, inaccessible to the observer once the replication process is launched. We present the design of an interactive self-replicating loop of arbitrary size, wherein the user can physically control the loop's replication and induce its destruction. After introducing the BioWall, a reconfigurable electronic wall for bio-inspired applications, we describe the design of our novel loop and delineate its hardware implementation in the wall","tok_text":"an interact self-repl implement in hardwar \n self-repl loop present to date are essenti world unto themselv , inaccess to the observ onc the replic process is launch . we present the design of an interact self-repl loop of arbitrari size , wherein the user can physic control the loop 's replic and induc it destruct . after introduc the biowal , a reconfigur electron wall for bio-inspir applic , we describ the design of our novel loop and delin it hardwar implement in the wall","ordered_present_kp":[3,196,338,349,378,451,12],"keyphrases":["interactive self-replicator","self-replication","interactive self-replicating loop","BioWall","reconfigurable electronic wall","bio-inspired applications","hardware implementation","field programmable gate array","cellular automata","reconfigurable computing","artificial life"],"prmu":["P","P","P","P","P","P","P","U","U","M","U"]} {"id":"179","title":"Document-based workflow modeling: a case-based reasoning approach","abstract":"A workflow model is useful for business process analysis. A well-built workflow can help a company streamline its internal processes by reducing overhead. The results of workflow modeling need to be managed as information assets in a systematic fashion. Reusing these results is likely to enhance the quality of the modeling. Therefore, this paper proposes a document-based workflow modeling mechanism, which employs a case-based reasoning (CBR) technique for the effective reuse of design outputs. A repository is proposed to support this CBR process. A real-life case is illustrated to demonstrate the usefulness of our approach","tok_text":"document-bas workflow model : a case-bas reason approach \n a workflow model is use for busi process analysi . a well-built workflow can help a compani streamlin it intern process by reduc overhead . the result of workflow model need to be manag as inform asset in a systemat fashion . reus these result is like to enhanc the qualiti of the model . therefor , thi paper propos a document-bas workflow model mechan , which employ a case-bas reason ( cbr ) techniqu for the effect reus of design output . a repositori is propos to support thi cbr process . 
a real-lif case is illustr to demonstr the use of our approach","ordered_present_kp":[0,32,87,143,248],"keyphrases":["document-based workflow modeling","case-based reasoning","business process analysis","company","information assets","design output reuse"],"prmu":["P","P","P","P","P","R"]} {"id":"184","title":"On the expected value of the minimum assignment","abstract":"The minimum k-assignment of an m*n matrix X is the minimum sum of k entries of X, no two of which belong to the same row or column. Coppersmith and Sorkin conjectured that if X is generated by choosing each entry independently from the exponential distribution with mean 1, then the expected value of its minimum k-assignment is given by an explicit formula, which has been proven only in a few cases. In this paper we describe our efforts to prove the Coppersmith-Sorkin conjecture by considering the more general situation where the entries x\/sub ij\/ of X are chosen independently from different distributions. In particular, we require that x\/sub ij\/ be chosen from the exponential distribution with mean 1\/r\/sub i\/c\/sub j\/. We conjecture an explicit formula for the expected value of the minimum k-assignment of such X and give evidence for this formula","tok_text":"on the expect valu of the minimum assign \n the minimum k-assign of an m*n matrix x is the minimum sum of k entri of x , no two of which belong to the same row or column . coppersmith and sorkin conjectur that if x is gener by choos each entri independ from the exponenti distribut with mean 1 , then the expect valu of it minimum k-assign is given by an explicit formula , which ha been proven onli in a few case . in thi paper we describ our effort to prove the coppersmith-sorkin conjectur by consid the more gener situat where the entri x \/ sub ij\/ of x are chosen independ from differ distribut . in particular , we requir that x \/ sub ij\/ be chosen from the exponenti distribut with mean 1 \/ r \/ sub i \/ c \/ sub j\/. we conjectur an explicit formula for the expect valu of the minimum k-assign of such x and give evid for thi formula","ordered_present_kp":[47,261],"keyphrases":["minimum k-assignment","exponential distribution","m * n matrix","rational function","bipartite graph"],"prmu":["P","P","M","U","U"]} {"id":"1906","title":"Integrated process control using an in situ sensor for etch","abstract":"The migration to tighter geometries and more complex process sequence integration schemes requires having the ability to compensate for upstream deviations from target specifications. Doing so ensures that-downstream process sequences operate on work-in-progress that is well within control. Because point-of-use visibility of work-in-progress quality has become of paramount concern in the industry's drive to reduce scrap and improve yield, controlling trench depth has assumed greater importance. An integrated, interferometric based, rate monitor for etch-to-depth and spacer etch applications has been developed for controlling this parameter. This article demonstrates that the integrated rate monitor, using polarization and digital signal processing, enhances control etch-to-depth processes and can also be implemented as a predictive endpoint in a wafer manufacturing environment for dual damascene trench etch and spacer etch applications","tok_text":"integr process control use an in situ sensor for etch \n the migrat to tighter geometri and more complex process sequenc integr scheme requir have the abil to compens for upstream deviat from target specif . 
do so ensur that-downstream process sequenc oper on work-in-progress that is well within control . becaus point-of-us visibl of work-in-progress qualiti ha becom of paramount concern in the industri 's drive to reduc scrap and improv yield , control trench depth ha assum greater import . an integr , interferometr base , rate monitor for etch-to-depth and spacer etch applic ha been develop for control thi paramet . thi articl demonstr that the integr rate monitor , use polar and digit signal process , enhanc control etch-to-depth process and can also be implement as a predict endpoint in a wafer manufactur environ for dual damascen trench etch and spacer etch applic","ordered_present_kp":[0,680,690,803,832,564,96,191,313,335],"keyphrases":["integrated process control","complex process sequence integration schemes","target specifications","point-of-use visibility","work-in-progress quality","spacer etch applications","polarization","digital signal processing","wafer manufacturing environment","dual damascene trench etch","interferometric in situ etch sensor","process predictive endpoint","IC geometry","upstream deviation compensation","downstream process sequences","scrap reduction","yield improvement","trench depth control","interferometry","integrated etch rate monitor"],"prmu":["P","P","P","P","P","P","P","P","P","P","R","R","M","R","M","M","R","R","U","R"]} {"id":"1594","title":"Training multilayer perceptrons via minimization of sum of ridge functions","abstract":"Motivated by the problem of training multilayer perceptrons in neural networks, we consider the problem of minimizing E(x)= Sigma \/sub i=1\/\/sup n\/ f\/sub i\/( xi \/sub i\/.x), where xi \/sub i\/ in R\/sup S\/, 1or= 0} are investigated, where || . ||\/sub p\/ is the usual vector norm in C\/sup n\/ resp. R\/sup n\/, for p epsilon [1, o infinity ]. Moreover, formulae for the first three right derivatives D\/sub +\/\/sup k\/||s(t)||\/sub p\/, k = 1, 2,3 are determined. These formulae are applied to vibration problems by computing the best upper bounds on ||s(t)||\/sub p\/ in certain classes of bounds. These results cannot be obtained by the methods used so far. The systematic use of the differential calculus for vector norms, as done here for the first time, could lead to major advances also in other branches of mathematics and other sciences","tok_text":"differenti calculu for p-norm of complex-valu vector function with applic \n for complex-valu n-dimension vector function t to s(t ) , suppos to be suffici smooth , the differenti properti of the map t to ||s(t)||\/sub p\/ at everi point t = t \/ sub 0\/ epsilon r \/ sub 0\/\/sup + \/:= { t epsilon r | t > or= 0 } are investig , where || . ||\/sub p\/ is the usual vector norm in c \/ sup n\/ resp . r \/ sup n\/ , for p epsilon [ 1 , o infin ] . moreov , formula for the first three right deriv d \/ sub + \/\/sup k\/||s(t)||\/sub p\/ , k = 1 , 2,3 are determin . these formula are appli to vibrat problem by comput the best upper bound on ||s(t)||\/sub p\/ in certain class of bound . these result can not be obtain by the method use so far . 
the systemat use of the differenti calculu for vector norm , as done here for the first time , could lead to major advanc also in other branch of mathemat and other scienc","ordered_present_kp":[0,46,195,573,356],"keyphrases":["differential calculus","vector functions","mapping","vector norms","vibration problems"],"prmu":["P","P","P","P","P"]} {"id":"1511","title":"Efficient algorithms for stiff elliptic problems with large parameters","abstract":"We consider a finite element approximation and iteration algorithms for solving stiff elliptic boundary value problems with large parameters in front of a higher derivative. The convergence rate of the algorithms is independent of the spread in coefficients and a discretization parameter","tok_text":"effici algorithm for stiff ellipt problem with larg paramet \n we consid a finit element approxim and iter algorithm for solv stiff ellipt boundari valu problem with larg paramet in front of a higher deriv . the converg rate of the algorithm is independ of the spread in coeffici and a discret paramet","ordered_present_kp":[74,101,125,47,192,0,211],"keyphrases":["efficient algorithms","large parameters","finite element approximation","iteration algorithms","stiff elliptic boundary value problems","higher derivative","convergence rate"],"prmu":["P","P","P","P","P","P","P"]} {"id":"1863","title":"Information systems project failure: a comparative study of two countries","abstract":"Many organizations, regardless of size, engage in at least one, and often many information system projects each year. Many of these projects consume massive amounts of resources, and may cost as little as a few thousand dollars to ten, and even hundreds of millions of dollars. Needless to say, the investment of time and resources into these ventures are of significant concern to chief information officers (CIOs), executives staff members, project managers, and others in leadership positions. This paper describes the results of a survey performed between Australia and the United States regarding factors leading to IS project failure. The findings suggest that, among other things, end user involvement and executive management leadership are key indicators influencing IS project failure","tok_text":"inform system project failur : a compar studi of two countri \n mani organ , regardless of size , engag in at least one , and often mani inform system project each year . mani of these project consum massiv amount of resourc , and may cost as littl as a few thousand dollar to ten , and even hundr of million of dollar . needless to say , the invest of time and resourc into these ventur are of signific concern to chief inform offic ( cio ) , execut staff member , project manag , and other in leadership posit . thi paper describ the result of a survey perform between australia and the unit state regard factor lead to is project failur . 
the find suggest that , among other thing , end user involv and execut manag leadership are key indic influenc is project failur","ordered_present_kp":[0,570,588,685,705],"keyphrases":["information systems project failure","Australia","United States","end user involvement","executive management leadership"],"prmu":["P","P","P","P","P"]} {"id":"1826","title":"Modeling shape and topology of low-resolution density maps of biological macromolecules","abstract":"We develop an efficient way of representing the geometry and topology of volumetric datasets of biological structures from medium to low resolution, aiming at storing and querying them in a database framework. We make use of a new vector quantization algorithm to select the points within the macromolecule that best approximate the probability density function of the original volume data. Connectivity among points is obtained with the use of the alpha shapes theory. This novel data representation has a number of interesting characteristics, such as (1) it allows us to automatically segment and quantify a number of important structural features from low-resolution maps, such as cavities and channels, opening the possibility of querying large collections of maps on the basis of these quantitative structural features; (2) it provides a compact representation in terms of size; (3) it contains a subset of three-dimensional points that optimally quantify the densities of medium resolution data; and (4) a general model of the geometry and topology of the macromolecule (as opposite to a spatially unrelated bunch of voxels) is easily obtained by the use of the alpha shapes theory","tok_text":"model shape and topolog of low-resolut densiti map of biolog macromolecul \n we develop an effici way of repres the geometri and topolog of volumetr dataset of biolog structur from medium to low resolut , aim at store and queri them in a databas framework . we make use of a new vector quantiz algorithm to select the point within the macromolecul that best approxim the probabl densiti function of the origin volum data . connect among point is obtain with the use of the alpha shape theori . 
thi novel data represent ha a number of interest characterist , such as ( 1 ) it allow us to automat segment and quantifi a number of import structur featur from low-resolut map , such as caviti and channel , open the possibl of queri larg collect of map on the basi of these quantit structur featur ; ( 2 ) it provid a compact represent in term of size ; ( 3 ) it contain a subset of three-dimension point that optim quantifi the densiti of medium resolut data ; and ( 4 ) a gener model of the geometri and topolog of the macromolecul ( as opposit to a spatial unrel bunch of voxel ) is easili obtain by the use of the alpha shape theori","ordered_present_kp":[115,16,139,159,237,278,27,54,0,370,503,634,681,692,422,813,878,935,969,402,472],"keyphrases":["modeling","topology","low-resolution density maps","biological macromolecules","geometry","volumetric datasets","biological structures","database framework","vector quantization algorithm","probability density function","original volume data","connectivity","alpha shapes theory","data representation","structural features","cavities","channels","compact representation","three-dimensional points","medium resolution data","general model"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1748","title":"On a general constitutive description for the inelastic and failure behavior of fibrous laminates. I. Lamina theory","abstract":"It is well known that a structural design with isotropic materials can only be accomplished based on a stress failure criterion. This is, however, generally not true with laminated composites. Only when the laminate is subjected to an in-plane load, can the ultimate failure of the laminate correspond to its last-ply failure, and hence a stress failure criterion may be sufficient to detect the maximum load that can be sustained by the laminate. Even in such a case, the load shared by each lamina in the laminate cannot be correctly determined if the lamina instantaneous stiffness matrix is inaccurately provided, since the lamina is always statically indeterminate in the laminate. If, however, the laminate is subjected to a lateral load, its ultimate failure occurs before last-ply failure and use of the stress failure criterion is no longer sufficient; an additional critical deflection or curvature condition must also be employed. This necessitates development of an efficient constitutive relationship for laminated composites in order that the laminate strains\/deflections up to ultimate failure can be accurately calculated. A general constitutive description for the thermomechanical response of a fibrous laminate up to ultimate failure with applications to various fibrous laminates is presented in the two papers. The constitutive relationship is obtained by combining classical lamination theory with a recently developed bridging micromechanics model, through a layer-by-layer analysis. This paper focuses on lamina analysis","tok_text":"on a gener constitut descript for the inelast and failur behavior of fibrou lamin . i. lamina theori \n it is well known that a structur design with isotrop materi can onli be accomplish base on a stress failur criterion . thi is , howev , gener not true with lamin composit . onli when the lamin is subject to an in-plan load , can the ultim failur of the lamin correspond to it last-pli failur , and henc a stress failur criterion may be suffici to detect the maximum load that can be sustain by the lamin . 
even in such a case , the load share by each lamina in the lamin can not be correctli determin if the lamina instantan stiff matrix is inaccur provid , sinc the lamina is alway static indetermin in the lamin . if , howev , the lamin is subject to a later load , it ultim failur occur befor last-pli failur and use of the stress failur criterion is no longer suffici ; an addit critic deflect or curvatur condit must also be employ . thi necessit develop of an effici constitut relationship for lamin composit in order that the lamin strain \/ deflect up to ultim failur can be accur calcul . a gener constitut descript for the thermomechan respons of a fibrou lamin up to ultim failur with applic to variou fibrou lamin is present in the two paper . the constitut relationship is obtain by combin classic lamin theori with a recent develop bridg micromechan model , through a layer-by-lay analysi . thi paper focus on lamina analysi","ordered_present_kp":[5,50,69,87,127,148,196,313,618,758,379,265,1036,1135,1384,1354],"keyphrases":["general constitutive description","failure behavior","fibrous laminates","lamina theory","structural design","isotropic materials","stress failure criterion","composites","in-plane load","last-ply failure","instantaneous stiffness matrix","lateral load","laminate strains","thermomechanical response","micromechanics model","layer-by-layer analysis","inelastic behavior","critical deflection condition","critical curvature condition","laminate deflections","multidirectional tape laminae","woven fabric composites","braided fabric composites","knitted fabric reinforced composites","elastoplasticity","elastic-viscoplasticity"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","P","R","R","R","R","M","M","M","M","U","U"]} {"id":"1570","title":"Self-reproduction in three-dimensional reversible cellular space","abstract":"Due to inevitable power dissipation, it is said that nano-scaled computing devices should perform their computing processes in a reversible manner. This will be a large problem in constructing three-dimensional nano-scaled functional objects. Reversible cellular automata (RCA) are used for modeling physical phenomena such as power dissipation, by studying the dissipation of garbage signals. We construct a three-dimensional self-inspective self-reproducing reversible cellular automaton by extending the two-dimensional version SR\/sub 8\/. It can self-reproduce various patterns in three-dimensional reversible cellular space without dissipating garbage signals","tok_text":"self-reproduct in three-dimension revers cellular space \n due to inevit power dissip , it is said that nano-sc comput devic should perform their comput process in a revers manner . thi will be a larg problem in construct three-dimension nano-sc function object . revers cellular automata ( rca ) are use for model physic phenomena such as power dissip , by studi the dissip of garbag signal . we construct a three-dimension self-inspect self-reproduc revers cellular automaton by extend the two-dimension version sr \/ sub 8\/. 
it can self-reproduc variou pattern in three-dimension revers cellular space without dissip garbag signal","ordered_present_kp":[0,103,72,263,18],"keyphrases":["self-reproduction","three-dimensional reversible cellular space","power dissipation","nano-scaled computing devices","reversible cellular automata","3D self-inspective self-reproducing cellular automata","artificial life"],"prmu":["P","P","P","P","P","M","U"]} {"id":"1535","title":"Hot controllers","abstract":"Over the last few years, the semiconductor industry has put much emphasis on ways to improve the accuracy of thermal mass flow controllers (TMFCs). Although issues involving TMFC mounting orientation and pressure effects have received much attention, little has been done to address the effect of changes in ambient temperature or process gas temperature. Scientists and engineers at Qualiflow have succeeded to solve the problem using a temperature correction algorithm for digital TMFCs. Using an in situ environmental temperature compensation technique, we calculated correction factors for the temperature effect and obtained satisfactory results with both the traditional sensor and the new, improved thin-film sensors","tok_text":"hot control \n over the last few year , the semiconductor industri ha put much emphasi on way to improv the accuraci of thermal mass flow control ( tmfc ) . although issu involv tmfc mount orient and pressur effect have receiv much attent , littl ha been done to address the effect of chang in ambient temperatur or process ga temperatur . scientist and engin at qualiflow have succeed to solv the problem use a temperatur correct algorithm for digit tmfc . use an in situ environment temperatur compens techniqu , we calcul correct factor for the temperatur effect and obtain satisfactori result with both the tradit sensor and the new , improv thin-film sensor","ordered_present_kp":[119,411,464],"keyphrases":["thermal mass flow controller","temperature correction algorithm","in situ environmental temperature compensation","semiconductor manufacturing","process gas flow"],"prmu":["P","P","P","M","R"]} {"id":"160","title":"Taming the paper tiger [paperwork organization]","abstract":"Generally acknowledged as a critical problem for many information professionals, the massive flow of documents, paper trails, and information needs efficient and dependable approaches for processing and storing and finding items and information","tok_text":"tame the paper tiger [ paperwork organ ] \n gener acknowledg as a critic problem for mani inform profession , the massiv flow of document , paper trail , and inform need effici and depend approach for process and store and find item and inform","ordered_present_kp":[23,89],"keyphrases":["paperwork organization","information professionals","information processing","information storage","information retrieval"],"prmu":["P","P","R","M","M"]} {"id":"1671","title":"Cane railway scheduling via constraint logic programming: labelling order and constraints in a real-life application","abstract":"In Australia, cane transport is the largest unit cost in the manufacturing of raw sugar, making up around 35% of the total manufacturing costs. Producing efficient schedules for the cane railways can result in significant cost savings. The paper presents a study using constraint logic programming (CLP) to solve the cane transport scheduling problem. 
Tailored heuristic labelling order and constraints strategies are proposed and encouraging results of application to several test problems and one real-life case are presented. The preliminary results demonstrate that CLP can be used as an effective tool for solving the cane transport scheduling problem, with a potential decrease in development costs of the scheduling system. It can also be used as an efficient tool for rescheduling tasks which the existing cane transport scheduling system cannot perform well","tok_text":"cane railway schedul via constraint logic program : label order and constraint in a real-lif applic \n in australia , cane transport is the largest unit cost in the manufactur of raw sugar , make up around 35 % of the total manufactur cost . produc effici schedul for the cane railway can result in signific cost save . the paper present a studi use constraint logic program ( clp ) to solv the cane transport schedul problem . tailor heurist label order and constraint strategi are propos and encourag result of applic to sever test problem and one real-lif case are present . the preliminari result demonstr that clp can be use as an effect tool for solv the cane transport schedul problem , with a potenti decreas in develop cost of the schedul system . it can also be use as an effici tool for reschedul task which the exist cane transport schedul system can not perform well","ordered_present_kp":[0,25,117,178,217,307,434,458],"keyphrases":["cane railway scheduling","constraint logic programming","cane transport","raw sugar","total manufacturing costs","cost savings","heuristic labelling order","constraints strategies"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"1634","title":"Maple 8 keeps everyone happy","abstract":"The author is impressed with the upgrade to the mathematics package Maple 8, finding it genuinely useful to scientists and educators. The developments Waterloo Maple class as revolutionary include a student calculus package, and Maplets. The first provides a high-level command set for calculus exploration and plotting (removing the need to work with, say, plot primitives). The second is a package for hand-coding custom graphical user interfaces (GUIs) using elements such as check boxes, radio buttons, slider bars and pull-down menus. When called, a Maplet launches a runtime Java environment that pops up a window-analogous to a Java applet-to perform a programmed routine, if required passing the result back to the Maple worksheet","tok_text":"mapl 8 keep everyon happi \n the author is impress with the upgrad to the mathemat packag mapl 8 , find it genuin use to scientist and educ . the develop waterloo mapl class as revolutionari includ a student calculu packag , and maplet . the first provid a high-level command set for calculu explor and plot ( remov the need to work with , say , plot primit ) . the second is a packag for hand-cod custom graphic user interfac ( gui ) use element such as check box , radio button , slider bar and pull-down menu . 
when call , a maplet launch a runtim java environ that pop up a window-analog to a java applet-to perform a program routin , if requir pass the result back to the mapl worksheet","ordered_present_kp":[199,256,283,428,228,543],"keyphrases":["student calculus package","Maplet","high-level command set","calculus exploration","GUIs","runtime Java environment","Maple 8 mathematics package","calculus plotting"],"prmu":["P","P","P","P","P","P","R","R"]} {"id":"1729","title":"Maintaining e-commerce","abstract":"E-commerce over the Web has created a relatively new type of information system. So it is hardly surprising that little attention has been given to the maintenance of such systems-and even less to attempting to develop them with future maintenance in mind. But there are various ways e-commerce systems can be developed to reduce future maintenance","tok_text":"maintain e-commerc \n e-commerc over the web ha creat a rel new type of inform system . so it is hardli surpris that littl attent ha been given to the mainten of such systems-and even less to attempt to develop them with futur mainten in mind . but there are variou way e-commerc system can be develop to reduc futur mainten","ordered_present_kp":[],"keyphrases":["e-commerce systems maintenance","Web systems"],"prmu":["R","R"]} {"id":"1847","title":"Conceptual modeling and specification generation for B2B business processes based on ebXML","abstract":"In order to support dynamic setup of business processes among independent organizations, a formal standard schema for describing the business processes is basically required. The ebXML framework provides such a specification schema called BPSS (Business Process Specification Schema) which is available in two standalone representations: a UML version, and an XML version. The former, however, is not intended for the direct creation of business process specifications, but for defining specification elements and their relationships required for creating an ebXML-compliant business process specification. For this reason, it is very important to support conceptual modeling that is well organized and directly matched with major modeling concepts. This paper deals with how to represent and manage B2B business processes using UML-compliant diagrams. The major challenge is to organize UML diagrams in a natural way that is well suited to the business process meta-model and then to transform the diagrams into an XML version. This paper demonstrates the usefulness of conceptually modeling business processes by prototyping a business process editor tool called ebDesigner","tok_text":"conceptu model and specif gener for b2b busi process base on ebxml \n in order to support dynam setup of busi process among independ organ , a formal standard schema for describ the busi process is basic requir . the ebxml framework provid such a specif schema call bpss ( busi process specif schema ) which is avail in two standalon represent : a uml version , and an xml version . the former , howev , is not intend for the direct creation of busi process specif , but for defin specif element and their relationship requir for creat an ebxml-compli busi process specif . for thi reason , it is veri import to support conceptu model that is well organ and directli match with major model concept . thi paper deal with how to repres and manag b2b busi process use uml-compli diagram . 
the major challeng is to organ uml diagram in a natur way that is well suit to the busi process meta-model and then to transform the diagram into an xml version . thi paper demonstr the use of conceptu model busi process by prototyp a busi process editor tool call ebdesign","ordered_present_kp":[36,61,0,19,142,272,764,1050,1020],"keyphrases":["conceptual modeling","specification generation","B2B business processes","ebXML","formal standard schema","Business Process Specification Schema","UML-compliant diagrams","business process editor","ebDesigner","meta model"],"prmu":["P","P","P","P","P","P","P","P","P","M"]} {"id":"1802","title":"Novel TCP congestion control scheme and its performance evaluation","abstract":"A novel self-tuning proportional and derivative (ST-PD) control based TCP congestion control scheme is proposed. The new scheme approaches the congestion control problem from a control-theoretical perspective and overcomes several Important limitations associated with existing TCP congestion control schemes, which are heuristic based. In the proposed scheme, a PD controller is employed to keep the buffer occupancy of the bottleneck node on the connection path at an ideal operating level, and it adjusts the TCP window accordingly. The control gains of the PD controller are tuned online by a fuzzy logic controller based on the perceived bandwidth-delay product of the TCP connection. This scheme gives ST-PD TCP several advantages over current TCP implementations. These include rapid response to bandwidth variations, insensitivity to buffer sizes, and significant improvement of TCP throughput over lossy links by decoupling congestion control and error control functions of TCP","tok_text":"novel tcp congest control scheme and it perform evalu \n a novel self-tun proport and deriv ( st-pd ) control base tcp congest control scheme is propos . the new scheme approach the congest control problem from a control-theoret perspect and overcom sever import limit associ with exist tcp congest control scheme , which are heurist base . in the propos scheme , a pd control is employ to keep the buffer occup of the bottleneck node on the connect path at an ideal oper level , and it adjust the tcp window accordingli . the control gain of the pd control are tune onlin by a fuzzi logic control base on the perceiv bandwidth-delay product of the tcp connect . thi scheme give st-pd tcp sever advantag over current tcp implement . these includ rapid respons to bandwidth variat , insensit to buffer size , and signific improv of tcp throughput over lossi link by decoupl congest control and error control function of tcp","ordered_present_kp":[6,40,212,365,398,418,441,577,617,850],"keyphrases":["TCP congestion control scheme","performance evaluation","control-theoretical perspective","PD controller","buffer occupancy","bottleneck node","connection path","fuzzy logic controller","bandwidth-delay product","lossy links","self-tuning proportional-derivative control"],"prmu":["P","P","P","P","P","P","P","P","P","P","M"]} {"id":"1490","title":"Client satisfaction in a feasibility study comparing face-to-face interviews with telepsychiatry","abstract":"We carried out a pilot study comparing satisfaction levels between psychiatric patients seen face to face (FTF) and those seen via videoconference. Patients who consented were randomly assigned to one of two groups. One group received services in person (FTF from the visiting psychiatrist) while the other was seen using videoconferencing at 128 kbit\/s. 
One psychiatrist provided all the FTF and videoconferencing assessment and follow-up visits. A total of 24 subjects were recruited. Three of the subjects (13%) did not attend their appointments and two subjects in each group were lost to follow-up. Thus there were nine in the FTF group and eight in the videoconferencing group. The two groups were similar in most respects. Patient satisfaction with the services was assessed using the Client Satisfaction Questionnaire (CSQ-8), completed four months after the initial consultation. The mean scores were 25.3 in the FTF group and 21.6 in the videoconferencing group. Although there was a trend in favour of the FTF service, the difference was not significant. Patient satisfaction is only one component of evaluation. The efficacy of telepsychiatry must also be measured relative to that of conventional, FTF care before policy makers can decide how extensively telepsychiatry should be implemented","tok_text":"client satisfact in a feasibl studi compar face-to-fac interview with telepsychiatri \n we carri out a pilot studi compar satisfact level between psychiatr patient seen face to face ( ftf ) and those seen via videoconfer . patient who consent were randomli assign to one of two group . one group receiv servic in person ( ftf from the visit psychiatrist ) while the other wa seen use videoconferenc at 128 kbit \/ s. one psychiatrist provid all the ftf and videoconferenc assess and follow-up visit . a total of 24 subject were recruit . three of the subject ( 13 % ) did not attend their appoint and two subject in each group were lost to follow-up . thu there were nine in the ftf group and eight in the videoconferenc group . the two group were similar in most respect . patient satisfact with the servic wa assess use the client satisfact questionnair ( csq-8 ) , complet four month after the initi consult . the mean score were 25.3 in the ftf group and 21.6 in the videoconferenc group . although there wa a trend in favour of the ftf servic , the differ wa not signific . patient satisfact is onli one compon of evalu . the efficaci of telepsychiatri must also be measur rel to that of convent , ftf care befor polici maker can decid how extens telepsychiatri should be implement","ordered_present_kp":[0,43,70,208,824],"keyphrases":["client satisfaction","face-to-face interviews","telepsychiatry","videoconference","Client Satisfaction Questionnaire","psychiatric patient satisfaction","human factors","telemedicine","128 kbit\/s"],"prmu":["P","P","P","P","P","R","U","U","M"]} {"id":"1791","title":"The pedagogy of on-line learning: a report from the University of the Highlands and Islands Millennium Institute","abstract":"Authoritative sources concerned with computer-aided learning, resource-based learning and on-line learning and teaching are generally agreed that, in addition to subject matter expertise and technical support, the quality of the learning materials and the learning experiences of students are critically dependent on the application of pedagogically sound theories of learning and teaching and principles of course design. The University of the Highlands and Islands Project (UHIMI) is developing \"on-line learning\" on a large scale. These developments have been accompanied by a comprehensive programme of staff development. A major emphasis of the programme is concerned with ensuring that course developers and tutors are pedagogically aware. 
This paper reviews (i) what is meant by \"on-line learning\" in the UHIMI context (ii) the theories of learning and teaching and principles of course design that inform the staff development programme and (iii) a review of progress to date","tok_text":"the pedagogi of on-lin learn : a report from the univers of the highland and island millennium institut \n authorit sourc concern with computer-aid learn , resource-bas learn and on-lin learn and teach are gener agre that , in addit to subject matter expertis and technic support , the qualiti of the learn materi and the learn experi of student are critic depend on the applic of pedagog sound theori of learn and teach and principl of cours design . the univers of the highland and island project ( uhimi ) is develop \" on-lin learn \" on a larg scale . these develop have been accompani by a comprehens programm of staff develop . a major emphasi of the programm is concern with ensur that cours develop and tutor are pedagog awar . thi paper review ( i ) what is meant by \" on-lin learn \" in the uhimi context ( ii ) the theori of learn and teach and principl of cours design that inform the staff develop programm and ( iii ) a review of progress to date","ordered_present_kp":[4,134,155,195,263,455,616],"keyphrases":["pedagogy","computer-aided learning","resource-based learning","teaching","technical support","University of the Highlands and Islands Project","staff development","online learning","educational course design","distance education","Internet"],"prmu":["P","P","P","P","P","P","P","M","M","U","U"]} {"id":"1887","title":"Doubly invariant equilibria of linear discrete-time games","abstract":"The notion of doubly invariant (DI) equilibrium is introduced. The concept extends controlled and robustly controlled invariance notions to the context of two-person dynamic games. Each player tries to keep the state in a region of state space independently of the actions of the rival player. The paper gives existence conditions, criteria and algorithms for the determination of DI equilibria of linear dynamic games in discrete time. Two examples illustrate the results. The first one is in the area of fault-tolerant controller synthesis. The second is an application to macroeconomics","tok_text":"doubli invari equilibria of linear discrete-tim game \n the notion of doubli invari ( di ) equilibrium is introduc . the concept extend control and robustli control invari notion to the context of two-person dynam game . each player tri to keep the state in a region of state space independ of the action of the rival player . the paper give exist condit , criteria and algorithm for the determin of di equilibria of linear dynam game in discret time . two exampl illustr the result . the first one is in the area of fault-toler control synthesi . the second is an applic to macroeconom","ordered_present_kp":[0,28,147,196,269,341,516,574],"keyphrases":["doubly invariant equilibria","linear discrete-time games","robustly controlled invariance","two-person dynamic games","state space","existence conditions","fault-tolerant controller synthesis","macroeconomics"],"prmu":["P","P","P","P","P","P","P","P"]} {"id":"1714","title":"Hordes: a multicast based protocol for anonymity","abstract":"With widespread acceptance of the Internet as a public medium for communication and information retrieval, there has been rising concern that the personal privacy of users can be eroded by cooperating network entities. A technical solution to maintaining privacy is to provide anonymity. 
We present a protocol for initiator anonymity called Hordes, which uses forwarding mechanisms similar to those used in previous protocols for sending data, but is the first protocol to make use of multicast routing to anonymously receive data. We show this results in shorter transmission latencies and requires less work of the protocol participants, in terms of the messages processed. We also present a comparison of the security and anonymity of Hordes with previous protocols, using the first quantitative definition of anonymity and unlinkability. Our analysis shows that Hordes provides anonymity in a degree similar to that of Crowds and Onion Routing, but also that Hordes has numerous performance advantages","tok_text":"hord : a multicast base protocol for anonym \n with widespread accept of the internet as a public medium for commun and inform retriev , there ha been rise concern that the person privaci of user can be erod by cooper network entiti . a technic solut to maintain privaci is to provid anonym . we present a protocol for initi anonym call hord , which use forward mechan similar to those use in previou protocol for send data , but is the first protocol to make use of multicast rout to anonym receiv data . we show thi result in shorter transmiss latenc and requir less work of the protocol particip , in term of the messag process . we also present a comparison of the secur and anonym of hord with previou protocol , use the first quantit definit of anonym and unlink . our analysi show that hord provid anonym in a degre similar to that of crowd and onion rout , but also that hord ha numer perform advantag","ordered_present_kp":[0,24,76,172,210,318,353,466,535,761,841,851,892],"keyphrases":["Hordes","protocol","Internet","personal privacy","cooperating network entities","initiator anonymity","forwarding mechanisms","multicast routing","transmission latencies","unlinkability","Crowds","Onion Routing","performance"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P"]} {"id":"1751","title":"An adaptive time step procedure for a parabolic problem with blow-up","abstract":"In this paper we introduce and analyze a fully discrete approximation for a parabolic problem with a nonlinear boundary condition which implies that the solutions blow up in finite time. We use standard linear elements with mass lumping for the space variable. For the time discretization we write the problem in an equivalent form which is obtained by introducing an appropriate time re-scaling and then, we use explicit Runge-Kutta methods for this equivalent problem. In order to motivate our procedure we present it first in the case of a simple ordinary differential equation and show how the blow up time is approximated in this case. We obtain necessary and sufficient conditions for the blowup of the numerical solution and prove that the numerical blow-up time converges to the continuous one. We also study, for the explicit Euler approximation, the localization of blow-up points for the numerical scheme","tok_text":"an adapt time step procedur for a parabol problem with blow-up \n in thi paper we introduc and analyz a fulli discret approxim for a parabol problem with a nonlinear boundari condit which impli that the solut blow up in finit time . we use standard linear element with mass lump for the space variabl . for the time discret we write the problem in an equival form which is obtain by introduc an appropri time re-scal and then , we use explicit runge-kutta method for thi equival problem . 
in order to motiv our procedur we present it first in the case of a simpl ordinari differenti equat and show how the blow up time is approxim in thi case . we obtain necessari and suffici condit for the blowup of the numer solut and prove that the numer blow-up time converg to the continu one . we also studi , for the explicit euler approxim , the local of blow-up point for the numer scheme","ordered_present_kp":[3,34,103,155,239,443,808],"keyphrases":["adaptive time step procedure","parabolic problem","fully discrete approximation","nonlinear boundary condition","standard linear elements","Runge-Kutta methods","explicit Euler approximation"],"prmu":["P","P","P","P","P","P","P"]} {"id":"1609","title":"Modeling undesirable factors in efficiency evaluation","abstract":"Data envelopment analysis (DEA) measures the relative efficiency of decision making units (DMUs) with multiple performance factors which are grouped into outputs and inputs. Once the efficient frontier is determined, inefficient DMUs can improve their performance to reach the efficient frontier by either increasing their current output levels or decreasing their current input levels. However, both desirable (good) and undesirable (bad) factors may be present. For example, if inefficiency exists in production processes where final products are manufactured with a production of wastes and pollutants, the outputs of wastes and pollutants are undesirable and should be reduced to improve the performance. Using the classification invariance property, we show that the standard DEA model can be used to improve the performance via increasing the desirable outputs and decreasing the undesirable outputs. The method can also be applied to situations when some inputs need to be increased to improve the performance. The linearity and convexity of DEA are preserved through our proposal","tok_text":"model undesir factor in effici evalu \n data envelop analysi ( dea ) measur the rel effici of decis make unit ( dmu ) with multipl perform factor which are group into output and input . onc the effici frontier is determin , ineffici dmu can improv their perform to reach the effici frontier by either increas their current output level or decreas their current input level . howev , both desir ( good ) and undesir ( bad ) factor may be present . for exampl , if ineffici exist in product process where final product are manufactur with a product of wast and pollut , the output of wast and pollut are undesir and should be reduc to improv the perform . use the classif invari properti , we show that the standard dea model can be use to improv the perform via increas the desir output and decreas the undesir output . the method can also be appli to situat when some input need to be increas to improv the perform . 
the linear and convex of dea are preserv through our propos","ordered_present_kp":[39,93,122,193,314,352,480,549,558,661,772,801,24],"keyphrases":["efficiency evaluation","data envelopment analysis","decision making units","multiple performance factors","efficient frontier","current output levels","current input levels","production processes","wastes","pollutants","classification invariance property","desirable outputs","undesirable outputs","final product manufacture","linear programming","undesirable factor modeling"],"prmu":["P","P","P","P","P","P","P","P","P","P","P","P","P","R","M","R"]} {"id":"1922","title":"Trends in agent communication language","abstract":"Agent technology is an exciting and important new way to create complex software systems. Agents blend many of the traditional properties of AI programs - knowledge-level reasoning, flexibility, proactiveness, goal-directedness, and so forth - with insights gained from distributed software engineering, machine learning, negotiation and teamwork theory, and the social sciences. An important part of the agent approach is the principle that agents (like humans) can function more effectively in groups that are characterized by cooperation and division of labor. Agent programs are designed to autonomously collaborate with each other in order to satisfy both their internal goals and the shared external demands generated by virtue of their participation in agent societies. This type of collaboration depends on a sophisticated system of inter-agent communication. The assumption that inter-agent communication is best handled through the explicit use of an agent communication language (ACL) underlies each of the articles in this special issue. In this introductory article, we will supply a brief background and introduction to the main topics in agent communication","tok_text":"trend in agent commun languag \n agent technolog is an excit and import new way to creat complex softwar system . agent blend mani of the tradit properti of ai program - knowledge-level reason , flexibl , proactiv , goal-directed , and so forth - with insight gain from distribut softwar engin , machin learn , negoti and teamwork theori , and the social scienc . an import part of the agent approach is the principl that agent ( like human ) can function more effect in group that are character by cooper and divis of labor . agent program are design to autonom collabor with each other in order to satisfi both their intern goal and the share extern demand gener by virtu of their particip in agent societi . thi type of collabor depend on a sophist system of inter-ag commun . the assumpt that inter-ag commun is best handl through the explicit use of an agent commun languag ( acl ) underli each of the articl in thi special issu . in thi introductori articl , we will suppli a brief background and introduct to the main topic in agent commun","ordered_present_kp":[32,156,9,761,694,269,295,310,321,347],"keyphrases":["agent communication language","agent technology","AI programs","distributed software engineering","machine learning","negotiation","teamwork","social sciences","agent societies","inter-agent communication","KQML","semantics","conversations"],"prmu":["P","P","P","P","P","P","P","P","P","P","U","U","U"]} {"id":"1508","title":"Rats, robots, and rescue","abstract":"In early May, media inquiries started arriving at my office at the Center for Robot-Assisted Search and Rescue (www.crasar.org). 
Because I'm CRASAR's director, I thought the press was calling to follow up on the recent humanitarian award given to the center's founder, John Blitch, for successfully using small, backpackable robots at the World Trade Center disaster. Instead, I found they were asking me to comment on the \"roborats\" study in the 2 May 2002 Nature. In this study, rats with medial force brain implants underwent operant conditioning to force them into a form of guided behavior, one aspect of which was thought useful for search and rescue. The article's closing comment suggested that a guided rat could serve as both a mobile robot and a biological sensor. Although a roboticist by training, I'm committed to any technology that will help save lives while reducing the risk to rescuers. But rats?","tok_text":"rat , robot , and rescu \n in earli may , media inquiri start arriv at my offic at the center for robot-assist search and rescu ( www.crasar.org ) . becaus i 'm crasar 's director , i thought the press wa call to follow up on the recent humanitarian award given to the center 's founder , john blitch , for success use small , backpack robot at the world trade center disast . instead , i found they were ask me to comment on the \" roborat \" studi in the 2 may 2002 natur . in thi studi , rat with medial forc brain implant underw oper condit to forc them into a form of guid behavior , one aspect of which wa thought use for search and rescu . the articl 's close comment suggest that a guid rat could serv as both a mobil robot and a biolog sensor . although a roboticist by train , i 'm commit to ani technolog that will help save live while reduc the risk to rescuer . but rat ?","ordered_present_kp":[717,735,687,97],"keyphrases":["robot-assisted search and rescue","guided rat","mobile robot","biological sensor"],"prmu":["P","P","P","P"]}