abstract | authors | title | __index_level_0__
---|---|---|---|
A comprehensive framework for the analysis and synthesis of 3D human gait is presented. The framework consists of a realistic morphological representation of the human body involving 40 degrees of freedom and 17 body segments. Through the analysis of human gait, the joint reaction forces/moments can be estimated and parameters associated with postural stability can be quantified. The synthesis of 3D human gait is a complicated problem due to the synchronisation of a large number of joint variables. Herein, the framework is employed to reconstruct a dynamically balanced gait cycle and develop sets of reference trajectories that can be used for either the assessment of human mobility or the control of mechanical ambulatory systems. The gait cycle is divided into eight postural configurations based on particular gait events. Gait kinematic data is used to provide natural human movements. The balance stability analysis is performed with various ground reference points. The proposed reconstruction of the gait cycle requires two optimisation steps that minimise the error distance between evaluated and desired gait and balance constraints. The first step (quasi-static motion) is used to approximate the postural configurations to a region close to the second optimisation step target while preserving the natural movements of human gait. The second step (dynamic motion) considers a normal speed gait cycle and is solved using the spacetime constraint method and a global optimisation algorithm. An experimental validation of the generated reference trajectories is carried out by comparing the paths followed by 19 optical markers of a motion tracking system with the paths of the corresponding node points on the model. | ['Flavio Firmani', 'Edward J. Park'] | A framework for the analysis and synthesis of 3d dynamic human gait | 153,735 |
Mirror Surface Finishing of Silicon Wafer Edge Using Ultrasonic Assisted Fixed-Abrasive CMP (UF-CMP). | ['Yongbo Wu', 'Weiping Yang', 'Masakazu Fujimoto', 'br', 'Libo Zhou'] | Mirror Surface Finishing of Silicon Wafer Edge Using Ultrasonic Assisted Fixed-Abrasive CMP (UF-CMP). | 985,425 |
Given N request streams and L ≤ N LRU caches, the cache assignment problem asks to which cache each stream should be assigned in order to minimize the overall miss rate. An efficient solution to this problem is provided, based on characterizing each stream using the stack reference model and characterizing the interaction of the streams using a bursty stream model. It is shown that for Bernoulli (purely random) mixing of streams, the optimal cache assignment is to have one cache per stream. In practice streams are mixed in a way that is much "burstier" than can be represented by the Bernoulli model. Therefore a method is presented for superposition of bursty streams. The performance of the methods developed for bursty stream superposition and cache assignment are tested using trace data obtained from the database system DB2. The resulting cache assignment recommendations are then applied to the DB2 system, and considerable performance improvement is found to result. | ['Hanoch Levy', 'Ted Messinger', 'Robert J. T. Morris'] | The cache assignment problem and its application to database buffer management | 270,518 |
IBM products increasingly implement a service-oriented architecture (SOA), in which programmers build services, use services, and develop solutions that aggregate services. IBM Software Group middleware products and tools support the development and deployment of SOA solutions, and increasingly make functional interfaces between components and products visible through a service model. Software Group components will increasingly use SOA standards for intracomponent communications. Our move to SOA encompasses both the programming model and lower-level infrastructure software, for example, systems-management and storage-management application programming interfaces and functions. This paper concisely defines the IBM SOA programming model and the product architecture that supports it. We provide the motivation for our programming-model and design decisions. This paper also focuses on the architectural concepts that underlie our programming model and product architecture. | ['Donald F. Ferguson', 'Marcia L. Stockton'] | Service-oriented architecture: programming model and product architecture | 138,975 |
System identification of plants with binary-valued output observations is of importance in understanding modeling capability and limitations for systems with limited sensor information, establishing relationships between communication resource limitations and identification complexity, and studying sensor networks. This paper resolves two issues arising in such system identification problems. First, regression structures for identifying a rational model contain non-smooth nonlinearities, leading to a difficult nonlinear filtering problem. By introducing a two-step identification procedure that employs periodic signals, empirical measures, and identifiability features, rational models can be identified without resorting to complicated nonlinear searching algorithms. Second, by formulating a joint identification problem, we are able to accommodate scenarios in which noise distribution functions are unknown. Convergence of parameter estimates is established. Recursive algorithms for joint identification and their key properties are further developed. | ['Le Yi Wang', 'Gang George Yin', 'Ji-Feng Zhang'] | Joint identification of plant rational models and noise distribution functions using binary-valued observations | 391,135 |
Machine Learning has become a very popular approach in addressing problems in the Computational Biology and Bioinformatics area. In addition, multi-classifier systems have also gained popularity among researchers working in machine learning and applications for their ability to fuse together multiple models and obtain better overall accuracy and classification results. This talk is concerned with current issues in the design of multi-classifier systems and presents some multi-classifier developments for several bioinformatics problems. The talk will first present an overview and current status of machine learning methods in bioinformatics and computational biology. The talk will then bring in some important issues in building ensembles of classifiers, with a focus on the diversity and combination of individual classifiers. A few diversification and combination schemes are presented along with guidelines for the selection of different training paradigms and performance metrics, based on the properties and distribution of the data. The presentation then proceeds to introduce our computational intelligence based multi-classifier developments for solving several bioinformatics problems, such as recognizing sequences in DNA strings, micro-array gene expression data analysis, and protein structure prediction. The talk will also present related machine learning issues in developing such systems, such as learning from imbalanced datasets and using appropriate performance metrics for model selection and evaluation. The presented approaches and results will advocate that ensembles of classifiers can be used as effective modelling tools in solving challenging bioinformatics problems. | ['Vasile Palade'] | Ensembles of classifiers and their aplication to bioinformatics problems | 340,874 |
This study was conducted to construct a model of a ubiquitous hub for digital natives. Respondents were 250 digital-native generation students from a higher learning institution in Malaysia. The results of the regression, structural equation model and path analysis revealed that multitasking as well as gratification and reward nurture digital natives to learn in a ubiquitous computing environment. The digital natives' characteristics of reliance on graphics for communication and attitude toward technology are rejected from the model based on the statistical evidence. A test of the relationship between multitasking and gratification and reward via the structural equation model shows that they influence each other. Conclusions on the set-up of a ubiquitous hub for digital natives based on the derived model are discussed. | ['Mohd Shafie Rosli', 'Nor Shela Saleh', 'Baharuddin Aris', 'Maizah Hura Ahmad', 'Shaharuddin Salleh'] | Ubiquitous Hub for Digital Natives | 646,816 |
This paper presents a chord recognition method from music signals using chroma vectors and musical knowledge known as “Doubly Nested Circle of Fifths (DNCOF)”. DNCOF represents the relationships of major and minor chords where the neighboring two triads are similar. We obtain a novel feature from chroma vectors by mapping them onto two-dimensional DNCOF coordinate, which we call “DNCOF vectors”. We expect that the DNCOF vectors can contribute to correcting false recognition obtained by the chroma vectors when their mapped positions are apart from one another in the DNCOF coordinate. In this research, we evaluated our proposal using the Beatles' datasets and showed its effectiveness. | ['Aiko Uemura', 'Jiro Katto'] | Chord recognition using Doubly Nested Circle of Fifths | 124,348 |
Hadoop has become the de facto standard for Big Data analytics, especially for workloads that use the MapReduce (M/R) framework. However, the lack of network awareness of the default MapReduce resource manager in Hadoop can cause unbalanced job scheduling, network bottleneck, and eventually increase the Hadoop run time if Hadoop nodes are clustered in several geographically distributed locations. In this paper, we present an application-aware network approach using software-defined networking (SDN) for distributed Hadoop clusters. We develop the SDN applications for this environment that consider network topology discovery, traffic monitoring, and flow rerouting in addition to loop avoidance mechanisms. | ['Shuai Zhao', 'Ali Sydney', 'Deep Medhi'] | Building Application-Aware Network Environments using SDN for Optimizing Hadoop Applications | 863,599 |
In this letter, we address the design of fractionally spaced adaptive equalizers when the input signal is sampled with noninteger, subsymbol sampling. We consider the problem of joint equalization and sample rate conversion and derive a stochastic gradient-based weight update algorithm for the equalizer. This enables us to decouple the equalization of channel impairments from that of fixed (periodic) distortion arising from noninteger, subsymbol sampling. This decoupling leads to a novel low-complexity architecture for the equalizer that shows superior or equal performance as compared to the existing architectures with higher complexity. | ['S. Faisal A. Shah', 'Lei Wang', 'Chuandong Li', 'Zhuhong Zhang'] | Low-Complexity Design of Noninteger Fractionally Spaced Adaptive Equalizers for Coherent Optical Receivers | 838,305 |
The main objective of this paper is to design and develop a self-adaptable service-oriented architecture (SASOA) for providing reliable composite multimedia service through policy-based actions. The distributed multimedia services deployed using service-oriented architecture (SOA) can be accessed in heterogeneous environments that are prone to changes during runtime. To provide reliable multimedia services, a powerful self-adaptable architecture with dynamic compositions of multimedia services is necessary that adapts at run time and reacts to the environment. Adaptability in this proposed architecture is achieved by enabling the service providers to monitor, analyse and act on the defined policies that support customisation of composition of multimedia services. The media service monitor (MSM) observes the business and quality metrics associated with the media services at run-time. The Adaptive Media Service Manager (AMSM) takes corrective actions based on the monitored results, through the policies defined as an extension of WS-Policy. The effectiveness of the proposed SASOA has been evaluated on a Dynamic Composite Real-time Video-on-Demand Web Service (DCRVWS) for a maximum of 200 simultaneous clients, and the results were analysed. The analysis of results shows that the proposed architecture provides better improvement on reliability, response time and user satisfaction. | ['G. Maria Kalavathy', 'N. Edison Rathinam', 'P. Seethalakshmi'] | Policy-based self-adaptable service-oriented architecture for providing reliable composite multimedia service | 363,348 |
Track on Intelligent Computing and Applications: Selected papers from the 2012 International Workshop on Information, Intelligence and Computing (IWIIC 2012) | ['Shifei Ding', 'Zhongzhi Shi'] | Track on Intelligent Computing and Applications: Selected papers from the 2012 International Workshop on Information, Intelligence and Computing (IWIIC 2012) | 999,516 |
The Open Archives Initiative (OAI) Protocol for Metadata Harvesting (PMH) facilitates efficient interoperability between digital collections, in particular by enabling service providers to construct, with relatively modest effort, search portals that present aggregated metadata to specific communities. This paper describes the experiences of the University of Illinois at Urbana-Champaign Library as an OAI service provider. We discuss the creation of a search portal to an aggregation of metadata describing cultural heritage resources. We examine several key challenges posed by the aggregated metadata and present preliminary findings of a pilot study of the utility of the portal for a specific community (student teachers). We also comment briefly on the potential for using text analysis tools to uncover themes and relationships within the aggregated metadata. | ['Sarah L. Shreeves', 'Christine M. Kirkham', 'Joanne Kaczmarek', 'Timothy W. Cole'] | Utility of an OAI service provider search portal | 268,059 |
This paper proposes a critical survey of crowd analysis techniques using visual and non-visual sensors. Automatic crowd understanding has a massive impact on several applications including surveillance and security, situation awareness, crowd management, public space design, intelligent and virtual environments. In case of emergency, it enables practical safety applications by identifying crowd situational context information. This survey identifies different approaches as well as relevant work on crowd analysis by means of visual and non-visual techniques. Multidisciplinary research groups are addressing crowd phenomenon and its dynamics ranging from social, and psychological aspects to computational perspectives. The possibility to use smartphones as sensing devices and fuse this information with video sensors data, allows to better describe crowd dynamics and behaviors. Eventually, challenges and further research opportunities with reference to crowd analysis are exposed. | ['Muhammad Irfan', 'Lucio Marcenaro', 'Laurissa N. Tokarchuk'] | Crowd analysis using visual and non-visual sensors, a survey | 997,825 |
Robot-object contact perception using symbolic temporal pattern learning | ['Nawid Jamali', 'Petar Kormushev', 'Darwin G. Caldwell'] | Robot-object contact perception using symbolic temporal pattern learning | 99,184 |
Recent years have seen the advent of new types of automotive antennas, such as blade or 'shark-fin' antennas and conformal planar roof-mounted antennas. In many cases it is desirable to paint these antennas to improve the appearance of the vehicle. In this communication we present an investigation of the effect that both metallic and non-metallic two-pack polyurethane paint has on a structure radiating at approximately 1.5 GHz (GPS L1-band), with a particular emphasis on the impedance bandwidth and radiation performance. | ['Brendan Pell', 'Wayne S. T. Rowe', 'Edin Sulic', 'Kamran Ghorbani', 'Sabu John', 'Rahul Gupta', 'Kefei Zhang', 'B Hughes'] | Experimental Study of the Effect of Paint on Embedded Automotive Antennas | 331,225 |
Several new wait-free object-sharing schemes for real-time uniprocessors and multiprocessors are presented. These schemes have characteristics in common with the priority inheritance and priority ceiling protocols, but are nonblocking and implemented at the user level. In total, six new object-sharing schemes are proposed: two for uniprocessors and four for multiprocessors. Breakdown utilization experiments are presented that show that the multiprocessor schemes entail less overhead than lock-based schemes. | ['James H. Anderson', 'Rohit Jain', 'Srikanth Ramamurthy'] | Wait-free object-sharing schemes for real-time uniprocessors and multiprocessors | 226,395 |
This paper describes the construction of an application ontology for the legal domain. This ontology, covering the semantic content of jurisprudence decisions, can be deployed at several levels in our system for decision search: improving the results of decision structuring, facilitating query formulation when accessing the system, and finally optimizing the search of decisions. We begin by presenting the methodology we have adopted for constructing the ontology from textual jurisprudence decisions. We concentrate on the conceptualization stage by presenting our formal approach based on a top-level ontology, DOLCE, and a core legal ontology. Then we detail the deployment of the ontology in JDSM, the decision-structuring methodology that we have proposed. | ['Karima Dhouib', 'Faiez Gargouri'] | Legal application ontology in Arabic | 915,272 |
In enhanced distributed channel access (EDCA) protocol, small contention window (CW) sizes are used for frequent channel access by high-priority traffic (such as voice). But these small CW sizes, which may be suboptimal for a given network scenario, can introduce more packet collisions, and thereby, reduce overall throughput. This paper proposes enhanced collision avoidance (ECA) scheme for AC_VO access category queues present in EDCA protocol. The proposed ECA scheme alleviates intensive collisions between AC_VO queues to improve voice throughput under the same suboptimal yet necessary (small size) CW restrictions. The proposed ECA scheme is studied in detail using Markov chain numerical analysis and simulations carried out in NS-2 network simulator. The performance of ECA scheme is compared with original (legacy) EDCA protocol in both voice and multimedia scenarios. Also mixed scenarios containing legacy EDCA and ECA stations are presented to study their coexistence. Comparisons reveal that ECA scheme improves voice throughput performance without seriously degrading the throughput of other traffic types. | ['Khalim Amjad Meerja', 'Abdallah Shami'] | Analysis of Enhanced Collision Avoidance Scheme Proposed for IEEE 802.11e-Enhanced Distributed Channel Access Protocol | 29,132 |
Performance Evaluation of Teleoperation for Manipulating Micro Objects Using Two-Fingered Micro Hand. | ['Kenji Inoue', 'Daisuke Nishi', 'Tomohito Takubo', 'Tamio Tanikawa', 'Tatsuo Arai'] | Performance Evaluation of Teleoperation for Manipulating Micro Objects Using Two-Fingered Micro Hand. | 994,822 |
Shared Memory Synchronization. | ['Gadi Taubenfeld'] | Shared Memory Synchronization. | 790,081 |
The BER performance of a turbo product code (TPC) based space-time block coding (STBC) wireless system has been investigated. With the proposed system, both the good error correcting capability of TPC and the concurrent large diversity gain characteristic of STBC can be achieved. The BER upper bound has been derived taking BPSK modulation as an example. The simulation results show that the proposed system with the concatenated codes outperforms the one with only TPC or STBC and other reported schemes that concatenate STBC with convolutional Turbo codes or trellis codes. | ['Yinggang Du', 'Kam Tai Chan'] | Enhanced space-time block coded systems by concatenating turbo product codes | 520,992 |
The paper concerns task scheduling in dynamic SMP clusters based on the notion of moldable computational tasks. Such tasks have been used as atomic elements in program scheduling algorithms with a guarantee of schedule length. For program execution, a special shared memory system architecture is used. It is based on dynamic processor clusters, organized around shared memory modules by switching of processors between memory module busses. Fast shared data transfers between processors inside such clusters can be performed through data reads on the fly. The dynamic SMP clusters are implemented inside system on chip (SoC) modules additionally connected by a central global network. A task scheduling algorithm is presented for program macro dataflow graphs for execution in the assumed architecture. The algorithm first identifies a set of moldable tasks in a given program graph. Next, this set is scheduled using a 2-phase algorithm including allotment of resources to moldable tasks and final list scheduling, with a guarantee of schedule length. The complete algorithm has been implemented as a program package and examined using simulated execution of scheduled program graphs. | ['Lukasz Masko', 'Grégory Mounié', 'Denis Trystram', 'Marek Tudruj'] | Program Graph Structuring for Execution in Dynamic SMP Clusters Using Moldable Tasks | 471,734 |
XML and other semi-structured data can be represented by a graph model. The paths in a data graph are used as a basic constructor of a query. In particular, by using patterns on paths, a user can formulate more expressive queries. Patterns in a path enlarge the search space of a data graph, and current research on indexing semi-structured data focuses on reducing the search space. However, the existing indexes cannot reduce the search space when a data graph has some references. In this paper, we introduce a partitioning technique for all paths in a data graph and an index graph which can effectively find appropriate path partitions for a path query with patterns. | ['Jongik Kim', 'Hyoung-Joo Kim'] | A partition index for XML and semi-structured data | 526,913 |
In this paper, stabilization of the distributed parameter system (DPS) with time delay is studied using Galerkin's method and fuzzy control. With the help of Galerkin's method, the dynamics of DPS with time delay can be first converted into a group of low-order functional ordinary differential equations, which will be used for design of the robust fuzzy controller. The fuzzy controller designed can guarantee exponential stability of the closed-loop DPS. Some sufficient conditions are derived for the stabilization together with the linear matrix inequality design approach. The effectiveness of the proposed control design methodology is demonstrated in numerical simulations. | ['Kun Yuan', 'Han-Xiong Li', 'Jinde Cao'] | Robust Stabilization of the Distributed Parameter System With Time Delay via Fuzzy Control | 465,727 |
Wireless communication technologies enabled the possibility of building spontaneous networks between two or more users to exchange data. The problem in the establishment of such networks lies in the configuration that has to be agreed on and in the way the communicating parties can be identified. In prior publications we have presented our vision of convenient networking in a heterogeneous environment. In this paper, we describe an implementation that offers a dashboard-like tool, which can, with the help of a cellular network, ease the formation of spontaneous networks among heterogeneous nodes. Furthermore, the provided implementation is able to secure the acquired communication links in the spontaneous network and therefore protect the exchanged information against possible abuse. | ['Marc Danzeisen', 'Torsten Braun', 'Simon Winiker', 'Daniel Rodellar'] | Implementation of a cellular framework for spontaneous network establishment | 287,964 |
E-Portfolios are a new type of software, and it is still relatively unclear which functions are obligatory (that is, which functions constitute characteristic features) and which functions are just optional (“nice to have”). This article describes the concept and the preliminary results of a research project that was conducted to evaluate E-Portfolio software, and aims at providing decision guidance for implementing E-Portfolios in higher education, first and foremost from the pedagogical perspective. Which recommendations can be made to an institution that now wants to implement electronic portfolios with a certain objective? | ['Klaus Himpsl', 'Peter Baumgartner'] | Evaluation of E-Portfolio Software | 13,456 |
A Low-Power Baseband Filter Based on a 1.2-V 65-nm CMOS Bulk-Driven Linear Tunable Transconductor | ['Trinidad Sanchez-Rodriguez', 'Juan Antonio Gómez Galán', 'Fernando Muñoz Chavero', 'R.G. Carvajal'] | A Low-Power Baseband Filter Based on a 1.2-V 65-nm CMOS Bulk-Driven Linear Tunable Transconductor | 865,198 |
The nonlinear unconditionally stable energy-conserving integration method (ECM) is a new method for solving a continuous equation of motion. To our knowledge, there is still no report on its application to hybrid tests. Aiming to explore its effect on hybrid tests, a nonlinear beam-column element program is developed for computation. The program contains both the ECM and the average acceleration method (AAM). The comparison of the hybrid test results with these two methods validates the effectiveness of the ECM in hybrid simulation. We found that the energy error of the hybrid test using the ECM is less than that of the AAM. In addition, a new iteration strategy with a reduction factor is presented to avoid overshooting phenomena during the iteration process with the finite element program. | ['Tianlin Pan', 'Bin Wu', 'Yongsheng Chen', 'Guoshan Xu'] | Application of the Energy-Conserving Integration Method to Hybrid Simulation of a Full-Scale Steel Frame | 793,475 |
Within a fairly weak formal theory of numbers and number-theoretic sequences we give a direct proof of the contrapositive of countable finite choice for decidable predicates. Our proof is at the same time a proof of a stronger form of it. In that way we think that we improve a proof given by Diener and Schuster. Within the same theory we prove properties of inhabited sets of naturals satisfying the general contrapositive of countable choice. Extending our base theory with the continuity principle, we prove that each such set is finite. In that way we generalize a result of Veldman, who proved, actually within the same extension, the finiteness of these sets, supposing additionally their decidability. | ['Iosif Petrakis'] | The Contrapositive of Countable Choice for Inhabited Sets of Naturals | 563,273 |
We present a fully automatic method to detect doctored digital images. Our method is based on a rigorous consistency checking principle of physical characteristics among different arbitrarily shaped image regions. In this paper, we specifically study the camera response function (CRF), a fundamental property in cameras mapping input irradiance to output image intensity. A test image is first automatically segmented into distinct arbitrarily shaped regions. One CRF is estimated from each region using geometric invariants from locally planar irradiance points (LPIPs). To classify a boundary segment between two regions as authentic or spliced, CRF-based cross fitting and local image features are computed and fed to statistical classifiers. Such segment level scores are further fused to infer the image level authenticity. Tests on two data sets reach performance levels of 70% precision and 70% recall, showing promising potential for real-world applications. Moreover, we examine individual features and discover the key factor in splicing detection. Our experiments show that the anomaly introduced around splicing boundaries plays the major role in detecting splicing. Such finding is important for designing effective and efficient solutions to image splicing detection. | ['Yu-Feng Hsu', 'Shih-Fu Chang'] | Camera Response Functions for Image Forensics: An Automatic Algorithm for Splicing Detection | 460,135 |
The dynamic evaluation tree is a method of visualizing expression evaluation that annotates a program’s source code with expression results. It is intended to reduce students’ visual attention problems by removing the need to alternate between disparate source code and expression evaluation windows. We generalise the dynamic evaluation tree to support arbitrary expressions in the C programming language, and present the first ever implementation for a novice-focused program visualization and debugging tool. | ['Matthew Heinsen Egan', 'Chris McDonald'] | Dynamic evaluation trees for novice C programmers | 778,697 |
Spectrum has been sold at millions of dollars per megahertz through spectrum auctions. The staggering price hinders small network providers from becoming auction winners. Inspired by the group buying service on the Internet, group buying strategy has been introduced into the design of spectrum auctions to increase the buying power of small network providers. In this paper, we propose two truthful group buying auctions, namely, $\mathsf{TRUBA}$ and $\mathsf{TRUBA}^{+}$ , to take advantage of the collective buying power of secondary users (SUs) within each secondary network (SN). We carefully design the budget extraction for each secondary access point (SAP) within the SN to maximize the budget collected from the SUs. In addition, we allow the primary user (PU) to assign its channels strategically to boost the chance of successful transactions. These two features together empower $\mathsf{TRUBA}$ and $\mathsf{TRUBA}^{+}$ to significantly improve system performance, as compared with the existing group buying auction, in terms of the number of successful transactions (up to 16 times in the evaluation results), the number of winning SUs (up to 21 times), the average utility of the SUs (up to 19 times), and the utility of SAPs (up to 85 times). In $\mathsf{TRUBA}^{+}$ , the utility of the PU is improved by up to 44 times. | ['Dejun Yang', 'Guoliang Xue', 'Xiang Zhang'] | Group Buying Spectrum Auctions in Cognitive Radio Networks | 700,694 |
This paper proposes a TSK-type fuzzy controller (TFC) with a two-strategy reinforcement group-cooperation-based symbiotic evolution (TSR-GCSE) for solving various control problems. The TSR-GCSE proposes a two-strategy reinforcement (TSR) signal designed to improve the performance of the traditional reinforcement signal design. Moreover, the TSR-GCSE differs from traditional symbiotic evolution in that each population in the TSR-GCSE method is divided into several groups. Each group represents a set of chromosomes that belongs to a fuzzy rule and can cooperate with other groups to generate better chromosomes by using an elites-based compensation crossover strategy (ECCS). The illustrative examples show that the proposed method requires fewer time steps and less CPU time than other existing methods. | ['Sheng-Fuu Lin', 'Jyun-Wei Chang', 'Yu-Bi Hong', 'Yung-Chi Hsu'] | Two-Strategy reinforcement group cooperation based symbiotic evolution for TSK-type fuzzy controller design | 79,521 |
Distributed metadata management is an important issue in the design and implementation of Data Grid. The key challenge lies in the strategies of metadata synchronization and the representation of the distributed metadata. We have designed a Hierarchical Bloom Filter, which consists of two level Bloom filters, to facilitate the metadata management. A Recent Bloom Filter at the top level is based on the list of recent accessed files while a Summary Bloom Filter at the bottom level represents the set of entire files. Furthermore, we propose a novel update scheme to make Recent Bloom Filters synchronized among metadata servers. Each metadata server could use the Hierarchical Bloom Filters to reduce the update frequency and the network overhead. The experimental results show that the Hierarchical Bloom Filters improve the performance and scalability of Data Grid markedly. | ['Shihua Chen', 'Xiaomeng Huang', 'Pengzhi Xu', 'Weimin Zheng'] | Distributed Metadata Management Based on Hierarchical Bloom Filters in Data Grid | 254,583 |
Although the continuous HMM (CHMM) technique seems to be the most flexible and complete tool for speech modeling, it is not always used for the implementation of speech recognition systems due to several problems related to training and computational complexity. Besides, the superiority of continuous models over other well-known types of HMMs, such as discrete (DHMM) or semicontinuous (SCHMM) models, or multiple vector quantization (MVQ) models, a new type of HMM modeling, is not clear. The authors propose a new variant of HMM models, the SCMVQ HMM (semicontinuous multiple vector quantization HMM), that uses one VQ codebook per recognition unit and several quantization candidates. Formally, SCMVQ modeling is the closest one to CHMM, although requiring less computation than SCHMMs. Besides, the authors show that SCMVQs can obtain better recognition results than DHMMs, SCHMMs or MVQs. | ['Antonio M. Peinado', 'José C. Segura', 'Antonio J. Rubio', 'M.C. Benitez'] | Using multiple vector quantization and semicontinuous hidden Markov models for speech recognition | 493,493 |
Call for papers: Special issue of Journal of Parallel and Distributed Computing: Heterogeneity in parallel and distributed computing | ['Alexey L. Lastovetsky'] | Call for papers: Special issue of Journal of Parallel and Distributed Computing: Heterogeneity in parallel and distributed computing | 611,464 |
The self-organizing map (SOM) is an unsupervised neural network approach that reduces a high-dimensional data set to a representative and compact two-dimensional grid. In so doing, a SOM reveals emergent clusters within the data. Research has shown that SOMs lend themselves to visual and computational analysis for exploratory and data mining purposes. However, an important requirement for many SOM interpretations is the characterization of the map's emergent clusters. This process is often addressed by either a manual or automated map neuron labeling approach. This paper discusses techniques for the labeling of the unsupervised, supervised and semi-supervised variants of the SOM, and proposes some new methods. It also presents empirical results characterizing the performance of two automated labeling approaches for fully unsupervised SOMs when applied to example classification of experimental data sets. | ['W.S. van Heerden', 'A.P. Engelbrecht'] | A comparison of map neuron labeling approaches for unsupervised self-organizing feature maps | 124,906 |
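One common automated approach to labeling unsupervised SOM neurons is example-centric majority voting: map each labeled example to its best-matching unit (BMU), then give each neuron the most frequent label among the examples mapped to it. The sketch below is a generic illustration of that idea, not the paper's implementation; the toy prototypes and data are invented.

```python
import numpy as np
from collections import Counter

def label_som_neurons(prototypes, data, labels):
    """Assign each labeled example to its BMU, then label each neuron
    by majority vote. `prototypes` is (n_neurons, dim); neurons that
    receive no examples are labeled None."""
    votes = {i: [] for i in range(len(prototypes))}
    for x, y in zip(data, labels):
        bmu = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
        votes[bmu].append(y)
    return {i: Counter(v).most_common(1)[0][0] if v else None
            for i, v in votes.items()}

# Toy 1-D example: two prototype vectors, four labeled points.
prototypes = np.array([[0.0], [1.0]])
data = np.array([[0.1], [0.2], [0.9], [1.1]])
labels = ["low", "low", "high", "high"]
print(label_som_neurons(prototypes, data, labels))  # → {0: 'low', 1: 'high'}
```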
In Service Oriented Architecture (SOA), web services may span several sites or logical tiers, each responsible for some part of the service. Most services need to be highly reliable and should allow no data corruption. A known problem in distributed systems that may lead to data corruption or inconsistency is the partition problem, also known as the split-brain phenomenon. A split-brain occurs when a network, hardware, or software malfunction breaks a cluster of computers into several separate sub-clusters that reside side by side and are not aware of each other. When, during a session, two or more of these sub-clusters serve the same client, the data may become inconsistent or corrupted. ITRA - Inter Tier Relationship Architecture [1] enables web services to transparently recover from multiple failures in a multi-tier environment and to achieve continuous availability. However, the ITRA protocol does not handle partitions. In this paper we propose an extension to ITRA that supports continuous availability under partitions. Our unique approach, discussed in this paper, deals with partitions in multi-tier environments using the collaboration of neighboring tiers. | ['Aviv Dagan', 'Eliezer Dekel'] | ITRA under partitions | 110,421 |
Digitally-Enabled Service Transformation (DEST) projects in public sector institutions are viewed as a choice of strategic response towards changes in policy. Such transformation can disrupt institutional stability and legitimacy and result in failure if the complex institutional setting of the public sector is not comprehended in the change-institutionalisation effort. Through a multiple case enquiry, this study examines how institutional pressures contribute towards the emergence of DEST in public agencies and how newly introduced transformation is implemented and diffused within the institutional setting. The findings highlight that, as a form of technology-driven change, DEST is characterised and shaped predominantly by continuous interplay with institutional elements, and that the impact of these interactions defines the institutionalisation, deinstitutionalisation and re-institutionalisation of DEST. The ability to recognise such stages and provide the required support will determine a public institution's ability to effectively manage DEST and attain its strategic goals. | ['Vishanth Weerakkody', 'Amizan Omar', 'Ramzi El-Haddadeh', 'Moaman Al-Busaidy'] | Digitally-enabled service transformation in the public sector: The lure of institutional pressure and strategic response towards change | 928,622 |
Tagging learning resources in repositories or web portals offers a way to meaningfully describe these resources. The more tags there are, however, the more difficult it is to find one's way around the repository, especially when they are user-generated free-text tags. This paper therefore presents a visualisation of tag clusters based on higher-order co-occurrences that allows users of such repositories a plain but simple way of exploring them in an intuitive manner. | ['Katja Niemann', 'Sarah Leon Rojas', 'Martin Wolpers', 'Maren Scheffel', 'Hendrik Drachsler', 'Marcus Specht'] | Getting a grasp on tag collections by visualising tag clusters based on higher-order co-occurrences | 745,223 |
Many studies have been conducted to evaluate the benefits of using layered video coding schemes as a means to improve the robustness of video communications systems. In this paper, we study a frame-aware nonlinear layering scheme for the transport of DCT-based video over packet-switched networks. This scheme takes into account the relevance of the different elements composing the encoded video sequence. Through a detailed study of a large set of video streams, we show that by properly tuning the encoding parameters, it is feasible to gracefully degrade or even maintain the video quality while reducing the amount of data representing the video sequence. We then provide the major guidelines for properly tuning the encoding parameters, setting the basis for the development of more robust video communications systems. | ['Pedro Cuenca', 'Luis Orozco-Barbosa', 'Francisco M. Delicado', 'Antonio Garrido'] | Breakpoint tuning in DCT-based nonlinear layered video codecs | 101,844 |
In this paper we present a compact and efficient method for the calculation of the channel response of indoor wireless optical channels for multi-wavelength signals. It is based on the Modified Montecarlo Algorithm but allows not only processing composite signals as used in VLC transmission with white or RGB LED emitters, but also modeling reflections in which the wavelength components of the incoming signals are modified, as is the case with phosphor materials or wavelength-dependent reflecting surfaces. It also allows modeling not only Lambertian reflections but also non-Lambertian, quasi-specular surfaces, such as plastic or metallic furniture. The algorithm uses a wavelength transformation matrix that can be defined for each reflecting material, and even for each incidence angle. | ['A. M. Ramirez-Aguilera', 'Jose Martin Luna-Rivera', 'V. Guerra', 'J. Rabadan', 'R. Perez-Jimenez', 'F.J. Lopez-Hernandez'] | Multi-wavelength modelling for VLC indoor channels using Montecarlo simulation | 891,942 |
We describe our software system enabling a tight integration between vision and control modules on complex, high-DOF humanoid robots. This is demonstrated with the iCub humanoid robot performing visual object detection, reaching and grasping actions. A key capability of this system is reactive avoidance of obstacle objects detected from the video stream while carrying out reach-and-grasp tasks. The subsystems of our architecture can independently be improved and updated, for example, we show that by using machine learning techniques we can improve visual perception by collecting images during the robot’s interaction with the environment. We describe the task and software design constraints that led to the layered modular system architecture. | ['Jürgen Leitner', 'Simon Harding', 'A. Förster', 'Peter Corke'] | A Modular Software Framework for Eye–Hand Coordination in Humanoid Robots | 778,004 |
A substantial portion of information flow in the brain is directed top-down, from high processing areas downwards. Signals of this sort are regarded as conveying prior expectations, biasing the processing and eventual perception of incoming stimuli. In this perspective we describe a framework of top-down processing in the visual system in which predictions on the identity of objects in sight aid in their recognition. Focus is placed, in particular, on a relatively uncharted ramification of this framework, that of the fate of initial predictions that are eventually rejected during the process of selection. We propose that such predictions are rapidly inhibited in the brain after a competing option has been selected. Empirical support, along with behavioral, neuronal and computational aspects of this proposal are discussed, and future directions for related research are offered. | ['Amir Tal', 'Moshe Bar'] | The proactive brain and the fate of dead hypotheses | 78,483 |
In the modern world, time has become a precious resource. Therefore, different strategies and techniques are constantly being employed in all fields of life to save every bit of time. Increasingly, many such applications involve wireless sensor networks. A highly promising system that can be made significantly more efficient using WSNs is an elevator system. There have been numerous attempts to improve the serving efficiency of the elevator system over the course of time. This paper proposes to utilize the elevator system in a more productive manner so that more people can be served in less time. Hence, people will be able to spend this valuable time on other important matters rather than waiting for the elevator. To achieve this goal, we implement the elevator system based on a wireless ad-hoc network of intelligent floors (equipped with sensors) which can communicate with each other in a multi-hop fashion. In this way, every floor is aware of the traffic conditions, i.e., the number of upward/downward requesting passengers waiting at every other floor and the elevator positions in real time, and hence efficient decisions can be made about where to direct/stop the elevator. | ['Hamza Ijaz Abbasi', 'Abdul Jabbar Siddiqui'] | Implementation of smart elevator system based on wireless multi-hop AdHoc sensor networks | 29,965 |
The contributions of leading scientists, such as Nobel Prize winners often play an important role in the progress of mankind. In this article, we propose new indices to recognize foundational work in science. Based on case studies of publications by 2016 Nobel Prize winners we make a distinction between two types of fundamental contributions. In a metaphoric way we refer to them as directly igniting or sparking. Our work contains an important message for research evaluation. Besides short-term evaluations it is also important to perform longer term evaluations, otherwise work of Nobel class may fall under the radar and is not rewarded according to its scientific value. It is further suggested that scientometric investigations should not overlook transitional characteristics of scientific progress. | ['Xiaojun Hu', 'Ronald Rousseau'] | Nobel Prize winners 2016: Igniting or sparking foundational publications? | 960,387 |
Limits of Greedy Approximation Algorithms for the Maximum Planar Subgraph Problem | ['Markus Chimani', 'Ivo Hedtke', 'Tilo Wiedera'] | Limits of Greedy Approximation Algorithms for the Maximum Planar Subgraph Problem | 916,193 |
We consider the problem of scheduling jobs with given start and finish times over two classes of multi-user channels, namely Multiple Access Channels and Degraded Broadcast Channels, and derive necessary and sufficient conditions for feasible scheduling of the jobs. | ['Dinkar Vasudevan', 'Vijay G. Subramanian', 'Douglas J. Leith'] | Scheduling jobs with hard deadlines over Multiple Access and Degraded Broadcast Channels | 44,255 |
This work addresses the problem of single robot coverage and exploration in an environment with the goal of finding a specific object previously known to the robot. As limited time is a constraint of interest we cannot search from an infinite number of points. Thus, we propose a multi-objective approach for such search tasks in which we first search for a good set of positions to place the robot sensors in order to acquire information from the environment and to locate the desired object. Given the interesting properties of the Generalized Voronoi Diagram, we restrict the candidate search points along this roadmap. We redefine the problem of finding these search points as a multi-objective optimization one. NSGA-II is used as the search engine and ELECTRE I is applied as a decision making tool to decide among the trade-off alternatives. We also solve a Chinese Postman Problem to optimize the path followed by the robot in order to visit the computed search points. Simulation results show a comparison between the solution found by our method and solutions defined by other known approaches. Finally, a real robot experiment indicates the applicability of our method in practical scenarios. | ['Kossar Jeddisaravi', 'Reza Javanmard Alitappeh', 'Luciano C. A. Pimenta', 'Frederico G. Guimarães'] | Multi-objective approach for robot motion planning in search tasks | 658,328 |
Telerobotic Surgery: Fuzzy Path Planning Control for a Telerobotic Assistant Surgery | ['Rahma Boucetta'] | Telerobotic Surgery: Fuzzy Path Planning Control for a Telerobotic Assistant Surgery | 631,278 |
A rooted acyclic digraph N with labeled leaves displays a tree T when there exists a way to select a unique parent of each hybrid vertex resulting in the tree T. Let Tr(N) denote the set of all trees displayed by the network N. In general, there may be many other networks M such that Tr(M) = Tr(N). A network is regular if it is isomorphic with its cover digraph. If N is regular and D is a collection of trees displayed by N, this paper studies some procedures to try to reconstruct N given D. If the input is D = Tr(N), one procedure is described which will reconstruct N. Hence, if N and M are regular networks and Tr(N) = Tr(M), it follows that N = M, proving that a regular network is uniquely determined by its displayed trees. If D is a (usually very much smaller) collection of displayed trees that satisfies certain hypotheses, modifications of the procedure will still reconstruct N given D. | ['Stephen J. Willson'] | Regular Networks Can be Uniquely Constructed from Their Trees | 91,196 |
This paper presents a model-based optimization strategy for the vapor compression refrigeration cycle. The optimization problem is formulated as minimizing the total operating cost of all energy-consuming devices, with mechanical limitations, component interactions, environment conditions and cooling load demands as constraints. A genetic algorithm is utilized to calculate the optimal set point under different operating conditions. The comparison of simulation results between the proposed algorithm and traditional on-off control verifies the energy-saving effect of the proposed method. | ['Lei Zhao', 'Wenjian Cai', 'Xudong Ding'] | Optimization of vapor compression cycle based on genetic algorithm | 925,870 |
Reconstruction of 3D scene geometry is an important element for scene understanding, autonomous vehicle and robot navigation, image retrieval, and 3D television. We propose accounting for the inherent structure of the visual world when trying to solve the scene reconstruction problem. Consequently, we identify geometric scene categorization as the first step toward robust and efficient depth estimation from single images. We introduce 15 typical 3D scene geometries called stages, each with a unique depth profile, which roughly correspond to a large majority of broadcast video frames. Stage information serves as a first approximation of global depth, narrowing down the search space in depth estimation and object localization. We propose different sets of low-level features for depth estimation, and perform stage classification on two diverse data sets of television broadcasts. Classification results demonstrate that stages can often be efficiently learned from low-dimensional image representations. | ['Vladimir Nedovic', 'Arnold W. M. Smeulders', 'Andre Redert', 'Jan-Mark Geusebroek'] | Stages as Models of Scene Geometry | 284,894 |
Active power filters (APF) can effectively compensate the harmonic current in the power grid. As the APF is a nonlinear, multivariable, strongly coupled system, designing its controller is complex. Thus, a mathematical model is proposed to make the control law expression effective and simple. Since the parameters of the grid-connected reactor are perturbed during operation by external uncertain factors, the harmonic compensation effect of the APF can become unstable. Therefore, an H ∞ robust control strategy against the parameter fluctuations is proposed. Simulation results show that the proposed control strategy has better parameter robustness than a PI control strategy. The approach also guarantees the accurate compensation and fast dynamic response of the APF. | ['Shengzhou Ke', 'Ying Chen', 'Shaowei Huang', 'Xiuqiong Huang'] | H ∞ robust control of APF considering the parameter perturbation of the grid-connected reactor | 920,500 |
Electro-thermal coupling is only one aspect of numerous interactions between physical domains in microsystems. Different physical effects govern the functionality of microsystems and the system-level modelling using standard electro-thermal tools is not easy. In order to predict potential failures in microsystem designs and reduce the costs of prototyping, it is important to involve the simulation of electro-thermal effects at the system level, early in the design process. Also, it is necessary to conduct a final verification of the complete system with all governing subsystems. This paper considers different issues of electro-thermal modelling for microsystems and proposes analogue simulators with hardware description languages as a tool for the system-level modelling. With increasing system complexity, the mixed abstraction modelling is the only way to achieve an optimal blend of the accuracy and the speed. | ['Mirko Jakovljevic', 'Peter A. Fotiu', 'Zeljko Mrcarica', 'Vanco B. Litovski', 'Helmut Detter'] | Electro-thermal simulation of microsystems with mixed abstraction modelling | 337,743 |
In 3D-HEVC, single depth intra mode has been applied and integrated into the depth intra skip mode for efficient depth map coding. With single depth intra mode, one 2N×2N prediction unit (PU) is predicted without a computationally expensive prediction process. In this paper, we propose a fast single depth intra mode decision method to address the high computational complexity of depth intra mode decision in 3D-HEVC. To remove unnecessary computation at the encoder, we decide on single depth intra mode early in order to prune the quadtree in 3D-HEVC. This paper characterizes the statistics of smooth depth map signals for depth intra modes and analyzes the distortion metrics of the view synthesis optimization functionality as a decision criterion. With this proposed criterion, a single depth intra mode for intra coding is detected and the hierarchical CU/PU selection for intra coding can be stopped in 3D-HEVC. As a consequence, the method exploits the correlation between hierarchical block-based video coding and coding unit (CU)/PU mode decision for depth map coding so that a large number of recursive rate-distortion cost calculations can be skipped. We demonstrate the effectiveness of our approach experimentally. The simulation results show that the proposed scheme achieves approximately 25.6% encoding time saving with a 0.07% video PSNR/total bitrate gain and a 0.18% synthesized view PSNR/total bitrate loss under the all-intra configuration. | ['Miok Kim', 'Nam Ling', 'Li Song'] | Fast single depth intra mode decision for depth map coding in 3D-HEVC | 685,170 |
Mobile devices provide people with a conduit to the rich information resources of the Web. With consent, the devices can also provide streams of information about search activity and location that can be used in population studies and real-time assistance. We analyzed geotagged mobile queries in a privacy-sensitive study of potential transitions from health information search to in-world healthcare utilization. We note differences in people's health information seeking before, during, and after the appearance of evidence that a medical facility has been visited. We find that we can accurately estimate statistics about such potential user engagement with healthcare providers. The findings highlight the promise of using geocoded search for sensing and predicting activities in the world. | ['Shuang-Hong Yang', 'Ryen W. White', 'Eric Horvitz'] | Pursuing insights about healthcare utilization via geocoded search queries | 35,742 |
The security concerns with outsourcing XML databases are well known. In the past few years researchers have proposed solutions to many of the concerns in the current outsourced database model. However one area remains relatively untouched, the securing of queries to outsourced XML databases. Most current research fails to even specify how the user will actually query the outsourced data. Therefore this paper proposes a new outsourced database model that allows for data and query confidentiality and data privacy. It also defines a structure for how the user will query the outsourced data. An access control method is also proposed to ensure data privacy in the proposed model. Through experimentation, it was found that the access control algorithm is a viable option for data privacy in the proposed outsourced XML database model. With a combination of the access control algorithm and the proposed data flow specified, the proposed model allows for queries to be secure from interception and modification from start to finish. | ['Brent Kimpton', 'Eric Pardede'] | Securing Queries to Outsourced XML Databases | 114,643 |
A bipartite graph G is bipancyclic if G has a cycle of length l for every even 4 ≤ l ≤ |V(G)|. For a bipancyclic graph G and any edge e, G is edge-bipancyclic if e lies on a cycle of any even length l of G. In this paper, we show that the bubble-sort graph Bn is bipancyclic for n ≥ 4 and also show that it is edge-bipancyclic for n ≥ 5. Assume that F is a subset of E(Bn). We prove that Bn - F is bipancyclic, when n ≥ 4 and |F| ≤ n-3. Since Bn is a (n - 1)-regular graph, this result is optimal in the worst case. | ['Yosuke Kikuchi', 'Toru Araki'] | Edge-bipancyclicity and edge-fault-tolerant bipancyclicity of bubble-sort graphs | 59,061 |
We explore the possibility of employing Alexandroff pretopologies as structures on the digital plane Z^2 convenient for the study of geometric and topological properties of digital images. These pretopologies are known to be in one-to-one correspondence with reflexive binary relations, so that graph-theoretic methods may be used when investigating them. We discuss such Alexandroff pretopologies on Z^2 that possess a rich variety of digital Jordan curves obtained as circuits in a natural graph with the vertex set Z^2. Of these pretopologies, we focus on the minimal ones and study their quotient pretopologies on Z^2, which are shown to allow for various digital Jordan curve theorems. We also develop a method for identifying Jordan curves in the minimal pretopological spaces by using Jordan curves in their quotient spaces. Utilizing this method, we conclude the paper with proving a digital Jordan curve theorem for the minimal pretopologies. | ['Josef Šlapal'] | Alexandroff pretopologies for structuring the digital plane | 835,393 |
Fact-oriented conceptual modelling begins with the search for facts about a universe of discourse (UoD). These facts may be obtained from many sources, including information systems reports, tables, manuals and descriptive narrative both verbal and written. This paper presents some initial findings that support the use of discourse analysis techniques as an approach to developing elementary fact based sentences for information systems conceptual schema development from written text. Although this discussion paper only considers the NIAM (fact-oriented) conceptual schema modelling method, the IS087 report from which the research case study is taken describes other conceptual methods for which the research contained in this paper could be applicable (e.g. Entity Relationship analysis). The case study could be modelled exactly in the form in which the text is initially found, but grammatical analysis focuses consideration on alternative, potentially better, expressions of a sentence, a theme which is described and demonstrated. As a result of having applied grammatical sentence simplification with co-ordinate clause splitting, each sentence could be expressed as a complete, finite, independent collection of declarative simple statements. The outcome from the application of the techniques described provides at a minimum a discourse analysis of descriptive narrative which will have retained its meaning and contextual integrity while at the same time providing a simplified and independent clause representation for input to the fact-oriented conceptual schema modelling procedure. | ['Bruce A. Calway', 'James A. Sykes'] | Grammatical Conversion of Descriptive Narrative - an application of discourse analysis in conceptual modelling | 241,793 |
A prime objective of modeling genetic regulatory networks is the identification of potential targets for therapeutic intervention. To date, optimal stochastic intervention has been studied in the context of probabilistic Boolean networks, with the control policy based on the transition probability matrix of the associated Markov chain and dynamic programming used to find optimal control policies. Dynamical programming algorithms are problematic owing to their high computational complexity. Two additional computationally burdensome issues that arise are the potential for controlling the network and identifying the best gene for intervention. This paper proposes an algorithm based on mean first-passage time that assigns a stationary control policy for each gene candidate. It serves as an approximation to an optimal control policy and, owing to its reduced computational complexity, can be used to predict the best control gene. Once the best control gene is identified, one can derive an optimal policy or simply utilize the approximate policy for this gene when the network size precludes a direct application of dynamic programming algorithms. A salient point is that the proposed algorithm can be model-free. It can be directly designed from time-course data without having to infer the transition probability matrix of the network. | ['Golnaz Vahedi', 'Babak Faryabi', 'Jean-Francois Chamberland', 'Aniruddha Datta', 'Edward R. Dougherty'] | Intervention in Gene Regulatory Networks via a Stationary Mean-First-Passage-Time Control Policy | 424,274 |
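For intuition, the mean first-passage time (MFPT) to a target state of a Markov chain with transition matrix P can be computed by solving the linear system (I - Q) m = 1, where Q is P with the target state's row and column removed. The sketch below is this standard textbook computation from a known transition matrix; note that the paper's own algorithm is designed to work model-free from time-course data, so the code is an illustration of the underlying quantity, not of the proposed method.

```python
import numpy as np

def mean_first_passage_times(P, target):
    """Mean first-passage times to `target` from each non-target state:
    solve (I - Q) m = 1, where Q removes the target row/column of P."""
    n = P.shape[0]
    keep = [i for i in range(n) if i != target]
    Q = P[np.ix_(keep, keep)]
    m = np.linalg.solve(np.eye(len(keep)) - Q, np.ones(len(keep)))
    return dict(zip(keep, m))

# Toy 3-state chain (rows sum to 1); state 2 plays the role of the
# desirable state an intervention should reach quickly.
P = np.array([[0.50, 0.50, 0.00],
              [0.25, 0.25, 0.50],
              [0.00, 0.50, 0.50]])
print(mean_first_passage_times(P, target=2))
```

In a control setting of this kind, candidate control genes could be ranked by how much their associated policy shortens these passage times to desirable states.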
Geotagged tweets allow one to extract geo-information trends, search for local events, and identify natural disasters. In this paper, we propose a Hidden-Markov-based model to integrate tweet contents and user movements for geotagging. A language model is obtained for different locations from training datasets, and the movements of users among cities are analyzed. The home cities of users are considered in modeling the patterns of user movements. Evaluation on a large Twitter dataset shows that our method can significantly improve geotagging accuracy by 55% for home cities and 2% for other non-home cities, as well as reduce error distances by orders of magnitude compared with pure text-based methods. | ['Zhi Liu', 'Yan Huang'] | Where are You Tweeting?: A Context and User Movement Based Approach | 910,704 |
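A hidden-Markov geotagging model of this kind can be decoded with the standard Viterbi algorithm, with per-city language models as emissions and a movement model as transitions. The sketch below is a generic illustration only; the cities, hint words, and probabilities are all invented, not taken from the paper.

```python
import math

def viterbi_geotag(tweets, cities, emit_logp, trans_logp, init_logp):
    """Most likely city sequence for a user's tweets: emission = content
    language model, transition = movement model (all inputs illustrative)."""
    V = [{c: init_logp[c] + emit_logp(tweets[0], c) for c in cities}]
    back = []
    for t in tweets[1:]:
        scores, ptr = {}, {}
        for c in cities:
            prev, s = max(((p, V[-1][p] + trans_logp[p][c]) for p in cities),
                          key=lambda x: x[1])
            scores[c] = s + emit_logp(t, c)
            ptr[c] = prev
        V.append(scores)
        back.append(ptr)
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# Toy example: "beach" hints Miami, "loop" hints Chicago; users tend
# to stay in the same city between tweets (self-transition 0.9).
cities = ["Miami", "Chicago"]
hints = {"Miami": {"beach"}, "Chicago": {"loop"}}
def emit_logp(tweet, c):
    return sum(0.0 if w in hints[c] else -1.0 for w in tweet.split())
trans = {c: {d: math.log(0.9 if c == d else 0.1) for d in cities} for c in cities}
init = {c: math.log(0.5) for c in cities}
print(viterbi_geotag(["at the beach", "beach again"], cities, emit_logp, trans, init))
# → ['Miami', 'Miami']
```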
A Transductive Support Vector Machine Algorithm Based on Ant Colony Optimization | ['Xu Yu', 'Chun-nian Ren', 'Yanping Zhou', 'Yong Wang'] | A Transductive Support Vector Machine Algorithm Based on Ant Colony Optimization | 855,563 |
In this paper, we study the problem of power allocation for streaming multiple variable-bit-rate (VBR) videos in the downlink of a cellular network. We consider a deterministic model for VBR video traffic and finite playout buffers at the mobile users. The objective is to derive the optimal downlink power allocation for the VBR video sessions, such that the video data can be delivered in a timely fashion without causing playout buffer overflow or underflow. The formulated problem is a nonlinear nonconvex optimization problem. We analyze the convexity conditions for the formulated problem and propose a two-step greedy approach to solve the problem. We also develop a distributed algorithm based on the dual decomposition technique, which can be incorporated into the two-step solution procedure. The performance of the proposed algorithms is validated with simulations using VBR video traces under realistic scenarios. Copyright © 2012 John Wiley & Sons, Ltd. | ['Yingsong Huang', 'Shiwen Mao', 'Yihan Li'] | On downlink power allocation for multiuser variable‐bit‐rate video streaming | 300,314 |
Cloud databases achieve high availability by automatically replicating data on multiple nodes. However, the overhead caused by the replication process can lead to an increase in the mean and variance of transaction response times, causing unforeseen impacts on the offered quality-of-service (QoS). In this paper, we propose a measurement-driven methodology to predict the impact of replication on Database-as-a-Service (DBaaS) environments. Our methodology uses operational data to parameterize a closed queueing network model of the database cluster together with a Markov model that abstracts the dynamic replication process. Experiments on Amazon RDS show that our methodology predicts response time mean and percentiles with errors of just 1% and 15% respectively, and under operational conditions that are significantly different from the ones used for model parameterization. We show that our modeling approach surpasses standard modeling methods and illustrate the applicability of our methodology for automated DBaaS provisioning. | ['Rasha Osman', 'Juan F. Perez', 'Giuliano Casale'] | Quantifying the Impact of Replication on the Quality-of-Service in Cloud Databases | 890,659 |
We are concerned with the problem of counting the distinct flows on a high speed network link. Flow counting programs, which must peek at all incoming packets, must run very quickly in order to keep up with the high packet arrival rates of modern networks. Previous approaches for flow counting based on bitmap algorithms can underestimate the number of flows. We propose a new timestamp-vector algorithm that retains the fast estimation and small memory requirement of the bitmap-based algorithms, while reducing the possibility of underestimating the number of active flows. | ['Hyang-Ah Kim', "David R. O'Hallaron"] | Counting network flows in real time | 125,800 |
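The abstract does not spell out the timestamp-vector algorithm, so the sketch below shows one plausible reading as an assumption: replace each bit of a linear-counting bitmap with a last-seen timestamp, count a slot as empty when its timestamp has aged out of the measurement window, and apply the usual linear-counting estimate. The table size, hash choice, and estimator here are all illustrative.

```python
import hashlib
import math

class TimestampVector:
    """Sketch of timestamp-based distinct-flow counting (illustrative):
    like linear counting, but each slot stores the last time its hash
    bucket was hit, so stale flows age out without clearing the table."""
    def __init__(self, m=1024, window=60.0):
        self.m, self.window = m, window
        self.slots = [float("-inf")] * m

    def observe(self, flow_id, now):
        h = int(hashlib.sha256(flow_id.encode()).hexdigest(), 16) % self.m
        self.slots[h] = now

    def estimate(self, now):
        empty = sum(1 for t in self.slots if now - t > self.window)
        if empty == 0:
            return float("inf")  # table saturated; estimate unusable
        return -self.m * math.log(empty / self.m)  # linear-counting estimate

tv = TimestampVector()
for i in range(200):
    tv.observe(f"10.0.0.{i % 50}:443", now=5.0)  # 50 distinct flows
print(round(tv.estimate(now=10.0)))
```

Because slots age out by timestamp rather than being cleared, the structure can be read at any time for the number of flows active within the last `window` seconds.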
Practical and efficient algorithms for concurrent data structures are difficult to construct and modify. Algorithms in the literature are often optimized for a specific setting, making it hard to separate the algorithmic insights from implementation details. The goal of this work is to systematically construct algorithms for a concurrent data structure starting from its sequential implementation. Towards that goal, we follow a construction process that combines manual steps corresponding to high-level insights with automatic exploration of implementation details. To assist us in this process, we built a new tool called Paraglider. The tool quickly explores large spaces of algorithms and uses bounded model checking to check linearizability of algorithms. Starting from a sequential implementation and assisted by the tool, we present the steps that we used to derive various highly-concurrent algorithms. Among these algorithms is a new fine-grained set data structure that provides a wait-free contains operation, and uses only the compare-and-swap (CAS) primitive for synchronization. | ['Martin T. Vechev', 'Eran Yahav'] | Deriving linearizable fine-grained concurrent objects | 299,304 |
Shared randomness is an important resource in cryptography. It is well-known that in the information-theoretic setting there is no protocol that allows two parties who do not trust each other to obtain a uniformly distributed shared bit string solely by exchanging messages such that a dishonest party can not influence the result. On the other hand, in the situation where the two parties already share a random bit string and want to use it in order to construct a longer random bit string, it is only known to be impossible when the protocols are restricted in the number of messages to be exchanged. In this paper we prove that it is also impossible when arbitrarily many messages are allowed. | ['Gregor Seiler', 'Ueli Maurer'] | On the impossibility of information-theoretic composable coin toss extension | 872,192 |
This paper describes an object-based video coding scheme (OBVC) that was proposed by Texas Instruments to the emerging ISO MPEG-4 video compression standardization effort. This technique achieves efficient compression by separating coherently moving objects from stationary background and compactly representing their shape, motion, and the content. In addition to providing improved coding efficiency at very low bit rates, the technique provides the ability to selectively encode, decode, and manipulate individual objects in a video stream. This technique supports all three MPEG-4 functionalities tested in the November 1995 tests, namely, improved coding efficiency, error resilience, and content scalability. This paper also describes the error protection and concealment schemes that enable robust transmission of compressed video over noisy communication channels such as analog phone lines and wireless links. The noise introduced by the communication channel is characterized by both burst errors and random bit errors. Applications of this object-based video coding technology include videoconferencing, video telephony, desktop multimedia, and surveillance video. | ['Raj Talluri', 'Karen L. Oehler', 'Tom Barmon', 'Jon D. Courtney', 'Arnab Das', 'Judy Liao'] | A robust, scalable, object-based video compression technique for very low bit-rate coding | 28,257 |
Concealing Secrets in Embedded Processors Designs. | ['Hannes Gross', 'Manuel Jelinek', 'Stefan Mangard', 'Thomas Unterluggauer', 'Mario Werner'] | Concealing Secrets in Embedded Processors Designs. | 988,095 |
Ultra-wideband (UWB) radios are expected to be the next generation of transmission systems that can support high-data-rate and power-constrained applications. Since UWB signals have wide bandwidth, two problems arise, namely the high-speed analog-to-digital converter (ADC) required at the receiver side and the coexistence with other narrowband interference (NBI) systems that share the same part of the spectrum. The two problems can be addressed using compressive sensing (CS), based on the fact that narrowband signals have a sparse representation in the discrete cosine transform (DCT) domain. The problem becomes more complicated when the multiuser case is considered. For training-based NBI mitigation with CS, three groups of pilot symbols are used to estimate the NBI signal subspace, the UWB signal subspace, and to provide information about the channel. This work aims to evaluate the system performance in the presence of multiuser interference in addition to the NBI. The paper extends the derivations to include the effect of the multiuser interference together with the NBI and the additive white Gaussian noise. Direct sequence combined with time-hopping coding is applied as the multiple access technique. Simulations illustrate that as more users become active, the performance of the system degrades. Additionally, when the number of secondary users increases, multiuser interference becomes the most dominant factor affecting the system performance, irrespective of the NBI. | ['Saleh A. Alawsh', 'Ali H. Muqaibel'] | Compressive sensing based NBI mitigation in UWB systems in the presence of multiuser interference | 887,845 |
Formal Verification of Steady-State Errors in Unity-Feedback Control Systems | ['Muhammad Ahmad', 'Osman Hasan'] | Formal Verification of Steady-State Errors in Unity-Feedback Control Systems | 623,565 |
This paper examines the collaborative process of developing Arc, a computer numerical controlled (CNC) engraving tool for ceramics that offers a new window onto traditional forms of craft. In reflecting on this case and scholarship from the social sciences, we make two contributions. First, we show that fabrication tools may integrate multiple and distinct roles (as copiers, translators and connectors) in their production of form, selectively limiting the agency of the maker and machine. Second, we situate small-scale manufacturing in a wider historical context of "mimetic machinery": machines for mechanical reproduction that draw their symbolic power from a material connection with the phenomena represented (in this case, sound and gesture). We end by sharing lessons learned for fabrication research based on this study. | ['Hidekazu Saegusa', 'Thomas Tran', 'Daniela K. Rosner'] | Mimetic Machines: Collaborative Interventions in Digital Fabrication with Arc | 725,298 |
FDB is an in-memory query engine for factorised databases, which are relational databases that use compact factorised representations at the physical layer to reduce data redundancy and boost query performance. We demonstrate FDB using real data sets from IMDB, DBLP, and the NELL repository of facts learned from Web pages. The users can inspect factorisations as well as plans used by FDB to compute factorised results of select-project-join queries on factorised databases. | ['Nurzhan Bakibayev', 'Dan Olteanu', 'Jakub Závodný'] | Demonstration of the FDB query engine for factorised databases | 48,676 |
Search engines award their advertising space through keyword auctions. Some bidders may adopt an aggressive bidding strategy known as Competitor Busting, where they submit higher bids than what is strictly needed to win the auction so as to oust the other bidders. Despite the widespread concern for such practice, we show that the Competitor Busting strategy is largely ineffective. The lifetime of non-aggressive bidders, their presence in the auction, and the proportion of slots they are awarded are not affected by the presence of aggressive bidders. These conclusions are valid as long as the aggressive bidders do not have a significant budget advantage over non-aggressive ones. | ['Maurizio Naldi', 'Antonio Pavignani', 'Antonio Grillo', 'Alessandro Lentini', 'Giuseppe F. Italiano'] | The Competitor Busting Strategy in Keyword Auctions: Who's Worst Hit? | 523,934 |
Presents a hybrid modeling technique that is used for the first time in hidden Markov model-based handwriting recognition. This new approach combines the advantages of discrete and continuous Markov models and it is shown that this is especially suitable for modeling the features typically used in handwriting recognition. The performance of this hybrid technique is demonstrated by an extensive comparison with traditional modeling techniques for a difficult large vocabulary handwriting recognition task. | ['Gerhard Rigoll', 'Andreas Kosmala', 'Daniel Willett'] | A new hybrid approach to large vocabulary cursive handwriting recognition | 518,595 |
We give a novel algorithm for finding a parsimonious context tree (PCT) that best fits a given data set. PCTs extend traditional context trees by allowing context-specific grouping of the states of a context variable, also enabling skipping the variable. However, they gain statistical efficiency at the cost of computational efficiency, as the search space of PCTs is of tremendous size. We propose pruning rules based on efficiently computable score upper bounds with the aim of reducing this search space significantly. While our concrete bounds exploit properties of the BIC score, the ideas apply also to other scoring functions. Empirical results show that our algorithm is typically an order-of-magnitude faster than a recently proposed memory-intensive algorithm, or alternatively, about equally fast but using dramatically less memory. | ['Ralf Eggeling', 'Mikko Koivisto'] | Pruning rules for learning parsimonious context trees | 959,773 |
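The pruning principle described above — discard a candidate branch whenever an efficiently computable upper bound on its score cannot beat the incumbent — is the classic branch-and-bound idea. A minimal, self-contained sketch on a generic subset-selection search (not the authors' PCT-specific algorithm or BIC bounds):

```python
def best_subset(items, score, upper_bound):
    """Branch-and-bound over subsets of `items`. `score` evaluates a
    complete subset; `upper_bound(chosen, remaining)` must return a
    value no smaller than the score of any completion of `chosen`.
    Branches whose bound cannot improve on the incumbent are pruned,
    shrinking the search space without losing optimality."""
    best = (float("-inf"), None)

    def search(chosen, remaining):
        nonlocal best
        if not remaining:
            s = score(chosen)
            if s > best[0]:
                best = (s, list(chosen))
            return
        if upper_bound(chosen, remaining) <= best[0]:
            return                        # pruning rule fires here
        x, rest = remaining[0], remaining[1:]
        search(chosen + [x], rest)        # branch: include x
        search(chosen, rest)              # branch: exclude x

    search([], list(items))
    return best
```

The guarantee is the same as in the paper's setting: as long as the bound never underestimates, pruning only removes branches that provably cannot contain the optimum.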
In spontaneous speech understanding a sophisticated integration of speech recognition and language processing is especially crucial. However, the two modules are traditionally designed independently, with independent linguistic rules. In Japanese speech recognition the bunsetsu phrase is the basic processing unit and in language processing the sentence is the basic unit. This difference has made it impractical to use a unique set of linguistic rules for both types of processing. Further, spontaneous speech contains unexpected utterances other than well-formed sentences, while linguistic rules for both speech and language processing expect well-formed sentences. They therefore fail to process everyday spoken language. To bridge the gap between speech and language processing, we propose that pauses be treated as phrase demarcators and that the interpausal phrase be the basic common processing unit. And to treat the linguistic phenomena of spoken language properly, we survey relevant features in spontaneous speech data. We then examine the effect of integrating pausal and spontaneous speech phenomena into syntactic rules for speech recognition, using 118 sentences. Our experiments show that incorporating pausal phenomena as purely syntactic constraints degrades recognition accuracy considerably, and that further degradation results if some additional spontaneous speech features are also incorporated. | ['Junko Hosaka', 'Mark Seligman', 'Harald Singer'] | Pause as a phrase demarcator for speech and language processing | 280,554 |
This paper presents a thorough experimental study on key generation principles, i.e., temporal variation, channel reciprocity, and spatial decorrelation, through a testbed constructed by using wireless open-access research platform. It is the first comprehensive study through: 1) carrying out a number of experiments in different multipath environments, including an anechoic chamber, a reverberation chamber, and an indoor office environment, which represents little, rich, and moderate multipath, respectively; 2) considering static, object moving, and mobile scenarios in these environments, which represents different levels of channel dynamicity; and 3) studying two most popular channel parameters, i.e., channel state information and received signal strength. Through results collected from over a hundred tests, this paper offers insights to the design of a secure and efficient key generation system. We show that multipath is essential and beneficial to key generation as it increases the channel randomness. We also find that the movement of users/objects can help introduce temporal variation/randomness and help users reach an agreement on the keys. This paper complements existing research by experiments constructed by a new hardware platform. | ['Junqing Zhang', 'Roger F. Woods', 'Trung Q. Duong', 'Alan Marshall', 'Yuan Ding', 'Yi Huang', 'Qian Xu'] | Experimental Study on Key Generation for Physical Layer Security in Wireless Communications | 884,882 |
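A common baseline for turning reciprocal channel measurements into key bits — an illustrative assumption on my part, not necessarily the quantizer used in this testbed study — is median-threshold quantization of the RSS trace, after which channel reciprocity can be checked via the two parties' bit disagreement rate:

```python
def rss_to_bits(rss):
    """Median-threshold quantizer: RSS samples above the trace median
    become 1, others become 0. The median adapts the threshold to the
    local signal level, so both parties derive it from their own trace."""
    med = sorted(rss)[len(rss) // 2]
    return [1 if r > med else 0 for r in rss]

def key_disagreement_rate(bits_a, bits_b):
    """Fraction of positions where Alice's and Bob's bits differ; a
    direct measure of how well channel reciprocity held."""
    return sum(a != b for a, b in zip(bits_a, bits_b)) / len(bits_a)
```

Perfect reciprocity yields a zero disagreement rate; in practice, hardware noise and time-division measurement offsets leave a residual rate that information reconciliation must then remove.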
In this study, the upward (I US ) and downward (I DS ) slopes of the QRS complex are proposed as indices for quantifying ischemia-induced electrocardiogram (ECG) changes. Using ECG recordings acquired before and during percutaneous transluminal coronary angioplasty (PTCA), it is found that the QRS slopes are considerably less steep during artery occlusion, in particular for I DS . With respect to ischemia detection, the slope indices outperform the often used high-frequency index (defined as the root mean square (rms) of the bandpass-filtered QRS signal for the frequency band 150-250 Hz) as the mean relative factors of change are much larger for I US and I DS than for the high-frequency index (6.9 and 7.3 versus 3.7). The superior performance of the slope indices is equally valid when other frequency bands of the high-frequency index are investigated (the optimum one is found to be 125-175 Hz). Employing a simulation model in which the slopes of a template QRS are altered by different techniques, it is found that the slope changes observed during PTCA are mostly due to a widening of the QRS complex or a decrease of its amplitudes, but not a reduction of its high-frequency content or a combination of this and the previous effects. It is concluded that QRS slope information can be used as an adjunct to the conventional ST segment analysis in the monitoring of myocardial ischemia. | ['Esther Pueyo', 'Leif Sörnmo', 'Pablo Laguna'] | QRS Slopes for Detection and Characterization of Myocardial Ischemia | 114,835 |
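The two slope indices can be sketched directly from a sampled beat once the R-peak index is known: take the steepest positive first difference on the segment entering the peak and the steepest negative one on the segment leaving it. The window length and the plain first-difference estimator below are illustrative choices, not taken from the paper:

```python
def qrs_slopes(beat, r_idx, fs=1000.0, win_ms=30):
    """Sketch of the upward (I_US) and downward (I_DS) QRS slope
    indices for one beat. `beat` is a list of amplitude samples,
    `r_idx` the R-peak sample index, `fs` the sampling rate in Hz.
    Returns slopes in amplitude units per second."""
    w = int(win_ms * fs / 1000.0)

    def slopes(seg):
        # first differences scaled to per-second units
        return [(b - a) * fs for a, b in zip(seg, seg[1:])]

    i_us = max(slopes(beat[max(0, r_idx - w):r_idx + 1]))
    i_ds = min(slopes(beat[r_idx:r_idx + w + 1]))
    return i_us, i_ds
```

Under ischemia the paper reports both indices becoming less steep, so |I_US| and |I_DS| computed this way would shrink relative to baseline beats.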
Linear convergence of an algorithm for computing the largest eigenvalue of a nonnegative tensor | ['Liping Zhang', 'Liqun Qi'] | Linear convergence of an algorithm for computing the largest eigenvalue of a nonnegative tensor | 417,744 |
Fortune cookie management for information technology support professionals | ['John E. Bucher'] | Fortune cookie management for information technology support professionals | 414,270 |
Artifacts that have been labeled as ontologies have many different qualities and intended outcomes. This is particularly true of bio-ontologies, where high demand has led to a rapid growth in the number of these artifacts. Good communication between the human agents involved in the life cycle of ontologies is essential for the ontologist to encode the right knowledge in the ontology. Not only this, but it should be encoded such that subsequent retrieval of the knowledge from the ontology by any agent can be clear and precise. The ontologist can encode ontological statements, for interpretation by a computer agent, or meta-ontological statements, for interpretation by human agents. We consider how the current communication between agents and ontologies produces drawbacks that add to the considerable overheads associated with ontology development. We describe the processes of communication between human agents and ontologies as Ontology Comprehension. We then suggest how these processes could be augmented, particularly with the use of Web 2.0 ideas. By exposing and enhancing the social interactions involved in ontology comprehension, development overheads are potentially reduced and the prospect of ontology sharing and reuse is improved. | ['A.A.P. Gibson', 'Katy Wolstencroft', 'Robert Stevens'] | Promotion of Ontological Comprehension: Exposing Terms and Metadata with Web 2.0 | 341,444 |
JPEG2000 is the latest international standard for compression of still images. Although the JPEG2000 codec is designed to compress images, we illustrate that it can also be used to compress other signals. As an example, we illustrate how the JPEG2000 codec can be used to compress electrocardiogram (ECG) data. Experiments using the MIT-BIH arrhythmia database illustrate that the proposed approach outperforms many existing ECG compression schemes. The proposed scheme allows the use of existing hardware and software JPEG2000 codecs for ECG compression, and can be especially useful in eliminating the need for specialized hardware development. The desirable characteristics of the JPEG2000 codec, such as precise rate control and progressive quality, are retained in the presented scheme. The goal of this paper is to demonstrate the ECG application as an example. This example can be extended to other signals that exist within the consumer electronics realm. | ['Ali Bilgin', 'Michael W. Marcellin', 'Maria I. Altbach'] | Compression of electrocardiogram signals using JPEG2000 | 341,123 |
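The trick of feeding a 1D signal to an image codec is essentially a reshape: cut the signal into fixed-length rows and stack them into a matrix so a 2D codec can exploit the correlation between successive (quasi-periodic) rows. A minimal sketch of just the framing step, with an illustrative fixed row length (ECG schemes often align rows to heartbeats instead):

```python
def frame_signal(signal, row_len, pad=0):
    """Reshape a 1D sample list into a row-major 2D matrix so that a
    2D image codec (e.g. JPEG2000) can compress it; the final row is
    padded to full length."""
    rows = []
    for i in range(0, len(signal), row_len):
        row = signal[i:i + row_len]
        rows.append(row + [pad] * (row_len - len(row)))
    return rows

def unframe_signal(matrix, length):
    """Inverse of frame_signal: flatten row-major and drop the padding."""
    flat = [x for row in matrix for x in row]
    return flat[:length]
```

After decoding, the decompressed matrix is unframed back to the original sample count, which is why the signal length must be stored alongside the codestream.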
Innovation of Practice-Based Teaching Strategy in University via Web | ['Jiangyu Li', 'Shaogang Zhang'] | Innovation of Practice-Based Teaching Strategy in University via Web | 320,978 |
A multibeam satellite communications network serving multiple zones with S-ALOHA random access uplinks and dynamically switched transponders in the downlinks is studied. The overhead of switching transponders between zones may degrade the performance of the system significantly. Two different strategies are introduced and studied. In the guard time strategy, each slot time is equal to the packet transmission time plus the transponder switching time, allowing the transponder to be pointed to a new zone at the beginning of each slot. In the idle waiting strategy, each slot time is equal to the packet transmission time. If a transponder is switched to a new zone, it will take k slot time where k is the smallest integer greater than the switching time divided by the slot time. The throughputs of these two strategies are analyzed and compared. > | ['Cheng-Shong Wu', 'Victor O. K. Li'] | Random access for a multibeam satellite with dynamic transponder switching | 468,624 |
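The trade-off between the two strategies can be illustrated with the textbook slotted-ALOHA throughput S = G·e^(-G): the guard-time strategy pays the switching overhead in every slot, while idle waiting pays it only when a switch actually occurs. A rough numeric sketch under simplifying assumptions (Poisson offered load G, fixed switching probability per slot) — not the paper's full analysis:

```python
import math

def guard_time_throughput(G, t_pkt, t_switch):
    """Guard-time strategy: every slot is stretched to t_pkt + t_switch,
    so the slotted-ALOHA throughput G*e^-G is scaled by the fraction
    of each slot spent actually transmitting."""
    return G * math.exp(-G) * t_pkt / (t_pkt + t_switch)

def idle_waiting_throughput(G, t_pkt, t_switch, p_switch):
    """Idle-waiting strategy: slots last t_pkt, but a transponder switch
    (probability p_switch per slot) blanks ceil(t_switch / t_pkt) slots."""
    k = math.ceil(t_switch / t_pkt)
    return G * math.exp(-G) / (1.0 + p_switch * k)
```

For example, at the optimal load G = 1 with t_switch = 0.2·t_pkt, guard time caps throughput at e⁻¹/1.2, whereas idle waiting with infrequent switching (p_switch = 0.1) achieves the higher e⁻¹/1.1 — matching the intuition that idle waiting wins when switches are rare.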
A searching method based on 'what' and 'how' problem descriptions, within a special software component library, is presented. The 'what' problem description is based on a high-level representation of general features of initial and final data the problem can process or produce. The 'how' problem description is based on another high-level representation of the algorithmic features of the problem solution. In this paper, the basic idea of the 'what' and 'how' problem descriptions, as well as attributes and corresponding multimedia symbols, are considered. Some examples of interface panels that use these attributes are also described. | ['Yutaka Watanobe', 'Rentaro Yoshioka', 'Nikolay N. Mirenkov'] | A searching method based on problem description and algorithmic features | 27,897 |
In the field of undersea research, underwater vehicles usually carry camera systems for recording. The captured image or video often has two undesired characteristics: color distortion and low visibility. This is because light is exponentially attenuated while penetrating through water, and the degree of attenuation depends on the wavelength of the spectrum. This paper simplifies the Jaffe-McGlamery optical model and proposes an effective algorithm to recover underwater images. In our approach, a red-dark channel prior was defined and derived to estimate the background light and the transmission. The visibility of the scene was compensated by the object-camera distance to recover the colors of the background and objects. Subsequently, by analyzing the physical property of the point spread function, we developed a simple but efficient low-pass filter to deblur degraded underwater images. A wide variety of underwater images with different scenarios were used for the experiments. The experimental results indicated that the proposed algorithm effectively recovered underwater images while eliminating the influence of absorption and scattering. We believe that this new restoration algorithm is promising in many underwater image processing applications. | ['Chia-Yang Cheng', 'Chia-Chi Sung', 'Herng-Hua Chang'] | Underwater image restoration by red-dark channel prior and point spread function deconvolution | 652,595 |
Making Spectrum Reform "Thinkable". | ['James B. Speta'] | Making Spectrum Reform "Thinkable". | 798,395 |
As transistor sizes shrink, interconnects represent an increasing bottleneck for chip designers. Several groups are developing new interconnection methods and system architectures to cope with this trend. New architectures require new methods for high-level application mapping and hardware/software codesign. We present high-level scheduling and interconnect topology synthesis techniques for embedded multiprocessor systems-on-chip that are streamlined for one or more digital signal processing applications. That is, we seek to synthesize an application-specific interconnect topology. We show that flexible interconnect topologies utilizing low-hop communication between processors offer advantages for reduced power and latency. We show that existing multiprocessor scheduling algorithms can deadlock if the topology graph is not strongly connected, or if a constraint is imposed on the maximum number of hops allowed for communication. We detail an efficient algorithm that can be used in conjunction with existing scheduling algorithms for avoiding this deadlock. We show that it is advantageous to perform application scheduling and interconnect synthesis jointly, and present a probabilistic scheduling/interconnect algorithm that utilizes graph isomorphism to pare the design space. | ['Neal K. Bambha', 'Shuvra S. Bhattacharyya'] | Joint application mapping/interconnect synthesis techniques for embedded chip-scale multiprocessors | 333,913 |
This communication discusses considerations for building a spatialized computer model that formalizes a prospective assessment of the environmental impacts of public land-use policies. When applied in West Africa, this objective faces several challenges specific to the sub-region: on the one hand, the difficulty of assembling a data set robust enough to enable the modelling process, and on the other hand, the integration of the high variability of its environment, both biophysical (chiefly rainfall) and social (chiefly land tenure). | ['Mahamadou Belem', 'Melio Sáenz', 'Mehdi Saqalli', 'Nicolas Maestripieri'] | Integrated assessment modelling of environmental impacts of land use policy in West-Africa: A conceptual model | 670,425 |
A number of signature schemes and standards have been recently designed, based on the discrete logarithm problem. Examples of standards are the DSA and the KCDSA. Very few formal design/security validations have already been conducted for both the KCDSA and the DSA, but in the "full" so-called random oracle model. In this paper we try to minimize the use of ideal hash functions for several Discrete Logarithm (DSS-like) signatures (abstracted as generic schemes). Namely, we show that the following holds: "if they can be broken by an existential forgery using an adaptively chosen-message attack then either the discrete logarithm problem can be solved, or some hash function can be distinguished from an ideal one, or multi-collisions can be found." Thus for these signature schemes, either they are equivalent to the discrete logarithm problem or there is an attack that takes advantage of properties of practical hash functions (SHA-1 or whichever high quality cryptographic hash function is used). What is interesting is that the schemes we discuss include KCDSA and slight variations of DSA. Further, since our schemes are very close to their standard counterparts they benefit from their desired properties (efficiency of computation/space, employment of certain mathematical operations and wide applicability to various algebraic structures). We feel that adding variants with strong validation of security is important to this family of signature schemes since, as we have experienced in the recent past, lack of such validation has led to attacks on standard schemes, years after their introduction. In addition, schemes with formal validation which is made public, may ease global standardization since they neutralize much of the suspicions regarding potential knowledge gaps and unfair advantages gained by the scheme designer's country (e.g. the NSA being the designers of DSS). | ['Ernest F. Brickell', 'David Pointcheval', 'Serge Vaudenay', 'Moti Yung'] | Design Validations for Discrete Logarithm Based Signature Schemes | 143,205 |
Feedback has been shown to affect performance when using a Brain-Computer Interface (BCI) based on sensorimotor rhythms. In contrast, little is known about the influence of feedback on P300-based BCIs. There is still an open question whether feedback affects the regulation of P300 and consequently the operation of P300-based BCIs. In this paper, for the first time, the influence of feedback on the P300-based BCI speller task is systematically assessed. For this purpose, 24 healthy participants performed the classic P300-based BCI speller task, while only half of them received feedback. Importantly, the number of flashes per letter was reduced on a regular basis in order to increase the frequency of providing feedback. Experimental results showed that feedback could significantly improve the P300-based BCI speller performance, if it was provided in short time intervals (e.g. in sequences as short as 4 to 6 flashes per row/column). Moreover, our offline analysis showed that providing feedback remarkably enhanced the relevant ERP patterns and attenuated the irrelevant ERP patterns, such that the discrimination between target and non-target EEG trials increased. | ['Mahnaz Arvaneh', 'Tomas Ward', 'Ian H. Robertson'] | Effects of Feedback Latency on P300-based Brain-computer Interface | 597,242 |
Jace is a multi-threaded Java environment that makes it possible to implement and execute distributed asynchronous iterative algorithms. This class of algorithms is very suitable in a grid computing context because it suppresses all synchronizations between computation nodes, tolerates the loss of messages, and enables the overlapping of communication with computation. The aim of this paper is to present new results obtained with the improved version of Jace, a complete rewrite of the environment. Several functionalities have been added to achieve better performance. In particular, the communication and task management layers have been completely redesigned. Our evaluation is based on solving scientific applications using the French Grid'5000 platform and shows that the new version of Jace performs better than the old one. | ['Jacques M. Bahi', 'Raphaël Couturier', 'David Laiymani', 'Kamel Mazouzi'] | Distributed Asynchronous Iterative Algorithms: New Experimentations with the Jace Environment | 185,331 |
Motivation: Various computational methods have been proposed to tackle the problem of predicting the peptide binding ability for a specific MHC molecule. These methods are based on known binding peptide sequences. However, current available peptide databases do not have very abundant amounts of examples and are highly redundant. Existing studies show that MHC molecules can be classified into supertypes in terms of peptide-binding specificities. Therefore, we first give a method for reducing the redundancy in a given dataset based on information entropy, then present a novel approach for prediction by learning a predictive model from a dataset of binders for not only the molecule of interest but also for other MHC molecules. Results: We experimented on the HLA-A family with the binding nonamers of A1 supertype (HLA-A*0101, A*2601, A*2902, A*3002), A2 supertype (A*0201, A*0202, A*0203, A*0206, A*6802), A3 supertype (A*0301, A*1101, A*3101, A*3301, A*6801) and A24 supertype (A*2301 and A*2402), whose data were collected from six publicly available peptide databases and two private sources. The results show that our approach significantly improves the prediction accuracy of peptides that bind a specific HLA molecule when we combine binding data of HLA molecules in the same supertype. Our approach can thus be used to help find new binders for MHC molecules. Contact: [email protected] Supplementary information: Supplementary data are available at Bioinformatics online. | ['Shanfeng Zhu', 'Keiko Udaka', 'John Sidney', 'Alessandro Sette', 'Kiyoko F. Aoki-Kinoshita', 'Hiroshi Mamitsuka'] | Improving MHC binding peptide prediction by incorporating binding data of auxiliary MHC molecules | 99,837 |
This paper presents an experience in developing professional ethics through an approach that coherently integrates knowledge, teaching methodologies, and assessment. It has been implemented for students in both the Software Engineering and Computer Engineering degree programs of the Technical University of Madrid, in which professional ethics is studied as part of a required course. The contribution of this paper is a model for formative assessment that clarifies the learning goals, enhances the results, simplifies the scoring, and can be replicated in other contexts. A quasi-experimental study involving many of the students of the required course has been developed. To test the effectiveness of the teaching process, the analysis of ethical dilemmas and the use of deontological codes have been integrated, and a scoring rubric has been designed. Currently, this model is also being used to develop skills related to social responsibility and sustainability for undergraduate and postgraduate students in diverse academic contexts. | ['Rafael Miñano', 'Ángel Uruburu', 'Ana Moreno-Romero', 'Diego Pérez-López'] | Strategies for Teaching Professional Ethics to IT Engineering Degree Students and Evaluating the Result | 622,620 |
Phone placement, i.e., where the phone is carried/stored, is an important source of information for context-aware applications. Extracting information from the integrated smart phone sensors, such as motion, light and proximity, is a common technique for phone placement detection. In this paper, the efficiency of an accelerometer-only solution is explored, and it is investigated whether the phone position can be detected with high accuracy by analyzing the movement, orientation and rotation changes. The impact of these changes on the performance is analyzed individually and both in combination to explore which features are more efficient, whether they should be fused and, if yes, how they should be fused. Using three different datasets, collected from 35 people from eight different positions, the performance of different classification algorithms is explored. It is shown that while utilizing only motion information can achieve accuracies around 70%, this ratio increases up to 85% by utilizing information also from orientation and rotation changes. The performance of an accelerometer-only solution is compared to solutions where linear acceleration, gyroscope and magnetic field sensors are used, and it is shown that the accelerometer-only solution performs as well as utilizing other sensing information. Hence, it is not necessary to use extra sensing information where battery power consumption may increase. Additionally, I explore the impact of the performed activities on position recognition and show that the accelerometer-only solution can achieve 80% recognition accuracy with stationary activities where movement data are very limited. Finally, other phone placement problems, such as in-pocket and on-body detections, are also investigated, and higher accuracies, ranging from 88% to 93%, are reported, with an accelerometer-only solution. | ['Ozlem Durmaz Incel'] | Analysis of Movement, Orientation and Rotation-Based Sensing for Phone Placement Recognition | 240,289 |
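The three cues compared in the study — movement, orientation change, and rotation — can each be summarized from a raw accelerometer window. The sketch below uses my own illustrative feature definitions (magnitude variance, mean gravity tilt, mean frame-to-frame direction change), not the paper's exact feature set:

```python
import math

def placement_features(window):
    """window: list of (ax, ay, az) accelerometer samples in m/s^2.
    Returns three scalar cues for phone-placement classification:
      movement    - variance of the acceleration magnitude,
      orientation - mean tilt of the sensed gravity vs. the z axis (rad),
      rotation    - mean frame-to-frame change of acceleration direction (rad)."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in window]
    mean = sum(mags) / len(mags)
    movement = sum((m - mean) ** 2 for m in mags) / len(mags)

    orientation = sum(math.acos(max(-1.0, min(1.0, z / m)))
                      for (_, _, z), m in zip(window, mags)) / len(window)

    def angle(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return math.acos(max(-1.0, min(1.0, dot / (nu * nv))))

    rotation = (sum(angle(u, v) for u, v in zip(window, window[1:]))
                / max(1, len(window) - 1))
    return movement, orientation, rotation
```

A phone lying flat on a desk gives near-zero values for all three cues, while the same device in a trouser pocket during walking raises all of them — which is the separability the classifiers in the study exploit.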
With transistor mask costs soaring and the delays associated with full design re-spins escalating, post-mask Engineering Change Orders (ECOs) - design changes after the masks have been prepared - are increasingly carried out by keeping transistor masks intact and revising only the metal masks. In this paper, we propose a novel design flow for achieving technology remapping for post-mask ECOs. In contrast to conventional technology mapping and placement algorithms that have no notion of the quantity for each gate type and the location of placed spare/recycled cells, our flow ECO-Map provides an ideal scalable framework for achieving global optimization in a post-mask ECO scenario. Given the changed logic due to a functional ECO and a limited number of placed spare/recycled cells, ECO-Map finds a resource-feasible Boolean cover and optimally fits the changed logic into the available resources. This ensures minimal perturbation of the existing solution and keeps transistor masks intact, thus reducing non-recurring engineering (NRE) costs. Experiments performed on MCNC benchmarks show the effectiveness of our approach. | ['Nilesh Modi', 'Malgorzata Marek-Sadowska'] | ECO-Map: Technology remapping for post-mask ECO using simulated annealing | 59,636 |