abstract: string (length 5 to 10.1k)
authors: string (length 9 to 1.96k)
title: string (length 5 to 367)
__index_level_0__: int64 (1 to 1,000k)
Virtual project management in global organizations is both challenging and important. Being able to identify and apply best practices is an essential skill, as is an understanding of how to leverage the right technology for communication and information sharing. Based on a typology of virtual projects, and using the theoretical frame of patterns, we propose an integrative way of identifying and applying best practices for the management of virtual projects. We present an assessment approach that allows managers to determine the nature of their virtual projects and discover and apply patterns for managing them.
['Deepak Khazanchi', 'Ilze Zigurs']
An Assessment Framework for Discovering and Using Patterns in Virtual Project Management
315,805
This paper describes our submission, cmu--heafield--combo, to the WMT 2010 machine translation system combination task. Using constrained resources, we participated in all nine language pairs, namely translating English to and from Czech, French, German, and Spanish as well as combining English translations from multiple languages. Combination proceeds by aligning all pairs of system outputs then navigating the aligned outputs from left to right where each path is a candidate combination. Candidate combinations are scored by their length, agreement with the underlying systems, and a language model. On tuning data, improvement in BLEU over the best system depends on the language pair and ranges from 0.89% to 5.57% with mean 2.37%.
['Kenneth Heafield', 'Alon Lavie']
CMU Multi-Engine Machine Translation for WMT 2010
439,261
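The path-scoring idea above, combining candidate length, agreement with the underlying systems, and a language-model score, can be sketched in miniature. The snippet below is a toy stand-in, not the CMU system: the candidate set, features, and weights are all invented for illustration.

```python
import math
from collections import Counter

# hypothetical system outputs standing in for aligned candidate combinations
systems = ["the cat sat on the mat",
           "a cat sat on mat",
           "the cat is on the mat"]
candidates = systems  # the real system scores paths through aligned outputs

# tiny bigram language model estimated from the outputs themselves (toy stand-in)
unigrams, bigrams = Counter(), Counter()
for s in systems:
    toks = ["<s>"] + s.split()
    unigrams.update(toks[:-1])
    bigrams.update(zip(toks, toks[1:]))

def lm_logprob(sent):
    toks = ["<s>"] + sent.split()
    # add-one smoothed bigram log-probability
    return sum(math.log((bigrams[(a, b)] + 1) / (unigrams[a] + len(unigrams)))
               for a, b in zip(toks, toks[1:]))

def agreement(sent):
    # for each token, the fraction of systems whose output contains it
    return sum(sum(t in s.split() for s in systems) / len(systems)
               for t in sent.split())

def score(sent, w_len=-0.1, w_agr=1.0, w_lm=0.5):
    return (w_len * len(sent.split()) + w_agr * agreement(sent)
            + w_lm * lm_logprob(sent))

best = max(candidates, key=score)
```

In the actual task the weights would be tuned on held-out data (e.g. toward BLEU) rather than fixed by hand.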
Many problems in sequential decision making and stochastic control naturally enjoy strong multiscale structure: sub-tasks are often assembled together to accomplish complex goals. However, systematically inferring and leveraging hierarchical structure has remained a longstanding challenge. We describe a fast multiscale procedure for repeatedly compressing or homogenizing Markov decision processes (MDPs), wherein a hierarchy of sub-problems at different scales is automatically determined. Coarsened MDPs are themselves independent, deterministic MDPs, and may be solved using any method. The multiscale representation delivered by the algorithm decouples sub-tasks from each other and improves conditioning. These advantages lead to potentially significant computational savings when solving a problem, as well as immediate transfer learning opportunities across related tasks.
['Jake V. Bouvrie', 'Mauro Maggioni']
Efficient solution of Markov decision problems with multiscale representations
940,206
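The abstract notes that coarsened MDPs "may be solved using any method". A minimal sketch of one such standard solver, value iteration, which applies unchanged to an original or a compressed MDP; the tiny two-state example is ours, not from the paper:

```python
import numpy as np

def value_iteration(P, R, gamma=0.95, tol=1e-8):
    """Standard value iteration. P[a, s, s'] are transition probabilities,
    R[a, s] is the reward for taking action a in state s."""
    V = np.zeros(P.shape[1])
    while True:
        Q = R + gamma * (P @ V)        # Q[a, s]
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new

# tiny two-state example (ours): action 0 stays put, action 1 switches state,
# and acting in state 1 pays reward 1
P = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.0, 1.0], [1.0, 0.0]]])
R = np.array([[0.0, 1.0], [0.0, 1.0]])
V, pi = value_iteration(P, R)          # optimal: go to state 1, then stay
```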
Fixed Spectrum Allocation (FSA) results in suboptimal spectrum utilization, unbalanced system loading and the inability to adapt to system traffic variations when employed in multi-Radio Access Technology (RAT) systems. This paper proposes the use of flexible spectrum management techniques in multi-RAT systems. A Dynamic Spectrum Management (DSM) framework that addresses the shortcomings of FSA, while ensuring the reliable operation of all co-deployed RATs, is presented. Simulations for a system co-deploying HSPA and LTE show that the proposed framework outperforms FSA schemes and is capable of adapting to system traffic variations.
['Ahmed Alsohaily', 'Elvino S. Sousa']
Dynamic Spectrum Management in Multi-Radio Access Technology (RAT) Cellular Systems
299,284
Presents a new approach to rendering arbitrary views of real-world 3D objects of complex shapes. We propose to represent an object by a sparse set of corresponding 2D views, and to construct any other view as a combination of these reference views. We show that this combination can be linear, assuming proximity of the views, and we suggest how the visibility of constructed points can be determined. Our approach makes it possible to avoid difficult 3D reconstruction, assuming only rendering is required. Moreover, almost no calibration of views is needed. We present preliminary results on real objects, indicating that the approach is feasible.
['Tomas Werner', 'Roger D. Hersch', 'Václav Hlaváč']
Rendering real-world objects using view interpolation
249,725
We present a new action recognition deep neural network which adaptively learns the best action velocities in addition to the classification. While deep neural networks have reached maturity for image understanding tasks, we are still exploring network topologies and features to handle the richer environment of video clips. Here, we tackle the problem of multiple velocities in action recognition, and provide state-of-the-art results for gesture recognition, on known and new collected datasets. We further provide the training steps for our semi-supervised network, suited to learn from huge unlabeled datasets with only a fraction of labeled examples.
['Otkrist Gupta', 'Dan Raviv', 'Ramesh Raskar']
Multi-velocity neural networks for gesture recognition in videos
693,713
This paper reports a study of the strategic network problem of routing military convoys between specific origin and destination pairs. Known as the Convoy Movement Problem (CMP), this problem is formulated as an integer linear program. The proposed mathematical model is evaluated on the basis of average number of iterations and average CPU times. LP-based lower bounds and heuristic based upper bounds were generated and used for evaluating the proposed model, particularly for large problem instances for which optimal solutions could not be obtained.
['P.N. Ram Kumar', 'T. T. Narendran']
Integer programming formulation for convoy movement problem
128,326
This paper studies whether a sequence of myopic blockings leads to a stable matching in the roommate problem. We prove that if a stable matching exists and preferences are strict, then for any unstable matching, there exists a finite sequence of successive myopic blockings leading to a stable matching. This implies that, starting from any unstable matching, the process of allowing a randomly chosen blocking pair to form converges to a stable matching with probability one. This result generalizes those of Roth and Vande Vate [Econometrica 58 (1990) 1475] and Chung [Games Econ. Behav. 33 (2000) 206] under strict preferences.
['Effrosyni Diamantoudi', 'Eiichi Miyagawa', 'Licun Xue']
Random Paths to Stability in the Roommate Problem
247,696
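The random-blocking-pair process described above is easy to simulate. The sketch below is our own illustration of the dynamics, not the authors' proof: starting from the empty matching, a randomly chosen blocking pair forms at each step (dissolving the partners' old matches) until no blocking pair remains.

```python
import random

def blocking_pairs(match, pref):
    """Pairs (i, j) who each prefer the other to their current partner.
    Being unmatched is worse than any listed partner."""
    def prefers(a, b):
        cur = match.get(a)
        if cur is None:
            return True
        return pref[a].index(b) < pref[a].index(cur)
    agents = sorted(pref)
    return [(i, j) for i in agents for j in agents
            if i < j and match.get(i) != j and prefers(i, j) and prefers(j, i)]

def random_path_to_stability(pref, seed=0, max_steps=10_000):
    rng = random.Random(seed)
    match = {}                      # everyone starts single
    for _ in range(max_steps):
        blocks = blocking_pairs(match, pref)
        if not blocks:
            return match            # stable: no blocking pair left
        i, j = rng.choice(blocks)   # a random blocking pair forms
        for a in (i, j):            # their old partners become single
            old = match.pop(a, None)
            if old is not None:
                match.pop(old, None)
        match[i], match[j] = j, i
    raise RuntimeError("no stable matching reached within step budget")

# 4-agent instance (ours) with a stable matching: 1-2 and 3-4 are mutual favorites
prefs = {1: [2, 3, 4], 2: [1, 4, 3], 3: [4, 1, 2], 4: [3, 2, 1]}
stable = random_path_to_stability(prefs)
```

Note that for instances with no stable matching (which exist in the roommate problem) the process cannot terminate, matching the theorem's hypothesis that a stable matching exists.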
E-Learning and experimental teaching, the former is a burgeoning and significant branch of the area of education, and the latter is a vital and practical means to combine knowledge and ability, theories and practice, to train the students? basic skills and to cultivate their research ability. Based on the experimental teaching of the course of Automatic Control Theories and with the combination of modern informational techniques and Automatic Control Theories, this dissertation studies the new experimental teaching patterns, probes into the teaching methods of cooperative learning, investigative learning and discovery learning. Overall, this dissertation devotes a lot to cultivating high-intelligence creative talents and improving the comprehensive intelligence of students.
['Lin Xu', 'Jianhui Wang', 'Xiaoke Fang', 'Yan Zheng', 'Dakuo He']
Research on Experimental Teaching Patterns Based on E-learning
124,923
Two-way relaying (TWR) transmission achieves virtual full duplexing, which has attracted much attention in recent years for its high spectral efficiency. In a TWR system, two source nodes exchange information with each other simultaneously via a relay node. This paper proposes a decode-and-forward two-way relaying (DF-TWR) scheme and a denoise-and-forward two-way relaying (DNF-TWR) scheme using non-coherent multiple differential phase-shift keying (M-DPSK) modulation, in which channel state information (CSI) is not required. Firstly, we design the relay decoder in the DF-TWR scheme, the relay denoising mapping function in the DNF-TWR scheme, and the common source decoder of the two schemes using the maximum likelihood (ML) principle. Then we analyse the end-to-end symbol error rate (SER) of the DNF-TWR scheme with M-DPSK modulation. The simulation results show that the BER performance of the DF-TWR and DNF-TWR schemes is similar. The BER performance of the two schemes with 2DPSK modulation is good, and the BER performance with M-DPSK modulation degrades as the modulation order M increases.
['Jie Fan', 'Lixin Li', 'Tao Bao', 'Huisheng Zhang']
Two-way relaying with differential MPSK modulation in virtual full duplexing system
699,611
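The single-link building block of the schemes above, non-coherent detection of binary DPSK without CSI, can be illustrated with a short Monte Carlo sketch. This is our illustration of differential detection over one AWGN link, not the paper's two-way relay decoders:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
bits = rng.integers(0, 2, n)

# differential encoding: a 1-bit flips the carrier phase, a 0-bit keeps it
sym = np.cumprod(np.where(bits == 1, -1.0, 1.0))
sym = np.concatenate(([1.0], sym))           # prepend the reference symbol

ebn0_db = 8.0                                # Eb/N0 in dB
sigma = np.sqrt(1.0 / (2.0 * 10 ** (ebn0_db / 10)))
noise = sigma * (rng.normal(size=sym.size) + 1j * rng.normal(size=sym.size))
r = sym + noise

# non-coherent detection: no CSI needed, just correlate consecutive samples
dec = (np.real(r[1:] * np.conj(r[:-1])) < 0).astype(int)
ber = np.mean(dec != bits)
```

For reference, theoretical DBPSK BER on AWGN is 0.5*exp(-Eb/N0), about 9e-4 at 8 dB, which the simulated `ber` should roughly match.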
This report describes the work done by the RFIA group at the Departamento de Sistemas Informaticos y Computacion of the Universidad Politecnica de Valencia for the 2005 edition of the CLEF Question Answering task. We participated in three monolingual tasks: Spanish, Italian and French, and in two cross-language tasks: Spanish to English and English to Spanish. Since this was our first participation, we focused our work on the passage-based search engine while using simple pattern-matching rules for the answer extraction phase. As regards the cross-language tasks, we resorted to the most common web translation tools.
['José Manuel Gómez Soriano', 'Empar Bisbal Asensi', 'Davide Buscaldi', 'Paolo Rosso', 'Emilio Sanchis Arnal']
Monolingual and Cross-language QA using a QA-oriented Passage Retrieval System
373,913
Call4Tender Challenges in Practice: a Field.
['Jorge Hochstetter', 'Cristina Cachero', 'Carlos Cares', 'Samuel Sepulveda']
Call4Tender Challenges in Practice: a Field.
746,118
Decreasing device sizes in integrated circuits lead to increasing vulnerability of hardware to errors resulting from radiation, crosstalk or power-supply disturbances. Especially in the automotive domain many tasks of electronics are safety relevant, so that solid error detection and correction is imperative. However, completely safe hardware is too expensive for the cost-sensitive automotive market. Hence, software safety mechanisms must deal with errors originating from hardware to ensure safe system behavior. To verify safe system behavior under the influence of hardware errors, fault injection is currently done at integration level, but software redesign at this late design stage should be avoided due to high costs. To detect code vulnerable to hardware errors early, we propose fault injection at unit level. Thanks to short simulation scenarios and good parallelization capability, even exhaustive fault injection is possible for multiple representative workloads. Using the results from the fault-injection campaigns, the software designer is able to consider reliability during the implementation phase and avoid costly redesigns.
['Petra R. Maier', 'Veit B. Kleeberger']
Embedded software reliability testing by unit-level fault injection
685,678
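An exhaustive unit-level fault-injection campaign of the kind proposed above can be sketched in a few lines for a toy software unit (the unit and the 16-bit word width are invented for illustration): flip each input bit once and compare against the golden run.

```python
def saturate(x):
    """Unit under test: clamp a sensor reading to the 8-bit range."""
    return max(0, min(255, x))

def bit_flip_campaign(func, x, bits=16):
    """Exhaustively flip each bit of the input once and return the bit
    positions whose flip corrupts the unit's output vs. the golden run."""
    golden = func(x)
    return [b for b in range(bits) if func(x ^ (1 << b)) != golden]

# for x = 300 the unit saturates to 255; most single-bit flips are masked
# by the clamp, but clearing bit 8 drops the input to 44 and corrupts the output
vulnerable = bit_flip_campaign(saturate, 300)
```

The same loop structure parallelizes trivially across inputs and bit positions, which is what makes exhaustive campaigns feasible at unit level.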
The Role of Embodiment on Children's Memory Recall through LEGO Robotics Activities.
['Carol M. Lu', 'John B. Black', 'Seokmin Kang']
The Role of Embodiment on Children's Memory Recall through LEGO Robotics Activities.
758,994
The machine-to-machine (M2M) service platform is being standardized to enable the intercommunication of devices, which is the basis for smart environments and intelligent transport systems (ITS) applications. In such environments, adapting the data exchange between devices and applications to the requirements of the application is a critical step in ensuring the functionality and reliability of the service. This article employs test-cases to analyze the data exchange of the oneM2M standard using an M2M-based automotive service delivery platform. Following the analysis, it proposes enhancements such as application-data-dependent criteria for data notification in combination with aggregation of different subscriptions to the same resource. Finally, the article discusses the proposed enhancements against the background of M2M design considerations and improved privacy.
['Markus Glaab', 'Woldemar F. Fuhrmann', 'Joachim Wietzke', 'Bogdan V. Ghita']
Toward enhanced data exchange capabilities for the oneM2M service platform
577,321
Design of a novel three-dimensional (3D) shape-preserving smoothing approach is described. Three-dimensional surfaces are smoothed without shrinkage artifacts typical for many other approaches. Using our new representation of the 3D surface, the process of smoothing performs substantially faster than direct convolution in the spatial domain. The approach shows good smoothing results that preserve 3D shape. The design can be easily extended to n-dimensional filtering.
['Chaohuang Zeng', 'I. Sonka']
Local three-dimensional shape-preserving smoothing without shrinkage
367,131
A general theoretical basis for the design of adaptive digital filters used for the equalization of the response of multichannel sound reproduction systems is described. The approach is applied to the two-channel case and then extended to deal with arbitrary numbers of channels. The intention is to equalize not only the response of the loudspeakers and the listening room but also the crosstalk transmission from right loudspeaker to left ear and vice versa. The formulation is a generalization of the Atal-Schroeder crosstalk canceler. However, the use of a least-squares approach to the digital filter design and of appropriate modeling delays potentially allows the effective equalization of nonminimum phase components in the transmission path. A stochastic gradient algorithm which facilitates the adaptation of the digital filters to the optimal solution, thereby providing the possibility of designing the filters in situ, is presented. Some experimental results for the two-channel case are given.
['Philip A. Nelson', 'Hareo Hamada', 'Stephen J. Elliott']
Adaptive inverse filters for stereophonic sound reproduction
2,681
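The core design step above, a least-squares FIR inverse with a modeling delay so that nonminimum-phase paths can still be equalized, can be sketched for a single channel. The paper generalizes this to the multichannel crosstalk matrix; the toy impulse response below is ours.

```python
import numpy as np

def ls_inverse_filter(h, n_taps=64, delay=32):
    """Least-squares FIR inverse of impulse response h. The modeling delay
    lets the filter approximate the anti-causal part of the ideal inverse
    that a nonminimum-phase h would otherwise require."""
    L = len(h) + n_taps - 1
    C = np.zeros((L, n_taps))            # convolution matrix: C @ g == h * g
    for i in range(n_taps):
        C[i:i + len(h), i] = h
    d = np.zeros(L)
    d[delay] = 1.0                       # target: a delayed unit pulse
    g, *_ = np.linalg.lstsq(C, d, rcond=None)
    return g

h = np.array([0.5, 1.0])                 # toy nonminimum-phase path (zero at z = -2)
g = ls_inverse_filter(h)
eq = np.convolve(h, g)                   # should closely approximate a delayed pulse
```

Without the delay (`delay=0`) the least-squares fit for this `h` degrades sharply, which is exactly the effect the modeling delay is introduced to avoid.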
The Crisis Modeler tool presented in this paper allows for exploring financial crisis predictions. Despite wide interest in crisis prediction, little attention has been given to generalizable modeling solutions, real-time implementations, thorough comparisons among methods and interactive interfaces to explore models. This paper combines many approaches used in predicting financial crises within a fully-fledged framework for modeling and evaluation, and provides an implementation of a general-purpose tool with a web-based interactive interface to explore model output. We illustrate the use of the Crisis Modeler with a case study on European banks, including a horse race of methods and investigations of different specifications. The case study illustrates the versatility and suitability of the tool for supporting exploration and communication of models for crisis prediction.
['Markus Holopainen', 'Peter Sarlin']
Crisis Modeler: A Tool for Exploring Crisis Predictions
611,735
Dense deployment of femtocells has proved to be an effective solution to handle increasing demands for indoor mobile data. A femtocell not only helps reduce operational and capital expenditure but also improves the energy efficiency of the network. Femtocells can increase spectrum efficiency manyfold by reusing the available spectrum for indoor users. However, traditional cell selection schemes limit the user count under a femtocell. Additionally, dense deployment of femtocells comes at the cost of increased interference to neighbouring femtocell and macrocell users. In this paper, we first analyse various cell selection schemes to improve user association and resource utilization in femtocells. Then, we focus on improving the energy efficiency and throughput of femtocell-based cellular networks. For this, we formulate an optimization problem that efficiently reuses macrocell spectrum in femtocells with power control while satisfying macrocell users' interference and rate-loss constraints.
['Rahul Thakur', 'Rajkarn Singh', 'C. Siva Ram Murthy']
An energy efficient framework for user association and power allocation in HetNets with interference and rate-loss constraints
883,638
This paper presents a haptic rendering method for drilling into femur bone with graded stiffness. Volume rendering is preferred over surface rendering in drilling or burr simulation because volume rendering can contain information such as the density and rigidity of each voxel. However, it is difficult to implement real-time graphics and haptic rendering because of the large computational workload. Therefore, we propose surface-data-based haptic rendering of the drilling process for stiffness-graded material. Contact surface update and bone erosion algorithms are suggested to implement the drilling process. The proposed algorithms are adapted to a closed reduction and internal fixation surgery simulator. The proposed method allows the user of the simulation to feel different forces according to the drilled depth.
['Jang Ho Cho', 'Hoeryong Jung', 'Kyungno Lee', 'Doo Yong Lee', 'Hyung Soo Ahn']
Haptic Rendering of Drilling into Femur Bone with Graded Stiffness
78,361
Iterative Cartesian Genetic Programming: Creating General Algorithms for Solving Travelling Salesman Problems
['Patricia Ryser-Welch', 'Julian F. Miller', 'Jerry Swan', 'Martin A. Trefzer']
Iterative Cartesian Genetic Programming: Creating General Algorithms for Solving Travelling Salesman Problems
729,054
Identification of high-efficiency 3'GG gRNA motifs in indexed FASTA files with ngg2.
['Elisha D O Roberson']
Identification of high-efficiency 3'GG gRNA motifs in indexed FASTA files with ngg2.
589,118
It is quite natural to define a software language as an extension of a base language. A compiler builder usually prefers to work on a representation in the base language, while programmers prefer to program in the extended language. As we define a language extension, we want to ensure that desugaring it into the base language is provably sound. We present a lightweight approach to verifying soundness by embedding the base language and its extensions in Haskell. The embedding uses the final tagless style, encoding each language as a type class. As a result, combination and enhancement of language extensions are expressed in a natural way. Soundness of the language extension corresponds to well-typedness of the Haskell terms, so no extra tool but the compiler is needed.
['Alejandro Serrano', 'Jurriaan Hage']
Lightweight soundness for towers of language extensions
959,753
The aim of this review was to assess the current viable technologies for wireless power delivery and data transmission through metal barriers. Using such technologies, sensors enclosed in hermetic metal containers can be powered and can communicate with exterior power sources without penetration of the metal wall for wire feed-throughs. In this review, we first discuss the significant and essential requirements for through-metal-wall power delivery and data transmission and then we: (1) describe three electromagnetic coupling based techniques reported in the literature, which include inductive coupling, capacitive coupling, and magnetic resonance coupling; (2) present a detailed review of wireless ultrasonic through-metal-wall power delivery and/or data transmission methods; (3) compare various ultrasonic through-metal-wall systems in modeling, transducer configuration and communication mode with sensors; (4) summarize the characteristics of electromagnetic-based and ultrasound-based systems, and evaluate the challenges and development trends. We conclude that electromagnetic coupling methods are suitable for power delivery and data transmission through thin non-ferromagnetic metal walls at relatively low data rates; piezoelectric-transducer-based ultrasonic systems are particularly advantageous in achieving high power transfer efficiency and high data rates; and the combination of more than one technique may provide a more practical and reliable solution for long-term operation.
['Dingxin Yang', 'Zheng Hu', 'Hong Zhao', 'Haifeng Hu', 'Yun-zhe Sun', 'Bao-jian Hou']
Through-Metal-Wall Power Delivery and Data Transmission for Enclosed Sensors: A Review
576,566
In this article we present a general and robust approach to the problem of close-range 3D reconstruction of objects from stereo correspondence of luminance patches. The method is largely independent of the camera geometry, and can employ an arbitrary number of CCD cameras. The robustness of the approach is due to the physicality of the matching process, which is performed in the 3D space. In fact, both 3D location and local orientation of the surface patches are estimated, so that the geometric distortion can be accounted for. The method takes into account the viewer-dependent radiometric distortion as well. The method has been implemented with a calibrated set of three standard TV-resolution CCD cameras. Experiments on a variety of real scenes have been conducted with satisfactory results. Quantitative and qualitative results are reported.
['Federico Pedersini', 'Pasquale Pigazzini', 'Augusto Sarti', 'Stefano Tubaro']
3D area matching with arbitrary multiview geometry
492,252
Purpose: This study aims to determine the technical requirements for copyright protection of theses and dissertations and to propose a model for application in Iran's National System for Theses and Dissertations (INSTD).
Design/methodology/approach: This study used a mixed research methodology. Grounded theory was used in the qualitative phase, and a researcher-made checklist was applied in the quantitative phase to survey the status of the INSTD. The research population included the INSTD as well as six information specialists and copyright experts. Data were analysed using open, axial and selective coding.
Findings: Based on data extracted from the completed checklists, some technical requirements had already been provided in the system. The technical requirements that interviewees pointed out fell into the two main classes explored in the grounded-theory phase: technical components and technical-software infrastructures. The individual categories included access control, copy control, technical-software challenges, protection standards, hypertext transfer protocol secure, certificate authority, documentation of thesis and dissertation information, the use of digital object identifiers, copy detection systems, integrated thesis and dissertation systems, digital rights management systems and electronic copyright management systems.
Research limitations/implications: Given the subject of this study, only the technical aspect was investigated; other aspects were not included. In addition, electronic theses and dissertations (ETD) providers were not well aware of copyright issues.
Practical implications: Using technical requirements with high security is effective in the INSTD for gaining the trust of authors and encouraging them to deposit their ETDs.
Social implications: Increased use of the system encourages authors to be more innovative in conducting their research.
Originality/value: Considering the continued violation of copyright in electronic databases, the INSTD needs to apply technical requirements for copyright protection and to regulate users' access to the information in theses and dissertations.
['Zeinab Papi', 'Saeid Rezaei Sharifabadi', 'Sedigheh Mohammadesmaeil', 'Nadjla Hariri']
Technical requirements for copyright protection of electronic theses and dissertations in INSTD: A grounded theory study
981,021
Studies of the stimulating effect of ultrasound as a tactile display have recently become more intensive in the haptic domain. In this paper, we present the design, development, and evaluation of Haptogram; a system designed to provide point-cloud tactile display via acoustic radiation pressure. A tiled 2-D array of ultrasound transducers is used to produce a focal point that is animated to produce arbitrary 2-D and 3-D tactile shapes. The switching speed is very high, so that humans feel the distributed points simultaneously. The Haptogram system comprises a software component and a hardware component. The software component enables users to author and/or select a tactile object, create a point-cloud representation, and generate a sequence of focal points to drive the hardware. The hardware component comprises a tiled 2-D array of ultrasound transducers, each driven by an FPGA. A quantitative analysis is conducted to measure Haptogram's ability to display various tactile shapes, including a single point, 2-D shapes (a straight line and a circle) and a 3-D object (a hemisphere). Results show that all displayed tactile objects are perceivable by the human skin (an average of 2.65 kPa for 200 focal points). A usability study is also conducted to evaluate the ability of humans to recognize 2-D shapes. Results show that the recognition rate was well above the chance level (average of 59.44% and standard deviation of 12.75%) while the recognition time averaged 13.87 s (standard deviation of 3.92 s).
['Georgios Korres', 'Mohamad Eid']
Haptogram: Ultrasonic Point-Cloud Tactile Stimulation
888,968
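Producing a focal point with a phased transducer array amounts to driving each element with a phase advance that compensates its extra path length to the focus. A sketch under an assumed geometry (10x10 elements, 10 mm pitch, 40 kHz; all parameters are illustrative choices, not the Haptogram hardware's):

```python
import numpy as np

c, f = 346.0, 40e3                       # speed of sound (m/s), drive frequency (Hz)
k = 2 * np.pi * f / c                    # wavenumber
pitch = 0.01                             # 10 mm element pitch (assumed)
xs, ys = np.meshgrid(np.arange(10) * pitch, np.arange(10) * pitch)

focus = np.array([0.045, 0.045, 0.15])   # focal point 15 cm above array center
d = np.sqrt((xs - focus[0]) ** 2 + (ys - focus[1]) ** 2 + focus[2] ** 2)

# phase advance equal to each element's extra path length, so every
# wavefront arrives at the focus in phase
phases = k * (d - d.min())

# far-field-style check: coherent sum of all element contributions at the focus
field_at_focus = np.abs(np.sum(np.exp(1j * (phases - k * d)) / d))
```

Animating the focal point, as in the point-cloud display above, then reduces to recomputing `phases` for each target point in the sequence.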
When there is no division circuit available, the arithmetical function of division is normally performed by a library subroutine. The library subroutine normally allows both the divisor and the dividend to be variables, and requires the execution of hundreds of assembly instructions. This correspondence provides a fast algorithm for performing the integer division of a variable by a predetermined divisor. Based upon this algorithm, an efficient division routine has been constructed for each odd divisor up to 55. These routines may be implemented in assembly languages, in microcodes, and in special-purpose circuits.
['Shuo-Yen Robert Li']
Fast Constant Division Routines
511,322
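A closely related standard technique for dividing by a predetermined divisor is reciprocal multiplication: one widening multiply plus a shift. The paper constructs per-divisor shift-and-add routines, so the sketch below illustrates the general idea rather than the paper's specific routines.

```python
def reciprocal(d, bits=32):
    """m = ceil(2**(2*bits) / d). Then floor(x/d) == (x*m) >> (2*bits)
    for every 0 <= x < 2**bits: the rounding error m*d - 2**(2*bits) < d
    contributes less than 2**-bits to x*m / 2**(2*bits), too small to
    change the floor."""
    assert 0 < d < 2 ** bits
    return -(-(2 ** (2 * bits)) // d)    # ceiling division

def div_const(x, m, bits=32):
    # in hardware/microcode: one widening multiply and one shift
    return (x * m) >> (2 * bits)

m23 = reciprocal(23)                     # precomputed once for divisor 23
q = div_const(1_000_000, m23)            # == 1_000_000 // 23
```

Precomputing `m` once per divisor mirrors the paper's setting of one routine per predetermined (odd) divisor.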
This paper introduces a new gate sizing approach with accurate delay evaluation. The approach solves gate sizing problems by iterating local sizing results from linear programming within small variable ranges of gate sizes. In each iterative step, variable ranges of gate sizes are updated according to the result from the previous step. Solutions with accurate delay evaluation, which consider input signal slopes and separately evaluate rising and falling delays, are obtained after several iterative steps. A speedup technique is used to pick out gates actually involved in each local sizing step so as to reduce CPU time. Experiments on sample circuits show that our approach can provide solutions with smaller circuit area than conventional approaches for the same circuit delay, or provide solutions under tight delay constraints that conventional approaches cannot reach. Moreover, our approach is faster than the conventional approaches for most circuits, especially under loose delay constraints.
['Guangqiu Chen', 'Hidetoshi Onodera', 'Keikichi Tamaru']
An iterative gate sizing approach with accurate delay evaluation
361,030
A class of fourth-order partial differential equations (PDEs) are proposed to optimize the trade-off between noise removal and edge preservation. The time evolution of these PDEs seeks to minimize a cost functional which is an increasing function of the absolute value of the Laplacian of the image intensity function. Since the Laplacian of an image at a pixel is zero if the image is planar in its neighborhood, these PDEs attempt to remove noise and preserve edges by approximating an observed image with a piecewise planar image. Piecewise planar images look more natural than step images which anisotropic diffusion (second order PDEs) uses to approximate an observed image. So the proposed PDEs are able to avoid the blocky effects widely seen in images processed by anisotropic diffusion, while achieving the degree of noise removal and edge preservation comparable to anisotropic diffusion. Although both approaches seem to be comparable in removing speckles in the observed images, speckles are more visible in images processed by the proposed PDEs, because piecewise planar images are less likely to mask speckles than step images and anisotropic diffusion tends to generate multiple false edges. Speckles can be easily removed by simple algorithms such as the one presented in this paper.
['Yu-Li You', 'Mostafa Kaveh']
Fourth-order partial differential equations for noise removal
43,482
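The fourth-order evolution described above can be sketched with an explicit finite-difference scheme. The step size, conductance threshold, and test image below are our illustrative choices, not the paper's experiments:

```python
import numpy as np

def laplacian(u):
    # 5-point Laplacian with periodic boundaries
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)

def fourth_order_step(u, k=30.0, dt=0.02):
    """One explicit step of u_t = -Lap( c(|Lap u|) * Lap u ) with
    c(s) = 1 / (1 + (s/k)**2): smoothing stalls where |Lap u| is large
    (edges), so the flow relaxes toward a piecewise planar image."""
    L = laplacian(u)
    c = 1.0 / (1.0 + (np.abs(L) / k) ** 2)
    return u - dt * laplacian(c * L)

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[:, 32:] = 100.0   # piecewise-planar test image
noisy = clean + rng.normal(0.0, 5.0, clean.shape)

u = noisy.copy()
for _ in range(100):
    u = fourth_order_step(u)
```

The small `dt` reflects the stiff explicit stability limit of fourth-order diffusion (roughly dt <= 2/64 for the unit-spacing biharmonic operator), a practical cost of going beyond second-order schemes.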
The Wisdom of Sustainable Communities in the Digital Era: The Case of Efficient Energy Management
['Alessandro Tavoni', 'Luigi Telesca']
The Wisdom of Sustainable Communities in the Digital Era: The Case of Efficient Energy Management
395,587
Modern building HVAC systems consist of thousands of sensors and actuators networked together for control and monitoring of operations. Traditional Building Management Systems (BMS) are vertically integrated systems usable by trained operators. Recent advances in the literature have democratized this information by integrating information from diverse sources and making it available to third-party developers through standard APIs. We present Building Control Engine (BCE) that allows developers to safely control the actuators in the HVAC system. BCE provides an abstraction for handling the growing safety and reliability concerns when HVAC systems are made publicly available for actuation control. We focus on the Variable Air Volume (VAV) box in the HVAC system, and develop BCE by analyzing and actuating a VAV in a real building. We show that ensuring reliability and safety of operation is non-trivial as actuators have hierarchical dependencies. BCE captures this domain knowledge and ensures developers cannot cause damage to actuators by checking for both frequency and range of control. In addition, BCE allows static checking of control sequences and rolls back the actuators to safe operation in case of application failure.
['Jason Koh', 'Bharathan Balaji', 'Rajesh E. Gupta', 'Yuvraj Agarwal']
Poster Abstract: Controlling Actuation in Central HVAC Systems in Buildings
623,022
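The frequency-and-range checking mentioned above can be illustrated with a minimal guard object. The class and method names below are our own invention, not the paper's API; an explicit clock argument keeps the sketch deterministic.

```python
class ActuatorGuard:
    """Hypothetical sketch of BCE-style write checks: reject setpoints
    outside a safe range and enforce a minimum interval between writes
    to one actuator."""

    def __init__(self, lo, hi, min_interval_s):
        self.lo, self.hi = lo, hi
        self.min_interval_s = min_interval_s
        self._last_write = None          # timestamp of last accepted write

    def try_write(self, value, now_s):
        if not (self.lo <= value <= self.hi):
            return False, "outside safe range"
        if (self._last_write is not None
                and now_s - self._last_write < self.min_interval_s):
            return False, "rate limit"
        self._last_write = now_s         # accepted: record the write time
        return True, "ok"

# e.g. a supply air temperature setpoint in deg F, at most one write per minute
guard = ActuatorGuard(lo=50.0, hi=90.0, min_interval_s=60.0)
```

A real system would additionally encode the hierarchical dependencies between actuators and support rollback, as the abstract describes.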
We consider the problem of control traffic overhead in MANETs with long-lived connections, operating under a reactive routing protocol (e.g. AODV). In such settings, control traffic overhead can be traced principally to connection link failures, which trigger expensive global route discoveries. In this paper, we introduce a route maintenance scheme developed with the objective of reducing global route discoveries in such settings. The proposed scheme decreases the expected number of route discovery attempts by taking preemptive action to counteract impending link disconnections due to node movement. The proposed scheme was implemented as an extension of AODV in ns2 and compared with standard AODV under different network regimes. Through the analysis of data derived from extensive simulations, we demonstrate that the proposed scheme significantly decreases overall control traffic while maintaining comparable packet delivery rates, at the cost of only very minor degradation in path optimality.
['Zeki Bilgin', 'Bilal Khan', 'Ala I. Al-Fuqaha']
Using connection expansion to reduce control traffic in MANETs
138,362
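Preemptive action against impending link breaks presupposes an estimate of a link's residual lifetime. Under a constant-velocity assumption (our simplification; the paper's mechanism is richer), this reduces to solving a quadratic in time:

```python
import numpy as np

def link_residual_time(p_rel, v_rel, radio_range):
    """Time until two nodes with current relative position p_rel and
    constant relative velocity v_rel drift out of radio range R:
    solve |p_rel + v_rel * t| = R, i.e.
    (v.v) t^2 + 2 (p.v) t + (p.p - R^2) = 0, taking the positive root."""
    a = float(np.dot(v_rel, v_rel))
    if a == 0.0:
        return float("inf")              # no relative motion: link persists
    b = 2.0 * float(np.dot(p_rel, v_rel))
    c = float(np.dot(p_rel, p_rel)) - radio_range ** 2
    disc = b * b - 4.0 * a * c           # positive while nodes are in range (c < 0)
    return (-b + np.sqrt(disc)) / (2.0 * a)

# nodes 50 m apart, separating at 10 m/s, 100 m range -> 5 s until disconnection
t = link_residual_time(np.array([50.0, 0.0]), np.array([10.0, 0.0]), 100.0)
```

A maintenance scheme can compare this estimate against a threshold and repair the route locally before the global discovery would otherwise be triggered.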
Telecom big data contains abundant user information. This paper employs telecom data and proposes a user clustering and influence power ranking scheme. The scheme is implemented in three stages: user portrait analysis, user clustering analysis, and ranking of user influence power. Experimental results show that marketing promotion effectiveness based on this scheme is improved significantly, while advertising costs are also considerably reduced.
['Yuwei Jia', 'Kun Chao', 'Xinzhou Cheng', 'Mingqiang Yuan', 'Mingjun Mu']
Big data based user clustering and influence power ranking
945,019
Does the emotional content of a robot's speech affect how people teach it? In this experiment, participants were asked to demonstrate several "dances" for a robot to learn. Participants moved their bodies in response to instructions displayed on a screen behind the robot. Meanwhile, the robot faced the participant and appeared to emulate the participant's movements. After each demonstration, the robot received an accuracy score and the participant chose whether or not to demonstrate that dance again. Regardless of the participant's input, however, the robot's dancing and the scores it received were arranged in advance and constant across all participants. The only variation between groups in this study was what the robot said in response to its scores. Participants saw one of three conditions: appropriate emotional responses, often-inappropriate emotional responses, or apathetic responses. Participants that taught the robot with appropriate emotional responses demonstrated the dances, on average, significantly more frequently and significantly more accurately than participants in the other two conditions.
['Dan Leyzberg', 'Eleanor R. Avrunin', 'Jenny Liu', 'Brian Scassellati']
Robots that express emotion elicit better human teaching
532,268
Syntactic knowledge is important for pronoun resolution. Traditionally, the syntactic information for pronoun resolution is represented in terms of features that have to be selected and defined heuristically. In the paper, we propose a kernel-based method that can automatically mine the syntactic information from the parse trees for pronoun resolution. Specifically, we utilize the parse trees directly as a structured feature and apply kernel functions to this feature, as well as other normal features, to learn the resolution classifier. In this way, our approach avoids the efforts of decoding the parse trees into the set of flat syntactic features. The experimental results show that our approach can bring significant performance improvement and is reliably effective for the pronoun resolution task.
['Xiaofeng Yang', 'Jian Su', 'Chew Lim Tan']
Kernel-Based Pronoun Resolution with Structured Syntactic Knowledge
30,762
Design of Intelligent Laboratory Based on IOT
['Jiya Tian', 'Xiaoguang Li', 'Dongfang Wan', 'Nan Li', 'Yulan Wang']
Design of Intelligent Laboratory Based on IOT
946,587
Some efficient visualizations (such as treemaps) have been proposed for trees, but the interaction they provide to explore and access data is often poor, especially for very large trees. We have designed a consistent set of navigation techniques that makes it possible to use treemaps as zoomable interfaces. We introduce structure-aware navigation, the property of using the structure of the displayed information to guide navigation, a property that our interaction techniques share.
['Renaud Blanch', 'Eric Lecolinet']
Zoomable treemaps: multi-scale interaction techniques for treemaps
79,446
We apply the leaving-one-out concept to the estimation of 'small' probabilities, i.e., the case where the number of training samples is much smaller than the number of possible classes. After deriving the Turing-Good formula in this framework, we introduce several specific models in order to avoid the problems of the original Turing-Good formula. These models are the constrained model, the absolute discounting model and the linear discounting model. These models are then applied to the problem of bigram-based stochastic language modeling. Experimental results are presented for a German and an English corpus.
['Hermann Ney', 'Ute Essen', 'Reinhard Kneser']
On the estimation of 'small' probabilities by leaving-one-out
377,285
Motivation: RBSDesigner predicts the translation efficiency of existing mRNA sequences and designs synthetic ribosome binding sites (RBSs) for a given coding sequence (CDS) to yield a desired level of protein expression. The program implements the mathematical model for translation initiation described in Na et al. (Mathematical modeling of translation initiation for the estimation of its efficiency to computationally design mRNA sequences with a desired expression level in prokaryotes. BMC Syst. Biol., 4, 71). The program additionally incorporates the effect on translation efficiency of the spacer length between a Shine–Dalgarno (SD) sequence and an AUG codon, which is crucial for the incorporation of fMet-tRNA into the ribosome. RBSDesigner provides a graphical user interface (GUI) for the convenient design of synthetic RBSs. Availability: RBSDesigner is written in Python and Microsoft Visual Basic 6.0 and is publicly available as precompiled stand-alone software on the web (http://rbs.kaist.ac.kr). Contact: [email protected]
['Dokyun Na', 'Doheon Lee']
RBSDesigner: software for designing synthetic ribosome binding sites that yields a desired level of protein expression
166,881
While educational data mining has often focused on modeling behavior at the level of individual students, we consider developing statistical models to give us insight into the dynamics of student populations. In this talk, we consider two case studies in this vein. The first involves analyzing the evolution of gender balance in a college computer science program, showing that focusing on percentages of underrepresented groups in the overall population may not always provide an accurate portrayal of the impact of various program changes. We propose a new statistical model based on Fisher's Noncentral Hypergeometric Distribution that better captures how program changes are impacting the dynamics of gender balance in a population, especially in the case where the overall population is rapidly increasing (as has been the case in CS in recent years). Our second study looks at the performance of student populations in an introductory college programming course during the past eight years to better understand the evolving mix of students' abilities given the rapid growth in the number of students taking CS courses. Often accompanying such growth is a concern from faculty that the additional students choosing to pursue computing may not have the same aptitude for the subject as was seen in prior student populations. To directly address this question, we present a statistical analysis of students' performance using mixture modeling. Importantly, in this setting many variables that would normally confound such a study are directly controlled for. We find that the distribution of student performance during this period, as reflected in their programming assignment scores, remains remarkably stable despite the large growth in course enrollments. The results of this analysis also show how conflicting perceptions of students' abilities among faculty can be consistently explained. The presentation includes work done jointly with Sarah Evans, Chris Piech, and Katie Redmond.
['Mehran Sahami']
Statistical Modeling to Better Understand CS Students
830,083
This paper focuses on large scale experiments with Java and asynchronous iterative applications. In those applications, tasks are dependent and the use of distant clusters may be difficult, for example, because of latencies, heterogeneity, and synchronizations. Experiments have been conducted on the Grid'5000 platform using a new version of the Jace environment. We study the behavior of an application (the Poisson problem) with the following experimentation conditions: one and several sites, large number of processors (from 80 to 500), different communication protocols (RMI, sockets and NIO), synchronous and asynchronous model. The results we obtained, demonstrate both the scalability of the Jace environment and its ability to support wide-area deployments and the robustness of asynchronous iterative algorithms in a large scale context.
['Jacques M. Bahi', 'Raphaël Couturier', 'David Laiymani', 'Kamel Mazouzi']
Java and asynchronous iterative applications: large scale experiments
477,008
This paper addresses the problem of learning archetypal structural models from examples. To this end we define a generative model for graphs where the distribution of observed nodes and edges is governed by a set of independent Bernoulli trials with parameters to be estimated from data in a situation where the correspondences between the nodes in the data graphs and the nodes in the model are not known ab initio and must be estimated from local structure. This results in an EM-like approach where we alternate the estimation of the node correspondences with the estimation of the model parameters. Parameter estimation and model order selection is addressed within a Minimum Message Length (MML) framework.
['Andrea Torsello', 'David L. Dowe']
Supervised learning of a generative model for edge-weighted graphs
40,167
In this paper, we investigate the problem of compressed learning, i.e. learning directly in the compressed domain. In particular, we provide tight bounds demonstrating that the linear kernel SVMs classifier in the measurement domain, with high probability, has true accuracy close to the accuracy of the best linear threshold classifier in the data domain. Furthermore, we indicate that for a family of well-known deterministic compressed sensing matrices, compressed learning is provided on the fly. Finally, we support our claims with experimental results in the texture analysis application.
['A. Robert Calderbank', 'Sina Jafarpour']
Finding needles in compressed haystacks
232,344
The human capacity for vibrotactile frequency discrimination has been reported to be constant across a wide range of vibration frequencies. However, vibrotactile detection depends on two different receptors: the Meissner corpuscle for low frequencies and the Pacinian corpuscle for high frequencies. To examine the impact of the input pathway on the frequency discrimination task, discrimination capacity was compared directly using supra-threshold and near-threshold stimuli, since near-threshold stimuli mainly activate one input pathway. Each standard frequency (15, 30, 60, 120, 240 and 480 Hz), at amplitudes of 6 dB and 16 dB above detection threshold, was paired with a series of comparison frequencies, and discrimination capacity was quantified by the discriminable frequency increment (Δf) and the Weber fraction (Δf/f). The results revealed constant and good discrimination capacities for strong stimulus conditions but discrete and poor capacities for weak stimulus conditions. Near-threshold stimuli produced a marked impairment in vibrotactile discrimination at high standard frequencies around 240 Hz, probably detected by the Pacinian corpuscle, but relatively little effect at lower frequencies, mainly detected by the Meissner corpuscle.
['Scinob Kuroki', 'Junji Watanabe', 'Shin’ya Nishida']
Dissociation of vibrotactile frequency discrimination performances for supra-threshold and near-threshold vibrations
619,107
This paper describes a 5-transistor (5T) SRAM bitcell that uses a novel asymmetric sizing approach to achieve increased read stability. Measurements of a 32 kb 5T SRAM in a 45nm bulk CMOS technology validate the design, showing read functionality below 0.5V. The 5T bitcell has lower write margin than the 6T, but measurements of the 45nm 5T array confirm that a write assist method restores comparable writability with a 6T down to 0.7 V.
['Satyanand Nalam', 'Benton H. Calhoun']
Asymmetric sizing in a 45nm 5T SRAM to improve read stability over 6T
375,381
Tracking is one of the most important but still difficult tasks in computer vision and pattern recognition. The main difficulties in the tracking field are appearance variation and occlusion. Most traditional tracking methods set parameters or templates for tracking target objects in advance, and these must be modified accordingly when conditions change. Thus, we propose a new and robust tracking method that uses a Fully Convolutional Network (FCN) to obtain an object probability map and Dynamic Programming (DP) to seek the globally optimal path through all frames of a video. Our proposed method solves the object appearance variation problem with the use of an FCN and deals with occlusion by DP. We show that our method is effective in tracking various single objects through video frames.
['Jin-Ho Lee', 'Brian Kenji Iwana', 'Shouta Ide', 'Seiichi Uchida']
Globally Optimal Object Tracking with Fully Convolutional Networks
962,269
A Weighted Combination of Speech with Text-based Models for Arabic Diacritization.
['Aisha S. Azim', 'Xiaoxuan Wang', 'Khe Chai Sim']
A Weighted Combination of Speech with Text-based Models for Arabic Diacritization.
807,054
In the Internet of Things (IoT), Internet-connected things provide an influx of data and resources that offer unlimited possibility for applications and services. Smart City IoT systems refer to the things that are distributed over wide physical areas covering a whole city. While the new breed of data and resources looks promising, building applications in such large scale IoT systems is a difficult task due to the distributed and dynamic natures of entities involved, such as sensing, actuating devices, people and computing resources. In this paper, we explore the process of developing Smart City IoT applications from a coordination-based perspective. We show that a distributed coordination model that oversees such a large group of distributed components is necessary in building Smart City IoT applications. In particular, we propose Adaptive Distributed Dataflow, a novel Dataflow-based programming model that focuses on coordinating city-scale distributed systems that are highly heterogeneous and dynamic.
['Nam Ky Giang', 'Rodger Lea', 'Michael Blackstock', 'Victor C. M. Leung']
On Building Smart City IoT Applications: a Coordination-based Perspective
952,906
Attention-based LSTM Network for Cross-Lingual Sentiment Classification.
['Xinjie Zhou', 'Xiaojun Wan', 'Jianguo Xiao']
Attention-based LSTM Network for Cross-Lingual Sentiment Classification.
959,686
In this paper, a novel approach to visual self-localization in an unknown environment is presented. The proposed method makes possible the recognition of new landmarks without using GPS or any other communication links or pre-training. An image-based self-localization technique is used to automatically label landmarks that are detected in real-time using a computationally efficient and recursive algorithm. Real-time experiments are carried out in an outdoor environment at Lancaster University using a real mobile robot, Pioneer 3DX, in order to build a map of the local environment without using any communication links. The presented experimental results in real situations show the effectiveness of the proposed method.
['Pouria Sadeghi-Tehran', 'Sasmita Behera', 'Plamen Angelov', 'Javier Andreu']
Autonomous visual self-localization in completely unknown environment
353,334
Self-adaptive software is a closed-loop system, since it continuously monitors its context (i.e. environment) and/or self (i.e. software entities) in order to adapt itself properly to changes. We believe that representing adaptation goals explicitly and tracing them at run-time are helpful in decision making for adaptation. While goal-driven models are used in requirements engineering, they have not yet been utilized systematically for run-time adaptation. To address this research gap, this article focuses on the deciding process in self-adaptive software, and proposes the Goal-Action-Attribute Model (GAAM). An action selection mechanism, based on cooperative decision making, is also proposed that uses GAAM to select the appropriate adaptation action(s). The emphasis is on building a light-weight and scalable run-time model which needs less design and tuning effort compared with a typical rule-based approach. The GAAM and action selection mechanism are evaluated using a set of experiments on a simulated multi-tier enterprise application, and two sample ordinal and cardinal action preference lists. The evaluation is accomplished based on a systematic design of experiment and a detailed statistical analysis in order to investigate several research questions. The findings are promising, considering the obtained results and other impacts of the approach on engineering self-adaptive software. Although one case study is not enough to generalize the findings, and the proposed mechanism does not always outperform a typical rule-based approach, the lower effort, scalability, and flexibility of GAAM are remarkable. Copyright © 2011 John Wiley & Sons, Ltd.#R##N##R##N#(Objective and goal are different in some contexts. For instance, Keeney et al. use objectives in a higher level of abstraction [14]. In this article, without loss of generality, we assume that goals and objectives are the same. Therefore, we use them interchangeably hereafter.)
['Mazeiar Salehie', 'Ladan Tahvildari']
Towards a goal-driven approach to action selection in self-adaptive software
4,900
A cooperative approach to model information systems.
['Nahla Haddar', 'Garoourfaïez', 'Abdelmajid Ben Hamadou', 'Charles François Ducateau']
A cooperative approach to model information systems.
762,617
This paper treats a context-based multimedia content management system (MCMS), whose various types of content can easily be gathered from anywhere at any time using mobile phones, and stored in a web server as a multimedia database. In this paper, the authors propose a framework of software components for developing location-aware web applications that use multimedia contents stored in the multimedia database on the web server. The authors also introduce several practical location-aware web applications, e.g., a Google Maps based sight-seeing information system, a Google Maps based web Natural Science Dictionary, etc., that have already been developed using the proposed components. One of the prospective applications of the proposed framework is a Google Maps based Life-log system. Data stored by many users with their mobile phones are regarded as Life-log data because the data include location data from GPS, date/time data, and other related multimedia data such as picture images, movies, recorded sounds and texts. By analyzing them using data-mining methods, it is possible to extract activity patterns of the users that are very useful for various web services like recommendation systems. As a theoretical aspect of the paper, the authors discuss data analyzing techniques to be used in the proposed framework.
['Shuhei Kumamoto', 'Shigeru Takano', 'Yoshihiro Okada']
Context-based Multimedia Content Management Framework and Its Location-aware Web Applications
156,160
We live in a world in which there is a great disparity between the lives of the rich and the poor. Technology offers great promise in bridging this gap. In particular, wireless technology unfetters developing communities from the constraints of infrastructure providing a great opportunity to leapfrog years of neglect and technological waywardness. In this paper, we highlight the role of resource pooling for wireless networks in the developing world. Resource pooling involves (i) abstracting a collection of networked resources to behave like a single unified resource pool and (ii) developing mechanisms for shifting load between the various parts of the unified resource pool. The popularity of resource pooling stems from its ability to provide resilience, high utilization, and flexibility at an acceptable cost. We show that ``resource pooling'', which is very popular in its various manifestations, is the key unifying principle underlying a diverse number of successful wireless technologies (such as white space networking, community networks, etc.). We discuss various applications of resource pooled wireless technologies and provide a discussion on open issues.
['Junaid Qadir', 'Arjuna Sathiaseelan', 'Liang Wang', 'Jon Crowcroft']
“Resource Pooling” for Wireless Networks: Solutions for the Developing World
807,867
We propose a novel approach to optimize Partially Observable Markov Decision Processes (POMDPs) defined on continuous spaces. To date, most algorithms for model-based POMDPs are restricted to discrete states, actions, and observations, but many real-world problems such as, for instance, robot navigation, are naturally defined on continuous spaces. In this work, we demonstrate that the value function for continuous POMDPs is convex in the beliefs over continuous state spaces, and piecewise-linear convex for the particular case of discrete observations and actions but still continuous states. We also demonstrate that continuous Bellman backups are contracting and isotonic, ensuring the monotonic convergence of value-iteration algorithms. Relying on those properties, we extend the algorithm, originally developed for discrete POMDPs, to work in continuous state spaces by representing the observation, transition, and reward models using Gaussian mixtures, and the beliefs using Gaussian mixtures or particle sets. With these representations, the integrals that appear in the Bellman backup can be computed in closed form and, therefore, the algorithm is computationally feasible. Finally, we further extend it to deal with continuous action and observation sets by designing effective sampling approaches.
['Josep M. Porta', 'Nikos A. Vlassis', 'Matthijs T. J. Spaan', 'Pascal Poupart']
Point-Based Value Iteration for Continuous POMDPs
262,642
Objective: Text and data mining play an important role in obtaining insights from Health and Hospital Information Systems. This paper presents a text mining system for detecting admissions marked as positive for several diseases: Lung Cancer, Breast Cancer, Colon Cancer, Secondary Malignant Neoplasm of Respiratory and Digestive Organs, Multiple Myeloma and Malignant Plasma Cell Neoplasms, Pneumonia, and Pulmonary Embolism. We specifically examine the effect of linking multiple data sources on text classification performance. Methods: Support Vector Machine classifiers are built for eight data source combinations, and evaluated using the metrics of Precision, Recall and F-Score. Sub-sampling techniques are used to address unbalanced datasets of medical records. We use radiology reports as an initial data source and add other sources, such as pathology reports and patient and hospital admission data, in order to assess the research question regarding the impact of the value of multiple data sources. Statistical significance is measured using the Wilcoxon signed-rank test. A second set of experiments explores aspects of the system in greater depth, focusing on Lung Cancer. We explore the impact of feature selection; analyse the learning curve; examine the effect of restricting admissions to only those containing reports from all data sources; and examine the impact of reducing the sub-sampling. These experiments provide better understanding of how to best apply text classification in the context of imbalanced data of variable completeness. Results: Radiology questions plus patient and hospital admission data contribute valuable information for detecting most of the diseases, significantly improving performance when added to radiology reports alone or to the combination of radiology and pathology reports. Conclusion: Overall, linking data sources significantly improved classification performance for all the diseases examined.
['Simon Kocbek', 'Lawrence Cavedon', 'David Martinez', 'Christopher Bain', 'Chris Mac Manus', 'Gholamreza Haffari', 'Ingrid Zukerman', 'Karin Verspoor']
Text mining electronic hospital records to automatically classify admissions against disease: Measuring the impact of linking data sources
906,929
Brain-computer interfaces represent a range of acknowledged technologies that translate brain activity into computer commands. The aim of our research is to develop and evaluate a BCI control application for certain assistive technologies that can be used for remote telepresence or remote driving. The communication channel to the target device is based on steady-state visual evoked potentials. In order to test the control application, a mobile robotic car (MRC) was introduced and a four-class BCI graphical user interface with live video feedback and stimulation boxes on the same screen for piloting the MRC was designed. For the purpose of evaluating a potential real-life scenario for such assistive technology, we present a study where 61 subjects steered the MRC through a predetermined route. All 61 subjects were able to control the MRC and finish the experiment (mean time 207.08 s, SD 50.25) with a mean (SD) accuracy and ITR of 93.03% (5.73) and 14.07 bits/min (4.44), respectively. The results show that our proposed SSVEP-based BCI control application is suitable for mobile robots with a shared-control approach. We also did not observe any negative influence of the simultaneous live video feedback and SSVEP stimulation on the performance of the BCI system.
['Piotr Stawicki', 'Felix Gembler', 'Ivan Volosyak']
Driving a Semiautonomous Mobile Robotic Car Controlled by an SSVEP-Based BCI
862,250
In order to investigate the behavior of solvation, and particularly to quantify solvent effects on the properties of solutes on a molecular basis, several simulation approaches are proposed, which enable the modeling and rigorous examination of the thermodynamic properties of the cluster-ion system. A typical simulation to calculate these nucleation properties is time-consuming due to the large amount of data and the phase-space sampling. Although cluster-based simulation brings in parallel computing and shortens the simulation time, some time is still wasted. This is mainly due to the waiting time of a new job submitted to the end of the queue after certain parameters are modified. In order to further reduce the execution time, we present in this paper a novel simulation approach that incorporates the DA-TC execution model. Experiments show that the DA-TC mechanism can further improve application execution performance and increase resource utilization in a multicluster grid environment.
['Zhifeng Yun', 'Samuel J. Keasler', 'Maoyuan Xie', 'Zhou Lei', 'Bin Chen', 'Gabrielle Allen']
An innovative simulation approach for water mediated attraction based on grid computing
429,550
Nowadays, embedded systems have been widely used in all types of application areas, some of which belong to the safety and reliability critical domains. The functional correctness and design robustness of the embedded systems involved in such domains are crucial for the safety of personal/enterprise property or even human lives. Therefore, a holistic design procedure that considers all the important design concerns is essential. In this article, we approach embedded systems design from an integral perspective. We consider not only the classic real-time and quality of service requirements, but also the emerging security and power efficiency demands. Modern embedded systems are no longer developed for a fixed purpose, but instead designed for undertaking various processing requests. This leads to the concept of multimode embedded systems, in which the number and nature of active tasks change during runtime. Under dynamic situations, providing high performance along with various design concerns becomes a really difficult problem. Therefore, we propose a novel power-aware secure embedded systems design framework that efficiently solves the problem of runtime quality optimization with security and power constraints. The efficiency of our proposed techniques is evaluated in extensive experiments.
['Ke Jiang', 'Petru Eles', 'Zebo Peng']
Power-Aware Design Techniques of Secure Multimode Embedded Systems
631,519
We investigate an efficient strategy to collect false positives from very large training sets in the context of object detection. Our approach scales up the standard bootstrapping procedure by using a hierarchical decomposition of an image collection which reflects the statistical regularity of the detector's responses. Based on that decomposition, our procedure uses a Monte Carlo Tree Search to prioritize the sampling toward sub-families of images which have been observed to be rich in false positives, while maintaining a fraction of the sampling toward unexplored sub-families of images. The resulting procedure increases substantially the proportion of false positive samples among the visited ones compared to a naive uniform sampling. We apply experimentally this new procedure to face detection with a collection of 100,000 background images and to pedestrian detection with 32,000 images. We show that for two standard detectors, the proposed strategy cuts the number of images to visit by half to obtain the same amount of false positives and the same final performance.
['Olivier Canévet', 'F. Fleuret']
Large Scale Hard Sample Mining with Monte Carlo Tree Search
799,129
This paper addresses the problem of stability analysis of a class of linear systems with time-varying delays. We develop conditions for robust stability that can be tested using semidefinite programming using the sum of squares decomposition of multivariate polynomials and the Lyapunov-Krasovskii theorem. We show how appropriate Lyapunov-Krasovskii functionals can be constructed algorithmically to prove stability of linear systems with a variation in delay, by using bounds on the size and rate of change of the delay. We also explore the quenching phenomenon, a term used to describe the difference in behaviour between a system with fixed delay and one whose delay varies with time. Numerical examples illustrate changes in the stability window as a function of the bound on the rate of change of delay.
['Antonis Papachristodoulou', 'Matthew M. Peet', 'Silviu Iulian Niculescu']
Stability analysis of linear systems with time-varying delays: Delay uncertainty and quenching
389,789
Testing of Multi-Tasking Real-Time Systems with Critical Sections.
['Anders Pettersson', 'Henrik Thane']
Testing of Multi-Tasking Real-Time Systems with Critical Sections.
974,365
Construction of the intrinsic Delaunay triangulation (iDt for short) on 2-manifolds has attracted considerable attention recently, due to its theoretical contributions to surface reconstruction. In this paper we analyze a 2-manifold sampling criterion based on the iDt in a combinatorial way. The main contribution of this work is to establish theoretical bounds on iDt mesh quality based on this sampling criterion. In order to construct the iDt mesh from sample points, we propose an approximate iDt mesh reconstruction algorithm using an edge propagation scheme. In real-world point clouds, holes or under-sampled regions frequently exist. Based on the sampling criterion, an upsampling scheme and a hole filling algorithm are presented in this paper. Finally, examples are presented, illustrating the effectiveness of our proposed algorithms.
['Wen-Yong Gong', 'Yong-Jin Liu', 'Kai Tang', 'Tieru Wu']
2-Manifold Surface Sampling and Quality Estimation of Reconstructed Meshes
264,567
We present IQBEES, a novel prototype system for similar entity search by example on semantic knowledge graphs that is based on the concept of maximal aspects. The system makes it possible for the user to provide positive and negative relevance feedback to iteratively refine the information need. The maximal aspects model supports diversity-aware results.
['Marcin Sydow', 'G. Sobczak', 'Ralf Schenkel', 'Krzysztof Mioduszewski']
iQbees: Interactive Query-by-example Entity Search in semantic knowledge graphs
895,291
Input from the lower body in human-computer interfaces can be beneficial, enjoyable and even entertaining when users are expected to perform tasks simultaneously. Users can navigate a virtual (game) world or even an (empirical) dataset while having their hands free to issue commands. We compared the Wii Balance Board to a hand-held Wiimote for navigating a maze and found that users completed this task slower with the Balance Board. However, the Balance Board was considered more intuitive, easy to learn and ‘much fun’.
['Wim Fikkert', 'Niek Hoeijmakers', 'Paul E. van der Vet', 'Anton Nijholt']
Navigating a Maze with Balance Board and Wiimote
354,081
A new dual-loop digital phase-locked loop (DPLL) architecture is presented. It employs a stochastic time-to-digital converter (STDC) and high-frequency delta-sigma dithering to achieve wide PLL bandwidth and low jitter at the same time. The STDC exploits the stochastic properties of a set of latches to achieve high resolution. A prototype DPLL test chip has been fabricated in a 0.13-µm CMOS process, features a 0.7-1.7-GHz oscillator tuning range and a 6.9-ps rms jitter, and consumes 17 mW under a 1.2-V supply while operating at 1.2 GHz.
['Volodymyr Kratyuk', 'Pavan Kumar Hanumolu', 'Kerem Ok', 'Un-Ku Moon', 'Kartikeya Mayaram']
A Digital PLL With a Stochastic Time-to-Digital Converter
329,535
This paper presents a general approach to sequential blind extraction of instantaneously mixed sources for several major ill-conditioned cases as well as the regular case of full column rank mixing matrices. Four ill-conditioned cases are considered: The mixing matrix is square but singular; the number of sensors is less than that of sources; the number of sensors is larger than that of sources, but the column rank of the mixing matrix is deficient; and the number of sources is unknown and the column rank of the mixing matrix is deficient. First, a solvability analysis is presented for a general case. A necessary and sufficient condition for extractability is derived. A sequential blind extraction approach is then proposed to extract all theoretically separable sources. Next, a principle and a cost function based on fourth-order cumulants are presented for blind source extraction. By minimizing the cost function under a nonsingularity constraint of the extraction matrix, all theoretically separable sources can be extracted sequentially. Finally, simulation results are presented to demonstrate the validity and performance of the blind source extraction approach.
['Yuanqing Li', 'Jun Wang']
Sequential blind extraction of instantaneously mixed sources
246,804
We summarize several recent results about hybrid automata. Our goal is to demonstrate that concepts from the theory of discrete concurrent systems can give insights into partly continuous systems, and that methods for the verification of finite-state systems can be used to analyze certain systems with uncountable state spaces.
['Thomas A. Henzinger']
The theory of hybrid automata
295,271
Vendor selection is of strategic importance in any industry. This paper uses fuzzy tools for vendor selection to improve decision-making through a more systematic and logical approach. Expert opinion is portrayed by allocating different fuzzy weights to the linguistic data in the form of triangular fuzzy numbers (TFN). The scores are evaluated using fuzzy arithmetic operations, and the index of optimism is used to evaluate the vendors. Also, to bring consistency to the evaluation, results are compared with the fuzzy technique for order preference by similarity to an ideal solution (TOPSIS), another well-known multicriteria decision-making (MCDM) approach affected by uncertainty. This concept, if adopted, can be used in any industry where vendor selection is based on a set of criteria and linguistic judgement variables.
['Rakesh Verma', 'Saroj Koul', 'Kannan Govindan']
Vendor selection and uncertainty
203,330
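To make the mechanics concrete, here is a minimal sketch of scoring vendors with triangular fuzzy numbers (l, m, u) and an index of optimism. The criteria, weights, ratings, and the exact defuzzification formula below are illustrative assumptions, not the paper's actual model.

```python
# Sketch: weighted aggregation of triangular fuzzy numbers (TFNs) and
# defuzzification via an index of optimism alpha in [0, 1].
# All numeric values below are hypothetical examples.

def tfn_add(a, b):
    """Component-wise addition of two TFNs (l, m, u)."""
    return tuple(x + y for x, y in zip(a, b))

def tfn_scale(a, w):
    """Scale a TFN by a crisp (non-fuzzy) weight w >= 0."""
    return tuple(w * x for x in a)

def crisp_score(tfn, alpha):
    """Defuzzify with the index of optimism: blend the pessimistic
    lower bound l and the optimistic upper bound u."""
    l, m, u = tfn
    return alpha * u + (1 - alpha) * l

# Hypothetical vendor: rated (5, 7, 9) on quality with weight 0.6
# and (3, 5, 7) on delivery with weight 0.4.
total = tfn_add(tfn_scale((5, 7, 9), 0.6), tfn_scale((3, 5, 7), 0.4))
print(round(crisp_score(total, alpha=0.5), 2))  # 6.2
```

With a neutral attitude (alpha = 0.5) the score is the midpoint of the aggregated TFN's support; raising alpha models a more optimistic decision-maker.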
This article presents a new incremental learning algorithm for classification tasks, called Net Lines, which is well adapted for both binary and real-valued input patterns. It generates small, compact feedforward neural networks with one hidden layer of binary units and binary output units. A convergence theorem ensures that solutions with a finite number of hidden units exist for both binary and real-valued input patterns. An implementation for problems with more than two classes, valid for any binary classifier, is proposed. The generalization error and the size of the resulting networks are compared to the best published results on well-known classification benchmarks. Early stopping is shown to decrease overfitting, without improving the generalization performance.
['J. Manuel Moreno', 'Mirta B. Gordon']
Efficient adaptive learning for classification tasks with binary units
247,514
In this paper, we report on nonlinearity compensation for a solid-state fiber Bragg grating (FBG) sensor interrogation system based on an arrayed waveguide grating (AWG) device. A lookup table with calibration data is used to improve system linearity. A reduction in the absolute value of the measurement error from 120 µstrain or 15 °C to 4.8 µstrain or 0.6 °C, respectively, is experimentally demonstrated.
['Grzegorz Fusiek', 'Pawel Niewczas', 'Andrew J. Willshire', 'J.R. McDonald']
Nonlinearity Compensation of the Fiber Bragg Grating Interrogation System Based on an Arrayed Waveguide Grating
16,719
In this paper, a dynamic offloading model together with a cloud-friendly depth estimation algorithm is proposed to minimize the energy consumption of mobile devices by exploiting cloud computational resources for 2D-to-3D conversion. The cloud-friendly depth estimation algorithm partitions an input image into several parts, classifies each part to a specific type, and applies a specific conversion algorithm to each type to generate depth maps, which facilitates allocating the partitions between the mobile device and the cloud dynamically. Then, a dynamic offloading model is proposed for mobile energy minimization by dynamically allocating the partitions to be processed between the cloud and the mobile device. The complexity of depth estimation, the processing capability of the cloud, and the power consumption of the mobile device are considered jointly in the model to provide an optimized solution. Several simulations based on parameters of real mobile devices demonstrate that our method can save an average of almost 21.17% of total energy on different mobile devices and an average of 17.09% of total energy under different transmitting rates, compared with existing algorithms for 2D-to-3D conversion.
['Qian Li', 'Xin Jin', 'Zhanqi Liu', 'Qionghai Dai']
Dynamic cloud offloading for 2D-to-3D conversion
986,306
Image convolution is an integral operation in the field of digital image processing. Any operation to be performed on images, whether edge detection, image smoothing, image blurring, etc., involves the process of convolution. Generally, in image processing, convolution is done using a mask known as the kernel. As the values of the kernel change, the operation on the image also changes; the kernel is different for each operation. In the conventional approach to image convolution, the number of multiplications is very high, and thereby the time complexity is also high. In this paper, a new and efficient method is proposed to perform convolution on images with lower time complexity. We exploit the sub-matrix structure of the kernel matrix and systematically assign the values to a new H matrix. Since the resulting H matrix is a sparse matrix, the output is computed using the Sparse Matrix Vector Multiplication (SMVM) technique. The Compressed Row Storage (CSR) format is the format used here for SMVM. Using the CSR format with SMVM, convolution achieves 3.4 times and 2.4 times speedups over conventional methods for image smoothing and edge detection operations, respectively.
['B Bipin', 'Jyothisha J Nair']
Image convolution optimization using sparse matrix vector multiplication technique
933,868
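The core primitive of the approach above, sparse matrix-vector multiplication in CSR format, can be sketched in a few lines. The example matrix below is illustrative; it is not the paper's H matrix construction.

```python
# Minimal Sparse Matrix-Vector Multiplication (SpMV) in Compressed
# Row Storage (CSR) format: only nonzeros are stored, so each output
# entry costs one multiply-add per nonzero in its row.

def csr_spmv(values, col_idx, row_ptr, x):
    """Multiply a CSR-encoded sparse matrix by a dense vector x."""
    y = [0.0] * (len(row_ptr) - 1)
    for row in range(len(row_ptr) - 1):
        # row_ptr[row]..row_ptr[row+1] indexes this row's nonzeros
        for k in range(row_ptr[row], row_ptr[row + 1]):
            y[row] += values[k] * x[col_idx[k]]
    return y

# CSR encoding of the example sparse matrix
# [[1, 0, 2],
#  [0, 3, 0],
#  [4, 0, 5]]
values  = [1.0, 2.0, 3.0, 4.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]

print(csr_spmv(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

Because the multiplication skips zero entries entirely, the work is proportional to the number of nonzeros rather than to the full matrix size, which is where the reported speedups come from.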
Analog circuit observer blocks (ACOBs) are a low-overhead design-for-test technique for analog and mixed-signal circuits that aims to reduce the need for precision in test. We present a new ACOB scheme, pseudoduplication, for single-ended switched capacitor filters. Fault detection is based on a data duplication code, which is achieved without actual hardware duplication. Simulation is used to demonstrate how faults are detected. With layouts and simulation, we demonstrate that the overhead is not significant.
['Bapiraju Vinnakota', 'Ramesh Harjani', 'Wooyoung Choi']
Pseudoduplication-an ACOB technique for single-ended circuits
301,872
In recent years, characterized by technological innovation and the digital revolution, the field of media has become increasingly important. The transfer, exchange, and duplication of multimedia data have become major concerns of researchers. Consequently, protecting copyrights and ensuring service safety are needed. Cryptography has a specific role: to protect secret files against unauthorized access. In this paper, a hierarchical cryptosystem algorithm based on Logistic Map chaotic systems is proposed. The results show that the proposed method improves the security of the image. Experimental results on a database of 200 medical images show that the proposed method gives significantly better results.
['Med Karim Abdmouleh', 'Ali Khalfallah', 'Med Salim Bouhlel']
A new Watermarking Technique for Medical Image using Hierarchical Encryption
640,278
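As a minimal sketch of the chaotic ingredient, the logistic map x_{n+1} = r·x_n·(1 − x_n) can drive a keystream for symmetric encryption. The quantization to bytes and the XOR cipher below are common illustrative choices; the paper's hierarchical scheme and exact key derivation are not reproduced here.

```python
# Sketch: logistic-map keystream for image encryption. The parameters
# (x0, r) act as the secret key; r near 4 keeps the map in its
# chaotic regime. Values below are hypothetical examples.

def logistic_keystream(x0, r, n):
    """Return n pseudo-random bytes derived from the logistic map orbit."""
    stream, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)            # iterate the chaotic map
        stream.append(int(x * 256) % 256)  # quantize the orbit to a byte
    return stream

def xor_cipher(data, key):
    """Symmetric XOR cipher: the same call encrypts and decrypts."""
    return bytes(b ^ k for b, k in zip(data, key))

key = logistic_keystream(x0=0.3141, r=3.9999, n=16)
cipher = xor_cipher(b"medical image px", key)
assert xor_cipher(cipher, key) == b"medical image px"  # decryption inverts
```

Because the map is extremely sensitive to x0 and r, even a tiny error in the key produces an entirely different keystream, which is the property such schemes rely on.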
The paper gives a survey on the new Expectation-based Multifocal Saccadic vision (EMS-Vision) system for autonomous vehicle guidance developed at the Universität der Bundeswehr München (UBM). EMS-Vision is the third generation dynamic vision system following the 4-D approach. Its core element is a new camera arrangement, mounted on a high bandwidth pan-tilt head for active gaze control. Central knowledge representation and a hierarchical system architecture allow efficient activation and control of behavioral capabilities for perception and action. The system has been implemented on commercial off-the-shelf (COTS) hardware components in both UBM test vehicles VaMoRs and VaMP. Results from autonomous turnoff maneuvers, performed on army proving grounds, are discussed.
['Rudolf Gregor', 'Michael Lützeler', 'Martin Pellkofer', 'Karl-Heinz Siedersberger', 'Ernst D. Dickmanns']
EMS-vision: A perceptual system for autonomous vehicles
892,324
The paper considers the output tracking problem for nonlinear systems whose performance output is also a flat output of the system itself. A desired output signal is sought on the actual performance output by using a feedforward inverse input that is periodically updated with discrete-time feedback of the sampled state of the system. The proposed method is based on an iterative output replanning that uses the desired output trajectory and the sampled state to replan an output trajectory whose inverse input helps in reducing the tracking error. This iterative replanning exploits the Hermite interpolating polynomials to achieve an overall arbitrarily smooth input and a tracking error that can be made arbitrarily small if the state sampling period is sufficiently small and mild assumptions are considered. Some simulation results are presented for the cases of a unicycle and a one-trailer system affected by additive noise.
['Luca Consolini', 'Gabriele Lini', 'Aurelio Piazzi']
Iterative output replanning for flat systems affected by additive noise
220,169
Proper testing is an essential and critical part of any development effort. However, software testing is a complex undertaking, especially in the midst of today's security threats. Hackers, social engineering scams, and unaware users are just a few potential threats that developers must consider not only during development, but more importantly during testing. There are significant reputation and financial losses related to security aspects that could have been addressed during requirements specification. While a variety of approaches to security requirements specification have been proposed, there is a tangible lack in the support that they offer during testing. In this paper we describe the tool support of a new security requirements engineering technique called SURE (Secure and Usable Requirements Engineering). ASSURE (Automated Support for Secure and Usable Requirements Engineering) is a system developed to aid in the mapping of security requirements into testing artifacts. This support goes beyond mapping and also aids in the management of users and projects.
['Jose Romero-Mariona', 'Hadar Ziv', 'Debra J. Richardson']
ASSURE: automated support for secure and usable requirements engineering
194,867
A new ABFT architecture is proposed to tolerate multiple soft errors with low overheads. It memorizes operands on a stack upon error detection and corrects errors by recomputing. This allows uninterrupted input data streams to be processed without data loss.
['Yves Blaquiere', 'Gabriel Gagné', 'Yvon Savaria', 'Claude Évéquoz']
Cost analysis of a new algorithmic-based soft-error tolerant architecture
476,718
Resistive Random Access Memories (RRAMs) have gained high attention for a variety of promising applications, especially the design of non-volatile in-memory computing devices. In this paper, we present an approach for the synthesis of RRAM-based logic circuits using the recently proposed Majority-Inverter Graphs (MIGs). We propose a bi-objective algorithm to optimize MIGs with respect to the number of required RRAMs and computational steps in both MAJ-based and IMP-based realizations. Since the number of computational steps is recognized as the main drawback of RRAM-based logic, we also present an effective algorithm to reduce the number of required steps. Experimental results show that the proposed algorithms achieve higher efficiency compared to general-purpose MIG optimization algorithms, either in finding a good trade-off between both cost metrics or in reducing the number of steps. In comparison with RRAM-based circuits implemented by state-of-the-art approaches using other well-known data structures, the number of required computational steps obtained by our proposed MIG-oriented synthesis approach for large benchmark circuits is reduced by up to a factor of 26. This strong gain comes from the use of MIGs, which provide an efficient and intrinsic representation for RRAM-based computing—particularly in MAJ-based realizations—and from the proposed optimization techniques.
['Saeideh Shirinzadeh', 'Mathias Soeken', 'Pierre-Emmanuel Gaillardon', 'Rolf Drechsler']
Fast logic synthesis for RRAM-based in-memory computing using Majority-Inverter Graphs
648,920
MPI is a commonly used standard in the development of scientific applications. It focuses on inter-language operability and is not very well object-oriented. The paper proposes a general pattern enabling the design of distributed, object-oriented applications. It also presents sample implementations and performance tests.
['Karol Banczyk', 'Tomasz Boiński', 'Henryk Krawczyk']
Object serialization and remote exception pattern for distributed C++/MPI application
427,628
Time-series classification is a widely examined data mining task with various scientific and industrial applications. Recent research in this domain has shown that the simple nearest-neighbor classifier using Dynamic Time Warping (DTW) as distance measure performs exceptionally well, in most cases outperforming more advanced classification algorithms. Instance selection is a commonly applied approach for improving the efficiency of the nearest-neighbor classifier with respect to classification time. This approach reduces the size of the training set by selecting the best representative instances and uses only them during classification of new instances. In this paper, we introduce a novel instance selection method that exploits the hubness phenomenon in time-series data, which states that a few instances tend to occur much more frequently as nearest neighbors than the remaining instances. Based on hubness, we propose a framework for score-based instance selection, which is combined with a principled approach of selecting instances that optimize the coverage of training data. We discuss the theoretical considerations of casting the instance selection problem as a graph-coverage problem and analyze the resulting complexity. We experimentally compare the proposed method, denoted as INSIGHT, against FastAWARD, a state-of-the-art instance selection method for time series. Our results indicate substantial improvements in terms of classification accuracy and drastic reduction (orders of magnitude) in execution times.
['Krisztian Buza', 'Alexandros Nanopoulos', 'Lars Schmidt-Thieme']
INSIGHT: efficient and effective instance selection for time-series classification
189,888
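The distance measure underlying the nearest-neighbor classifier above can be sketched with the textbook O(n·m) dynamic programming recurrence for DTW. This is the unconstrained variant; a production system would typically add speedups such as warping windows or lower-bound pruning, which are omitted here.

```python
# Minimal Dynamic Time Warping (DTW) distance between two 1-D
# sequences, using the standard dynamic programming recurrence.

def dtw(a, b):
    """Unconstrained DTW distance between sequences a and b."""
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # best of: diagonal match, step in a, step in b
            cost[i][j] = d + min(cost[i - 1][j - 1],
                                 cost[i - 1][j],
                                 cost[i][j - 1])
    return cost[n][m]

print(dtw([1, 2, 3], [1, 2, 2, 3]))  # 0.0: the repeated point warps for free
```

A 1-NN classifier then simply assigns a query series the label of the training series with the smallest DTW distance, which is exactly where instance selection pays off: fewer training instances means fewer DTW computations per query.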
In the present work, the problem of reconstructing locations of events in an arbitrary scene, or "world" (for example, a 3D scene in real applications), from an over-complete set of measurements, each typically of lower dimension (such as multiple 2D images obtained from different cameras), is addressed. A non-parametric approach based on Principal Component Analysis (PCA) has been developed, where a training set of points with known world coordinates is used to reconstruct the forward and inverse transformation matrices from the scene to the measurement space. The technique does not use any a priori information about the data acquisition geometry or its parameters, and it can also identify degenerate cases where either the training or the measurement sets are insufficient for robust reconstruction. As a first illustration, a simple test set of objects and their 2D images is presented to demonstrate the validity of the method. The next, realistic application involves a data set of explosions recorded from two high-speed cameras. The method successfully reconstructs the positions of the explosions, and in addition their intensities are quantified.
['George Petkov', 'Nikolay Mladenov', 'Stiliyan Kalitzin']
Integral single-event scene reconstruction from general over-complete sets of measurements with application to explosions localization and charge estimation
533,196
Gesture Recognition Algorithm using Morphological Analysis
['Tae-Eun Kim']
Gesture Recognition Algorithm using Morphological Analysis
560,751
Behavior-based approaches to robot control are extremely popular in mobile robotics, but still rarely used for manipulation. We propose a behavior-based system that performs manipulation tasks using visual feedback. The distinctive points of our proposal are: (i) visual behavior is implemented using a new camera-based approach; (ii) reactive fuzzy rules are used to arbitrate behavior; and (iii) the outputs of concurrent behavior are fused using fuzzy logic. We show experiments on a real arm performing a pick-and-place task that illustrate our approach and demonstrate its advantages over current approaches.
['Zbigniew Wasik', 'A. Safiotti']
A fuzzy behavior-based control system for manipulation
226,880
Special issue of PSAM12 Conference selected papers
['Curtis Smith', 'Mohammad Pourgol-Mohammad']
Special issue of PSAM12 Conference selected papers
774,729
We study the problem of dynamic resource allocation of a GPS server with two traffic classes when the leaky bucket scheme is employed as a traffic policing mechanism. Three popular input traffic models - independent Poisson arrival, autoregressive model, and partially observed traffic (hidden Markov model) - are investigated. Theoretically, optimal control can be obtained by a basic dynamic programming algorithm. However, such a solution is computationally prohibitive due to the curse of dimensionality. Instead, we propose several heuristic policies with improvements using rollout, parallel rollout, and hindsight optimization techniques under the aforementioned traffic models and show that these techniques can significantly reduce the penalty associated with delays and dropped packets.
['P. Tinnakomsrisuphap', 'Sarut Vanichpun', 'Richard J. La']
Dynamic resource allocation of GPS queues with leaky buckets
502,357
The paper investigates a stochastic production scheduling problem with unrelated parallel machines. A closed-loop scheduling technique is presented that on-line controls the production process. To achieve this, the scheduling problem is reformulated as a special Markov Decision Process. A near-optimal control policy of the resulted MDP is calculated in a homogeneous multi-agent system. Each agent applies a trial-based approximate dynamic programming method. Different cooperation techniques to distribute the value function computation among the agents are described. Finally, some benchmark experimental results are shown.
['Balázs Csanád Csáji', 'László Monostori']
Stochastic reactive production scheduling by multi-agent based asynchronous approximate dynamic programming
5,176
We propose a methodology for designing the local mapping rule for fully synchronized but energy-limited sensors in a distributed detection system, where sensors communicate with the fusion center over multiaccess channels. Using the proposed methodology, we come up with the modified detect-and-forward scheme and the modified amplify-and-forward scheme. The performance of the two schemes is analyzed. We show that optimizing the local mapping rule can lead to a larger error exponent under total power constraint.
['Feng Li', 'Jamie S. Evans', 'Subhrakanti Dey']
Design of Distributed Detection Schemes for Multiaccess Channels
23,548
Engaging Citizens in Policy Issues: Multidimensional Approach, Evidence and Lessons Learned
['Elena Sánchez-Nielsen', 'Deirdre Lee', 'Eleni Panopoulou', 'Simon Delakorda', 'Gyula Takács']
Engaging Citizens in Policy Issues: Multidimensional Approach, Evidence and Lessons Learned
461,438
In this correspondence, uniform decomposition procedures are extended to incompletely specified sequential machines. Any given Moore sequential machine is realized by interconnecting copies of a universal two-state component machine. The information processed by each component machine is represented by its corresponding partial mapping. When the sequential machine is incompletely specified, a reduction in the number of component machine copies is possible. The reduction is found by a uniform-cost search algorithm that finds a minimal cover on the set of partial mappings.
['George Williams']
Uniform Decomposition of Incompletely Specified Sequential Machines
466,586
Ionospheric effects are a potential error source for the estimation of surface quantities such as sea surface salinity, using L-band radiometry. This study is carried out in the context of the SMOS future space mission, which uses an interferometric radiometer. We first describe the way the Faraday rotation angle due to electron content along the observing path varies across the two-dimensional field of view. Over open ocean surfaces, we show that it is possible to retrieve the total electron content (TEC) at nadir from radiometric data considered over the bulk of the field of view, with an accuracy better than 0.5 TEC units, compatible with requirements for surface salinity observations. Using a full-polarimetric design improves the accuracy on the estimated TEC value. The random uncertainty on retrieved salinity is decreased by about 15% with respect to results obtained when using only data for the first Stokes parameter, which is immune to Faraday rotation. Similarly, TEC values over land surfaces may be retrieved with the accuracy required in the context of soil moisture measurements. Finally, direct TEC estimation provides information which should allow to correct for ionospheric attenuation as well.
['Philippe Waldteufel', 'Nicolas Floury', 'Emmanuel P. Dinnat', 'Gérard Caudal']
Ionospheric effects for L-band 2-D interferometric radiometry
339,945
It is important to assess risk in chemical plants. HAZOP is widely used in risk assessment to identify hazards, and automatic analysis systems have been developed to perform HAZOP effectively. In this study, a semi-automatic analysis system was developed that uses the Signed Directed Graph (SDG) to model the propagation of deviations in equipment behavior. The versatility of the analysis is increased by adding equipment in accordance with the deviation-propagation rules. Our developed HAZOP analysis system is applied to a chemical process, and future work for this study is discussed.
['Ken Isshiki', 'Yoshiomi Munesawa', 'Atsuko Nakai', 'Kazuhiko Suzuki']
HAZOP analysis system compliant with equipment models based on SDG
545,837
The performance of each datanode in a heterogeneous Hadoop cluster differs, as does the number of slots available to execute tasks simultaneously. For this reason, Hadoop is susceptible to replica placement and data replication problems, which can degrade its performance. In this paper, we summarize existing research on improving data locality and design a data replication method to solve these replication and allocation problems.
['Daeshin Park', 'Kiwook Kang', 'Jiman Hong', 'Yookun Cho']
An efficient Hadoop data replication method design for heterogeneous clusters
861,400
Seidel's switching of a vertex in a given graph results in making the vertex adjacent to precisely those vertices it was nonadjacent to before, while keeping the rest of the graph unchanged. Two graphs are called switching equivalent if one can be transformed into the other by a sequence of Seidel's switchings. We consider the computational complexity of deciding if an input graph can be switched into a graph having a desired graph property. Among other results we show that switching to a regular graph is NP-complete. The proof is based on an NP-complete variant of hypergraph bicoloring that we find interesting in its own right.
['Jan Kratochvíl']
Complexity of hypergraph coloring and Seidel's switching
886,727
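The switching operation itself is easy to make precise. Below is a small sketch on a graph represented as a set of frozenset edges; the example graph is illustrative only.

```python
# Seidel's switching at vertex v: v becomes adjacent to exactly the
# vertices it was non-adjacent to before; edges not incident to v
# are unchanged. Switching twice at the same vertex is the identity.

def seidel_switch(vertices, edges, v):
    """Return the edge set after switching at vertex v."""
    switched = set()
    for u in vertices:
        if u == v:
            continue
        e = frozenset((u, v))
        if e not in edges:
            switched.add(e)   # previously non-adjacent: edge appears
        # previously adjacent edges incident to v are dropped
    for e in edges:
        if v not in e:
            switched.add(e)   # edges away from v are untouched
    return switched

V = {1, 2, 3, 4}
E = {frozenset((1, 2)), frozenset((2, 3))}
E2 = seidel_switch(V, E, 2)
# vertex 2 was adjacent to {1, 3}; afterwards it is adjacent only to {4}
assert E2 == {frozenset((2, 4))}
assert seidel_switch(V, E2, 2) == E  # switching twice restores the graph
```

Switching equivalence is then generated by repeating this operation at arbitrary vertices; the hardness results above concern reachability of a target property under such sequences.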
Combining Semantic Lifting and Ad-hoc Contextual Analysis in a Data Loss Scenario
['Antonia Azzini', 'Ernesto Damiani', 'Francesco Zavatarelli']
Combining Semantic Lifting and Ad-hoc Contextual Analysis in a Data Loss Scenario
606,316
This paper presents a NoC topology exploration based on a real-world mobile multimedia application example. An abstract simulation model is used for the exploration. The input parameters of the model and the evaluation of the network topologies are based on synthesized router architectures that enable us to investigate the trade-off between area and maximum clock frequency. We consider deadlock-related issues like routing cycles as well as message dependencies that are neglected by many other topology exploration publications. In our simulations we show that an enhanced unidirectional ring topology shows the best performance regarding latency and chip area among the examined topologies.
['Andreas Lankes', 'Andreas Herkersdorf', 'Sören Sonntag', 'Helmut Reinig']
NoC topology exploration for mobile multimedia applications
479,591
Matrix Factorization for Near Real-time Geolocation Prediction in Twitter Stream.
['Nghia Duong-Trung', 'Nicolas Schilling', 'Lucas Drumond', 'Lars Schmidt-Thieme']
Matrix Factorization for Near Real-time Geolocation Prediction in Twitter Stream.
991,051
The Tall Building Push-Over Analysis program (TBPOA) puts forward precise, concise, and practical methods of 3D push-over analysis and static elasto-plastic analysis for reinforced concrete tall buildings. Column and beam elements in TBPOA are modeled by nonlinear bar units, while wall elements are modeled by nonlinear bar units that account for shearing deformation. Based on the plane cross-section assumption, the surface integral of the bar unit is substituted with a curvilinear integral along the edge. Besides, a quadratic term is introduced into the geometric equation so that the tall building's P-Δ effect is precisely captured. The wall element model is obtained by adding a nonlinear shear spring to the bar element, which makes it possible to calculate axial, bending, and shearing deformation together in the program. The Newton-Raphson iteration method and the arc-length method are introduced to solve the nonlinear equations through the softening phase of concrete.
['Jie-Jiang Zhu', 'Yang Lee']
Push-Over Analysis Programming for Reinforced Concrete Structure
355,284